
Name/term for mechanisms by which the relative size/number of cells of some tissue/organ is preserved


The cells of an organ or tissue divide and also die (apoptosis). But this happens in a somewhat controlled manner, so that the total size of the organ is approximately preserved or the total number of cells stays approximately the same. As far as I understand, this is somehow a collective process, because cells on one side of the tissue should somehow coordinate the division/apoptosis processes with the cells on the far side. I have read in one book about cells that such controlling mechanisms are poorly understood even today. Of course, such mechanisms work only approximately (or, to be more precise, they are more complex than just static preservation of the size/number); e.g., the number of adipose cells can increase with time.

But anyway, my question is: what is the name/term for such controlling mechanisms, and what are other important keywords/terms I can use to search for more research papers on this topic?

I have a specific interest in the control of those processes, e.g., with the aim of limiting them for adipose cells and encouraging them for muscle cells.

"cell cycle control" and Hippo-pathways mechanisms can be the answers, but I am still searching for the matter.


'Eutely' is the term used for organisms with a fixed number of somatic cells. I'm not aware of any term for sub-organismic structures.


You are referring to organ “scaling” and “allometry”.


Plant Growth: Characteristics, Development, Phases and Factors

Growth is the manifestation of life. All organisms, the simplest as well as the most intricate, are slowly changing the whole time they are alive. They transform material into more of themselves.

From such ingredients as minerals, proteins, carbohydrates, fats, vitamins, hormones etc., organisms form additional protoplasm. The formation of protoplasm is called assimilation.

A large part of the food which a plant manufactures is used as a source of energy. Food may be consumed soon after it is produced, or it may be stored and used as a source of energy for the plant or its offspring weeks, months, or even years later.

A healthy plant, however, manufactures more food than is necessary to maintain the activities of its living substance, and the surplus may be built, more or less permanently, into its tissues, producing new protoplasm and new cell walls and thus promoting the growth of the plant body. Growth represents the excess of constructive over destructive metabolism.

Growth involves an irreversible increase in size which is usually, but not necessarily, accompanied by an increase in dry weight. The basic process of growth is the production of new protoplasm, which is clearly evident in the regions of active cell division.

The next stage in growth is increase in plant size, which is the result of absorption of water and the consequent stretching of the tissues, a process which in the strict sense is not growth at all, since it involves little or no increase in the characteristic material of the plant itself.

The third and the last stage in growth involves the entry of plenty of building materials, chiefly carbohydrates, into the expanded young tissues. This results in an increase in the dry weight but no visible increase in the external size of the plant. Growth is, however, more than just an increasing amount of plant material. Differential growth of plant parts results in a characteristic shape. Each plant species has a distinctive form, developed by its growth patterns.

Differentiation:

Differentiation can be recognized at the cell level, tissue level, organ level, and at the level of the organism. It becomes more obvious at the level of the organ and the organism. For instance, if we consider the flower as an organ of the plant, it bears sepals for photosynthesis and protection of the inner floral parts; beautiful, coloured petals to attract insects for cross-pollination; stamens for producing male gametes; and carpels for bearing the ovules, which after fertilization produce seeds.

Considering an angiosperm as an organism, we observe that it possesses roots for absorption of water and minerals and for fixation in the soil; a trunk and stem branches that bear leaves for photosynthesis, as well as flowers and fruits; and fruits for bearing the seeds, which on germination each form a new plant.

Development:

Development implies a whole sequence of qualitative structural changes that a plant undergoes from the zygote stage to its death. The developmental changes may be gradual or abrupt. Examples of certain abrupt changes are germination, flowering and senescence (ageing leading to death).

Slow developmental changes include formation and maturation of tissues, formation of vegetative and floral buds and the formation of reproductive organs. Unlike growth, development is a qualitative change. It cannot be measured in quantitative terms, and is either described or illustrated with the help of photographs or drawings. Development includes growth (cell division, enlargement and differentiation), morphogenesis, maturation and senescence.

The growth cycle of annual, monocarpic, flowering plants (angiosperms) begins with the fertilized egg, the zygote. The zygote develops into an embryo following cell divisions and differentiation (embryonal stage). The embryo is enclosed within a seed where it undergoes a period of inactivity (dormancy). The resting embryo resumes growth during the germination of seed and develops into a seedling (seedling stage).

The seedling grows into a vegetative plant (vegetative phase). After some period of vegetative growth, the plant undergoes maturation and enters the reproductive phase. It develops flowers and fruits, the latter containing the seeds. Finally senescence sets in (senescence stage) leading to the death of the plant.

In unicellular organisms, growth consists of an increase in the size or volume (enlargement) of the cell. This increase is due to the synthesis of new protoplasm. Growth in unicellular organisms thus consists of single phase or step. Growth leads to maturation (“adults”) or full grown individuals. Cell division in unicellular organisms results in their multiplication or reproduction.

In simple multicellular organisms like Spirogyra, growth involves two phases or steps: cell division and enlargement. Cell division results in an increase in the number of cells in the filamentous alga. The newly formed cells enlarge or increase in size. As a result, the filament of Spirogyra grows. In flowering plants, however, growth involves three phases: cell division, enlargement and differentiation.

Growth Regions in Animals and Plants:

Cell division and differentiation are important aspects of growth and development in both animals and plants. In mammals, the growth is diffuse and it is very difficult to specify the regions where the growth occurs. In animals, the growth of the embryo is completed quite early, although the mature size may be gained at specific periods.

In plants, the growth may be diffuse or localized. Diffuse growth occurs in lower forms of life, e.g., filamentous algae. Here each cell of the multicellular plant body can divide and enlarge. The higher plants, especially the trees, are built up in a modular fashion, i.e., their development is relatively open-ended and their structure never complete.

In such plants, growth continues throughout life, with new organs forming to replace the old ones. Here the growth is localized, i.e., confined to certain specific regions, the growing points. Localized growth occurs due to the activity of a group of cells called the meristems. Depending upon the location of the meristems, the growth may be apical, intercalary or lateral.

Phases of Plant Growth:

As a plant is made up of cells, its growth will be the sum total of the growth of its cells.

The growth of cells involves three main phases:

(1) The phase of cell division (formative phase),

(2) The phase of cell enlargement, and

(3) Cell Differentiation or Cell Maturation.

1. Phase of Cell Division (Formative Phase):

Cell division is the basic event for the growth of multicellular plants. All cells in an organism result from the division of pre-existing cells. The type of cell division that occurs during the growth of an organism is mitosis. It is a quantitative as well as qualitative division that is generally completed in two stages: the division of the nucleus (karyokinesis), followed by the division of the cytoplasm (cytokinesis).

During mitosis, the cell passes through prophase, metaphase, anaphase and telophase, resulting in equal distribution of the genetical material and the cytoplasm in each of the two daughter cells thus formed. Further, the daughter cells are genetically similar to the parent cell. As a result of this process, cells having the same genetic constitution get multiplied.

In higher plants, cell divisions continuously occur in the meristematic regions, such as apical meristem. As a result, an increase in the number of cells takes place in the meristematic region. Some of the daughter cells retain the meristematic activity, while others enter the next phase of growth— the phase of cell enlargement.

2. Phase of Cell Enlargement:

The cell enlargement plays an important role in contributing to the size of the tissue and organs. The enlargement occurs by synthesizing protoplasm, absorbing water (hydration), developing vacuoles and adding new cell wall material to the stretched, thin elastic walls to make them slightly thicker and permanent. Cell enlargement may be linear or in all directions.

3. Phase of Cell Differentiation or Cell Maturation:

During the last phase, the enlarged cells eventually acquire a specific size and form according to their location and role following biochemical, physiological and morphological changes, i.e., the cells undergo specialization or transformation. As a result, various kinds of cells get differentiated. These differentiated cells form different kinds of simple and complex tissues which perform different functions.

Experiment to Study Phases of Growth:

Germinate a few seeds of pea or bean in moist sawdust. Pick out a couple of seedlings with a straight radicle of 2-3 cm length. Wash the seedlings. Blot off the surface water. Mark the radicles from tip to base with 10-15 points at intervals of 2 mm with the help of waterproof ink or India ink. As soon as the ink dries, place the seedlings on moist blotting paper in a petri dish. Allow the seedlings to grow for 1-2 days. Measure the intervals between the marks.

The increase in growth per unit time is termed the growth rate. Thus, the rate of growth can be expressed mathematically. An organism, or a part of an organism, can produce more cells in a variety of ways. The growth rate may show an increase that is arithmetic or geometrical (Figure 2.2).

In arithmetic growth, following mitotic cell division, only one daughter cell continues to divide while the other differentiates and matures. The simplest expression of arithmetic growth is exemplified by a root elongating at a constant rate (Fig. 2.3). On plotting the length of the organ against time, a linear curve is obtained.

Mathematically, it is expressed as:

Lt = L0 + rt

where Lt = length at time t, L0 = length at the beginning (time zero), and r = growth rate (elongation per unit time).
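To make the relation concrete, here is a minimal Python sketch (the numbers are invented for illustration, not taken from the text); plotting length against time for such data gives the linear curve described above:

```python
# A minimal numeric sketch of arithmetic (linear) growth, Lt = L0 + r*t.
# Values below are illustrative assumptions, not data from the text.

def arithmetic_growth(length_initial: float, rate: float, time: float) -> float:
    """Length at time t for an organ elongating at a constant rate."""
    return length_initial + rate * time

# Example: a root 2.0 cm long elongating at 0.5 cm per day, tracked for 4 days.
for day in range(5):
    print(day, arithmetic_growth(2.0, 0.5, day))  # 2.0, 2.5, 3.0, 3.5, 4.0 cm
```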

Let us now see what happens in geometrical growth. In most systems, the initial growth is slow (lag phase), and it increases rapidly thereafter – at an exponential rate. Here, both the progeny cells following mitotic division retain the ability to divide and continue to do so (Fig. 2.4). Geometrical growth can be expressed by “Grand Period of Growth” (Fig. 2.5).

Quantitative comparisons between the growth of living system can also be made in two ways:

(i) Measurement and the comparison of total growth per unit time is called the absolute growth rate,

(ii) The growth of the given system per unit time expressed on a common basis, e.g., per unit initial parameter is called the relative growth rate.

In Figure 2.6 two leaves, A and B, are drawn that are of different sizes but show the same absolute increase in area in the given time, giving leaves A′ and B′. However, one of them shows a much higher relative growth rate. Which one and why?
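The difference between the two measures can be checked numerically. Here is a minimal sketch (the leaf areas are invented for illustration, not taken from Figure 2.6):

```python
# Absolute growth rate: total growth per unit time.
# Relative growth rate: growth per unit time per unit initial size.

def absolute_growth_rate(initial: float, final: float, dt: float) -> float:
    return (final - initial) / dt

def relative_growth_rate(initial: float, final: float, dt: float) -> float:
    return (final - initial) / (dt * initial)

# Two hypothetical leaves, each gaining 5 cm^2 of area in one day:
for name, area0 in (("A", 5.0), ("B", 50.0)):
    area1 = area0 + 5.0
    print(name,
          absolute_growth_rate(area0, area1, 1.0),   # both: 5.0 cm^2/day
          relative_growth_rate(area0, area1, 1.0))   # A: 1.0/day, B: 0.1/day
```

On these assumed numbers, both leaves show the same absolute growth rate, but the smaller leaf shows the much higher relative growth rate, since the same gain is spread over a smaller initial area.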

The Grand Period of Growth:

The vegetative growth of most plants in general shows three phases, starting slowly, becoming gradually faster and finally slowing again. These three phases, which are together known as “grand period of growth”, cover the whole of the vegetative history of an annual plant. In a perennial plant such a grand period of growth is repeated annually with periods of dormancy between the repetitions.

In order to explain the grand period of growth, a graph may be drawn between the duration of growth and increase in the dry weight of the plant. It is graphically represented by a ‘S’-shaped curve (a sigmoid curve) (Fig. 2.5). These variations in growth occur due to several external and internal factors.

The sigmoid curve shows the following three distinct phases (a numerical sketch follows the list):

(1) The lag phase or initial phase:

It represents initial stages of growth. The rate of growth is naturally slow during this phase.

(2) Log phase or exponential phase:

It is the period of maximum and rapid growth. Physiological activities of cells are at their maximum.

Here, both the progeny cells following mitotic cell division retain the ability to divide and continue to do so. However, with limited nutrient supply, the growth slows down, leading to a stationary phase.

The exponential growth can be expressed as:

W1 = W0 e^(rt)

where

W1 = final size (weight, height, number, etc.)

W0 = initial size at the beginning of the period

r = growth rate

t = time of growth

e = base of natural logarithms

Here, r is the relative growth rate and is also a measure of the ability of the plant to produce new plant material, referred to as the efficiency index. Hence, the final size W1 depends on the initial size W0.
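A minimal numeric sketch of this relation follows (values invented for illustration); note that r can be recovered from two measurements as r = ln(W1/W0)/t:

```python
import math

# Exponential (geometrical) growth: W1 = W0 * e^(r*t).

def exponential_growth(w0: float, r: float, t: float) -> float:
    """Final size after time t, growing at relative growth rate r."""
    return w0 * math.exp(r * t)

def efficiency_index(w0: float, w1: float, t: float) -> float:
    """Recover r (the efficiency index) from initial and final sizes."""
    return math.log(w1 / w0) / t

w0 = 1.0                                  # initial dry weight, say in grams
w1 = exponential_growth(w0, r=0.2, t=10)  # about 7.39 g after 10 time units
print(w1, efficiency_index(w0, w1, 10))   # prints ~7.389 and recovers r = 0.2
```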

(3) Adult phase or stationary phase:

This phase is characterized by a decreasing growth rate. The plant reaches maturity, the physiological activity of its cells slows down, and the plant begins to senesce.
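As a rough numerical illustration, such an S-shaped curve is often idealized by a logistic function; the following minimal Python sketch (an assumption for illustration, with invented parameter values) reproduces the lag, log, and stationary phases:

```python
import math

# Logistic growth: W(t) = K / (1 + ((K - W0) / W0) * e^(-r*t)),
# where K is the final (maximum) size approached in the stationary phase.

def logistic_growth(w0: float, k: float, r: float, t: float) -> float:
    return k / (1 + ((k - w0) / w0) * math.exp(-r * t))

# Lag phase (slow), log phase (rapid), stationary phase (levelling off near K):
for t in range(0, 21, 4):
    print(t, round(logistic_growth(1.0, 100.0, 0.5, t), 1))
# Output: 0 1.0, 4 6.9, 8 35.5, 12 80.3, 16 96.8, 20 99.6
```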

Factors Affecting Plant Growth:

(I) External Factors:

Regardless of the habitat in which a plant is growing, it is continuously subjected to the variability of a complex set of environmental factors. Environmental factors play an important role in the growth and development of any plant. Important among these environmental factors are temperature, light, oxygen, water and nutrients.

Temperature is one of the most important environmental factors that affect the growth of any plant. However, the minimum, optimum and maximum limits of temperature for growth vary from species to species. For instance, the winter cereals make some growth at temperatures of 34° to 40°F, whereas in that temperature range pumpkins and melons do not grow at all.

As the temperature increases above the minimum, growth is accelerated until a certain optimum temperature is attained, above which it becomes slower and is ultimately completely retarded. The optimum temperature varies greatly with the species of plant; it also varies with the age of the plant. The optimum temperatures for the growth of tropical plants are higher than those for temperate ones.

Arctic and alpine species may grow at the freezing point or even at a temperature slightly below the freezing point. Their optimum temperature is usually no higher than 10°C. The optimum temperature for most of the tropical species varies from 30° to 35°C, and for temperate species it usually varies from 25° to 30°C.

The effect of the duration for which a plant is exposed to a particular temperature also varies with the species. For instance, a plant may make considerable growth if exposed to a temperature of 86°F for a short duration, whereas the same temperature has deleterious effects on growth if maintained for a longer duration.

Soil temperature also greatly influences the growth of roots and shoots. Under natural conditions, temperature is a cyclic environmental factor. Normally the temperatures of day and night vary greatly, and with only a few exceptions plants grow better when night temperatures are lower than day temperatures. Sometimes the term thermo-periodicity is used to designate the effects of an alternation of temperature between day and night upon the growth and other reactions of plants.

Light is another important factor that variously affects the growth and development of all plants. Light intensity, quality and duration affect growth in several ways. Light greatly influences several important physiological processes such as chlorophyll synthesis, stomatal movements, photosynthesis, formation of anthocyanin, temperature of aerial organs, absorption of minerals, permeability, rate of transpiration, streaming of protoplasm, etc.

(i) Intensity of light:

The intensity of light greatly influences plant growth. Variations in the intensity of sunlight are almost invariably associated with changes in the quality of light, and under natural conditions variations in light intensity have more significant effects upon the growth pattern of plants than changes in the quality of light. Most crops and ornamental plants, for instance wheat, corn, peas and tobacco, make vigorous and stocky growth and flower profusely in full sun. Such plants are called "sun plants".

When grown with intermediate light intensities, sun plants become taller and have larger, thinner leaves, but fewer flowers. They make very poor growth in low light intensity. Shirley (1929, 1935), however, observed in a number of plant species that the absolute weight, percentage of dry matter in the tops, thickness and rigidity of the stem, and leaf thickness all increase with increase in light intensity up to full sunlight, provided no other factor is limiting. Low light intensity results in poor flower development and consequently very poor fruit setting.

(ii) Quality of light:

Different wavelengths of sunlight have significantly different effects upon the growth of plants. Most of the experiments conducted in this direction indicate that overall development of a plant and increase in its dry weight take place most effectively in the full spectrum of visible light. Plants grown in blue and violet light tend to be dwarf; those grown in red light are tall and spindly. The ultraviolet and infrared radiations of sunlight do not promote growth.

Overall growth of a plant in green light is much less than in either the blue-violet or the orange-red portion of the spectrum. This effect of green light is partly due to the lower efficiency of photosynthesis in green light. Different wavelengths of sunlight do not have uniform effects on different organs of a plant. For instance, orange-red light generally results in poor development of stems and hypocotyls.

The greatest elongation of stems and hypocotyls in most plants takes place in the blue-violet portion of the spectrum, less in the green, still less in the orange-red, and least in the complete spectrum of visible light. On the other hand, maximum expansion of leaf blades occurs in the full spectrum of visible light and least in the green.

(iii) Duration of light:

Duration, intensity and quality of light have a marked influence on the rate of photosynthesis and hence the rate of growth. During winter, when the days are short, plants grow slowly; as the days get longer toward spring, growth is accelerated.

Duration of light not only affects photosynthesis but also greatly influences dormancy and flowering in plants. The short days of autumn bring about retardation of growth in many plants, a phenomenon not related to photosynthesis. A number of trees respond to the short days of autumn by ceasing to grow and becoming dormant.

The length of day has a marked influence on flowering. Plants, according to their requirement of light for flowering, are classified as long-day plants, short-day plants and day-neutral plants. The long-day plants in general flower when the days are longer than 13 or 14 hours (depending upon the species), while the short-day plants produce flowers when the days are shorter than 13 or 14 hours. Flowering in the day-neutral plants is not affected by the length of the day; they can flower under both short-day and long-day conditions.

With the exception of only those plants which are native to marshy and boggy terrains, the growth of all terrestrial plants is greatly retarded in poorly aerated soils. Usually the shoots of plants receive an ample supply of oxygen, but the roots may or may not get sufficient oxygen to grow and function normally. Plants in flooded fields or in waterlogged pots do not thrive due to marked deficiencies in soil aeration. The retarded growth of plants in poorly aerated soils is chiefly due to reduced absorption of minerals and water.

Water is one of the most essential requirements for the growth of a plant. With an inadequate water supply, growth is poor and yields are low. Plants grow well when ample but not excessive moisture is available. For most plants, a soil-water content from field capacity to just above the wilting percentage is most favorable for good growth.

With a decrease in the soil-water content, marked effects on growth do not appear until the permanent wilting percentage is reached. At the permanent wilting percentage all growth ceases. If the soil is continuously above field capacity, as it may be in poorly drained fields, plants grow slowly because roots are deprived of oxygen.

Plants vary in their response to moisture deficiency. For instance, radishes, spinach and peppers wilt and cease to grow when soil-water percentage is low. Cucurbits and tomatoes in the field stop growing and their lower leaves respond by changing from a light green colour to a darker green or bluish colour. The leaves of corn and many grasses curl when the water supply is inadequate.

Deficient soil-water supply may affect the growth of a plant more at certain stages in its development than others. Vegetative growth in many plants is checked but the development of reproductive organs is not affected under deficient soil-water supply.

The quantity and nature of soil nutrients have a marked influence on the growth and development of plants. For luxuriant growth of any crop, the field should be adequately rich in nutrients (both micro- and macronutrients). Furthermore, these mineral nutrients do not affect growth as such, but only when present in the form of ions or as constituents of molecules.

(II) Internal Factors:

(1) Growth Regulators:

Several classes of growth regulators are known. While some growth regulators are growth promoting (e.g., auxins, gibberellins, cytokinins, florigen etc.), others are growth inhibitors (e.g., abscisic acid, ethylene, chlorocholine chloride). Many of them are synthesized by the plants, while a few are synthetic.

(2) Carbohydrate/Nitrogen Ratio:

The ratio of carbohydrates to nitrogenous compounds governs the pattern of growth. The presence of more carbohydrates compared to nitrogenous compounds favours good vegetative growth, flowering and fruiting. On the contrary, the presence of more nitrogenous compounds compared to carbohydrates results in poor vegetative growth, flowering and fruiting.

(3) Genotype and Genetic Factor:

All metabolic activities, growth and development are under the control of the genetic complement (genotype) of the cell, nuclear as well as extra-nuclear. Expression of appropriate genes in an appropriate sequence is controlled both by genes and by the environment. The genes, located in chromosomes, are transcribed into mRNA, which is translated into structural and enzymatic proteins.



Extant species

Common name | Scientific name | Distribution
Bactrian camel | Camelus bactrianus (domesticated) | Central Asia, including the historical region of Bactria
Dromedary / Arabian camel | Camelus dromedarius (domesticated) | The Middle East, Sahara Desert and Afghanistan; introduced to Australia
Wild Bactrian camel | Camelus ferus | Remote areas of northwest China and Mongolia

The average life expectancy of a camel is 40 to 50 years. [12] A full-grown adult dromedary camel stands 1.85 m (6 ft 1 in) at the shoulder and 2.15 m (7 ft 1 in) at the hump. [13] Bactrian camels can be a foot taller. Camels can run at up to 65 km/h (40 mph) in short bursts and sustain speeds of up to 40 km/h (25 mph). [14] Bactrian camels weigh 300 to 1,000 kg (660 to 2,200 lb) and dromedaries 300 to 600 kg (660 to 1,320 lb). The widening toes on a camel's hoof provide supplemental grip for varying soil sediments. [15]

The male dromedary camel has an organ called a dulla in its throat, a large, inflatable sac he extrudes from his mouth when in rut to assert dominance and attract females. It resembles a long, swollen, pink tongue hanging out of the side of its mouth. [16] Camels mate by having both male and female sitting on the ground, with the male mounting from behind. [17] The male usually ejaculates three or four times within a single mating session. [18] Camelids are the only ungulates to mate in a sitting position. [19]

Ecological and behavioral adaptations

Camels do not directly store water in their humps; the humps are reservoirs of fatty tissue. When this tissue is metabolized, it yields more than one gram of water for every gram of fat processed. This fat metabolization, while releasing energy, causes water to evaporate from the lungs during respiration (as oxygen is required for the metabolic process): overall, there is a net decrease in water. [20] [21]

Camels have a series of physiological adaptations that allow them to withstand long periods of time without any external source of water. [23] The dromedary camel can drink as seldom as once every 10 days even under very hot conditions, and can lose up to 30% of its body mass due to dehydration. [24] Unlike other mammals, camels' red blood cells are oval rather than circular in shape. This facilitates the flow of red blood cells during dehydration [25] and makes them better at withstanding high osmotic variation without rupturing when drinking large amounts of water: a 600 kg (1,300 lb) camel can drink 200 L (53 US gal) of water in three minutes. [26] [27]

Camels are able to withstand changes in body temperature and water consumption that would kill most other mammals. Their body temperature is 34 °C (93 °F) at dawn and steadily increases to 40 °C (104 °F) by sunset, before they cool off at night again. [23] Compared with other livestock, camels lose only 1.3 liters of fluid per day, while other livestock lose 20 to 40 liters per day (Breulmann et al., 2007). [28] Maintaining the brain temperature within certain limits is critical for animals; to assist this, camels have a rete mirabile, a complex of arteries and veins lying very close to each other which utilizes countercurrent blood flow to cool blood flowing to the brain. [29] Camels rarely sweat, even when ambient temperatures reach 49 °C (120 °F). [30] Any sweat that does occur evaporates at the skin level rather than at the surface of their coat; the heat of vaporization therefore comes from body heat rather than ambient heat. Camels can withstand losing 25% of their body weight to sweating, whereas most other mammals can withstand only about 12–14% dehydration before cardiac failure results from circulatory disturbance. [27]

When the camel exhales, water vapor becomes trapped in their nostrils and is reabsorbed into the body as a means to conserve water. [31] Camels eating green herbage can ingest sufficient moisture in milder conditions to maintain their bodies' hydrated state without the need for drinking. [32]

The camel's thick coat insulates it from the intense heat radiated from desert sand; a shorn camel must sweat 50% more to avoid overheating. [33] During the summer the coat becomes lighter in color, reflecting light as well as helping avoid sunburn. [27] The camel's long legs help by keeping its body farther from the ground, which can heat up to 70 °C (158 °F). [34] [35] Dromedaries have a pad of thick tissue over the sternum called the pedestal. When the animal lies down in a sternal recumbent position, the pedestal raises the body from the hot surface and allows cooling air to pass under the body. [29]

Camels' mouths have a thick leathery lining, allowing them to chew thorny desert plants. Long eyelashes and ear hairs, together with nostrils that can close, form a barrier against sand. If sand gets lodged in their eyes, they can dislodge it using their transparent third eyelid. The camels' gait and widened feet help them move without sinking into the sand. [34] [36] [37]

The kidneys and intestines of a camel are very efficient at reabsorbing water. Camels' kidneys have a 1:4 cortex-to-medulla ratio; [38] thus, the medullary part of a camel's kidney occupies twice as much area as a cow's kidney. In addition, the renal corpuscles have a smaller diameter, which reduces the surface area for filtration. These two major anatomical characteristics enable camels to conserve water and limit the volume of urine in extreme desert conditions. [39] Camel urine comes out as a thick syrup, and camel faeces are so dry that they do not require drying when the Bedouins use them to fuel fires. [40] [41] [42] [43]

The camel immune system differs from those of other mammals. Normally, the Y-shaped antibody molecules consist of two heavy (or long) chains along the length of the Y, and two light (or short) chains at each tip of the Y. Camels, in addition to these, also have antibodies made of only two heavy chains, a trait that makes them smaller and more durable. These "heavy-chain-only" antibodies, discovered in 1993, are thought to have developed 50 million years ago, after camelids split from ruminants and pigs. [44] Camels suffer from surra, caused by Trypanosoma evansi, wherever camels are domesticated in the world, [45] and as a result camels have evolved trypanolytic antibodies, as have many mammals. In the future, nanobody/single-domain antibody therapy may surpass natural camel antibodies by reaching locations currently unreachable due to natural antibodies' larger size. Such therapies may also be suitable for other mammals. [46]

Genetics

The karyotypes of different camelid species have been studied earlier by many groups, [47] [48] [49] [50] [51] [52] but no agreement on chromosome nomenclature of camelids has been reached. A 2007 study flow sorted camel chromosomes, building on the fact that camels have 37 pairs of chromosomes (2n=74), and found that the karyotype consisted of one metacentric, three submetacentric, and 32 acrocentric autosomes. The Y is a small metacentric chromosome, while the X is a large metacentric chromosome. [53]

The hybrid camel, a hybrid between Bactrian and dromedary camels, has one hump, though it has an indentation 4–12 cm (1.6–4.7 in) deep that divides the front from the back. The hybrid is 2.15 m (7 ft 1 in) at the shoulder and 2.32 m (7 ft 7 in) tall at the hump. It weighs an average of 650 kg (1,430 lb) and can carry around 400 to 450 kg (880 to 990 lb), which is more than either the dromedary or Bactrian can. [54]

According to molecular data, the wild Bactrian camel (C. ferus) separated from the domestic Bactrian camel (C. bactrianus) about 1 million years ago. [55] [56] New World and Old World camelids diverged about 11 million years ago. [57] In spite of this, these species can hybridize and produce viable offspring. [58] The cama is a camel-llama hybrid bred by scientists to see how closely related the parent species are. [59] Scientists collected semen from a camel via an artificial vagina and inseminated a llama after stimulating ovulation with gonadotrophin injections. [60] The cama is halfway in size between a camel and a llama and lacks a hump. It has ears intermediate between those of camels and llamas, longer legs than the llama, and partially cloven hooves. [61] [62] Like the mule, camas are sterile, despite both parents having the same number of chromosomes. [60]

Evolution

The earliest known camel, called Protylopus, lived in North America 40 to 50 million years ago (during the Eocene). [18] It was about the size of a rabbit and lived in the open woodlands of what is now South Dakota. [63] [64] By 35 million years ago, the Poebrotherium was the size of a goat and had many more traits similar to camels and llamas. [65] [66] The hoofed Stenomylus, which walked on the tips of its toes, also existed around this time, and the long-necked Aepycamelus evolved in the Miocene. [67]

The ancestor of modern camels, Paracamelus, migrated into Eurasia from North America via Beringia during the late Miocene, between 7.5 and 6.5 million years ago. [68] [69] [70] Around 3–5 million years ago, the North American Camelidae spread to South America as part of the Great American Interchange via the newly formed Isthmus of Panama, where they gave rise to guanacos and related animals, and to Asia via the Bering land bridge. [18] [63] [64] Paracamelus continued to exist in the Canadian high Arctic into the Pleistocene, around 1 million years ago. [71] [72] This creature is estimated to have stood around nine feet (2.7 metres) tall. [73] The Bactrian camel diverged from the dromedary about 1 million years ago, according to the fossil record. [74]

The last camel native to North America was Camelops hesternus, which vanished along with horses, short-faced bears, mammoths and mastodons, ground sloths, sabertooth cats, and many other megafauna, coinciding with the migration of humans from Asia. [75] [76]

Like horses, camels originated in North America and eventually spread across Beringia to Asia. They survived in the Old World, and eventually humans domesticated them and spread them globally. Along with many other megafauna in North America, the original wild camels were wiped out during the spread of the first indigenous peoples of the Americas from Asia into North America, 10,000 to 12,000 years ago, although fossils have never been associated with definitive evidence of hunting. [75] [76]

Most camels surviving today are domesticated. [43] [77] Although feral populations exist in Australia, India and Kazakhstan, wild camels survive only in the wild Bactrian camel population of the Gobi Desert. [12]

History

When humans first domesticated camels is disputed. Dromedaries may have first been domesticated in southern Arabia around 3000 BCE or as late as 1000 BCE, and Bactrian camels in central Asia around 2500 BCE, [18] [78] [79] [80] [81] as at Shahr-e Sukhteh (also known as the Burnt City), Iran. [82]

Martin Heide's 2010 work on the domestication of the camel tentatively concludes that humans had domesticated the Bactrian camel by at least the middle of the third millennium somewhere east of the Zagros Mountains, with the practice then moving into Mesopotamia. Heide suggests that mentions of camels "in the patriarchal narratives may refer, at least in some places, to the Bactrian camel", while noting that the camel is not mentioned in relationship to Canaan. [83]

Recent excavations in the Timna Valley by Lidar Sapir-Hen and Erez Ben-Yosef discovered what may be the earliest domestic camel bones yet found in Israel or even outside the Arabian Peninsula, dating to around 930 BC. This garnered considerable media coverage, as it is strong evidence that the stories of Abraham, Jacob, Esau, and Joseph were written after this time. [84] [85]

The existence of camels in Mesopotamia—but not in the eastern Mediterranean lands—is not a new idea. The historian Richard Bulliet did not think that the occasional mention of camels in the Bible meant that the domestic camels were common in the Holy Land at that time. [86] The archaeologist William F. Albright, writing even earlier, saw camels in the Bible as an anachronism. [87]

The official report by Sapir-Hen and Ben-Yosef notes:

The introduction of the dromedary camel (Camelus dromedarius) as a pack animal to the southern Levant ... substantially facilitated trade across the vast deserts of Arabia, promoting both economic and social change (e.g., Kohler 1984; Borowski 1998: 112–116; Jasmin 2005). This ... has generated extensive discussion regarding the date of the earliest domestic camel in the southern Levant (and beyond) (e.g., Albright 1949: 207; Epstein 1971: 558–584; Bulliet 1975; Zarins 1989; Köhler-Rollefson 1993; Uerpmann and Uerpmann 2002; Jasmin 2005, 2006; Heide 2010; Rosen and Saidel 2010; Grigson 2012). Most scholars today agree that the dromedary was exploited as a pack animal sometime in the early Iron Age (not before the 12th century [BC])

Current data from copper smelting sites of the Aravah Valley enable us to pinpoint the introduction of domestic camels to the southern Levant more precisely based on stratigraphic contexts associated with an extensive suite of radiocarbon dates. The data indicate that this event occurred not earlier than the last third of the 10th century [BC] and most probably during this time. The coincidence of this event with a major reorganization of the copper industry of the region—attributed to the results of the campaign of Pharaoh Shoshenq I—raises the possibility that the two were connected, and that camels were introduced as part of the efforts to improve efficiency by facilitating trade. [85]

A camel serving as a draft animal in Pakistan (2009)

A camel in a ceremonial procession, its rider playing kettledrums, Mughal Empire (c. 1840)

Petroglyph of a camel, Negev, southern Israel (prior to c. 5300 BC)

Joseph Sells Grain by Bartholomeus Breenbergh (1655), showing camel with rider at left

Textiles

Desert tribes and Mongolian nomads use camel hair for tents, yurts, clothing, bedding and accessories. Camels have outer guard hairs and soft inner down, and the fibers are sorted by color and age of the animal. The guard hairs can be felted for use as waterproof coats for the herdsmen, while the softer hair is used for premium goods. [88] The fiber can be spun for use in weaving or made into yarns for hand knitting or crochet. Pure camel hair is recorded as being used for western garments from the 17th century onwards, and from the 19th century a mixture of wool and camel hair was used. [89]

Military uses

By at least 1200 BC the first camel saddles had appeared, and Bactrian camels could be ridden. The first saddle was positioned to the back of the camel, and control of the Bactrian camel was exercised by means of a stick. However, between 500 and 100 BC, Bactrian camels came into military use. New saddles, which were inflexible and bent, were put over the humps and divided the rider's weight over the animal. In the seventh century BC the military Arabian saddle evolved, which again improved the saddle design slightly. [90] [91]

Military forces have used camel cavalries in wars throughout Africa, the Middle East, and into the modern-day Border Security Force (BSF) of India (though as of July 2012, the BSF planned the replacement of camels with ATVs). The first documented use of camel cavalries occurred in the Battle of Qarqar in 853 BC. [92] [93] [94] Armies have also used camels as freight animals instead of horses and mules. [95] [96]

The East Roman Empire used auxiliary forces known as dromedarii, whom the Romans recruited in desert provinces. [97] [98] The camels were used mostly in combat because of their ability to scare off horses at close range (horses are afraid of the camels' scent), [19] a quality famously employed by the Achaemenid Persians when fighting Lydia in the Battle of Thymbra (547 BC). [54] [99] [100]

19th and 20th centuries

The United States Army established the U.S. Camel Corps, stationed in California, in the mid-19th century. [19] One may still see stables at the Benicia Arsenal in Benicia, California, where they nowadays serve as the Benicia Historical Museum. [101] Though the experimental use of camels was seen as a success (John B. Floyd, Secretary of War in 1858, recommended that funds be allocated towards obtaining a thousand more camels), the outbreak of the American Civil War in 1861 saw the end of the Camel Corps: Texas became part of the Confederacy, and most of the camels were left to wander away into the desert. [96]

France created a méhariste camel corps in 1912 as part of the Armée d'Afrique in the Sahara [102] in order to exercise greater control over the camel-riding Tuareg and Arab insurgents, as previous efforts to defeat them on foot had failed. [103] The Free French Camel Corps fought during World War II, and camel-mounted units remained in service until the end of French rule over Algeria in 1962. [104]

In 1916, the British created the Imperial Camel Corps. It was originally used to fight the Senussi, but was later used in the Sinai and Palestine Campaign in World War I. The Imperial Camel Corps comprised infantrymen mounted on camels for movement across desert, though they dismounted at battle sites and fought on foot. After July 1918, the Corps began to become run down, receiving no new reinforcements, and was formally disbanded in 1919. [105]

In World War I, the British Army also created the Egyptian Camel Transport Corps, which consisted of a group of Egyptian camel drivers and their camels. The Corps supported British war operations in Sinai, Palestine, and Syria by transporting supplies to the troops. [106] [107] [108]

The Somaliland Camel Corps was created by colonial authorities in British Somaliland in 1912; it was disbanded in 1944. [109]

Bactrian camels were used by Romanian forces during World War II in the Caucasus region. [110] In the same period the Soviet units operating around Astrakhan in 1942 adopted local camels as draft animals due to a shortage of trucks and horses, and kept them even after moving out of the area. Despite severe losses, some of these camels came as far west as Berlin itself. [111]

The Bikaner Camel Corps of British India fought alongside the British Indian Army in World Wars I and II. [112]

The Tropas Nómadas (Nomad Troops) were an auxiliary regiment of Sahrawi tribesmen serving in the colonial army in Spanish Sahara (today Western Sahara). Operational from the 1930s until the end of the Spanish presence in the territory in 1975, the Tropas Nómadas were equipped with small arms and led by Spanish officers. The unit guarded outposts and sometimes conducted patrols on camelback. [113] [114]

Food uses

Dairy

Camel milk is a staple food of desert nomad tribes and is sometimes considered a meal in itself; a nomad can live on camel milk alone for almost a month. [19] [40] [115] [116]

Camel milk can readily be made into yogurt, but can only be made into butter if it is soured first, churned, and a clarifying agent then added. [19] Until recently, camel milk could not be made into camel cheese because rennet was unable to coagulate the milk proteins to allow the collection of curds. [117] To develop less wasteful uses of the milk, the FAO commissioned Professor J.P. Ramet of the École Nationale Supérieure d'Agronomie et des Industries Alimentaires, who was able to produce curdling by the addition of calcium phosphate and vegetable rennet in the 1990s. [118] The cheese produced from this process has low levels of cholesterol and is easy to digest, even for the lactose intolerant. [119] [120]

Camel milk can also be made into ice cream. [121] [122]

Meat

Camels provide food in the form of meat and milk. [123] Approximately 3.3 million camels and camelids are slaughtered each year for meat worldwide. [124] A camel carcass can provide a substantial amount of meat. The male dromedary carcass can weigh 300–400 kg (661–882 lb), while the carcass of a male Bactrian can weigh up to 650 kg (1,433 lb). The carcass of a female dromedary weighs less than the male's, ranging between 250 and 350 kg (550 and 770 lb). [18] The brisket, ribs and loin are among the preferred parts, and the hump is considered a delicacy. [125] The hump contains "white and sickly fat", which can be used to make the khli (preserved meat) of mutton, beef, or camel. [126]

Camel milk and meat are rich in protein, vitamins, glycogen, and other nutrients, making them essential in the diet of many people. From chemical composition to meat quality, the dromedary camel is the preferred breed for meat production. It does well even in arid areas due to its unusual physiological behaviors and characteristics, which include tolerance to extreme temperatures, radiation from the sun, water paucity, rugged landscape and low vegetation. [127]

Camel meat is reported to taste like coarse beef, but older camels can prove to be very tough, [13] [18] although camel meat becomes more tender the longer it is cooked. [128] The Abu Dhabi Officers' Club serves a camel burger mixed with beef or lamb fat in order to improve the texture and taste. [129] In Karachi, Pakistan, some restaurants prepare nihari from camel meat. [130] Specialist camel butchers provide expert cuts, with the hump considered the most popular. [131]

Camel meat has been eaten for centuries. It has been recorded by ancient Greek writers as an available dish at banquets in ancient Persia, usually roasted whole. [132] The Roman emperor Heliogabalus enjoyed camel's heel. [40] Camel meat is mainly eaten in certain regions, including Eritrea, Somalia, Djibouti, Saudi Arabia, Egypt, Syria, Libya, Sudan, Ethiopia, Kazakhstan, and other arid regions where alternative forms of protein may be limited or where camel meat has had a long cultural history. [18] [40] [125] Camel blood is also consumable, as is the case among pastoralists in northern Kenya, where camel blood is drunk with milk and acts as a key source of iron, vitamin D, salts and minerals. [18] [125] [133]

A 2005 report issued jointly by the Saudi Ministry of Health and the United States Centers for Disease Control and Prevention details four cases of human bubonic plague resulting from the ingestion of raw camel liver. [134]

Australia

Camel meat is also occasionally found in Australian cuisine: for example, a camel lasagna is available in Alice Springs. [132] [133] Australia has exported camel meat, primarily to the Middle East but also to Europe and the US, for many years. [135] The meat is very popular among North African Australians, such as Somalis, and other Australians have also been buying it. The feral nature of the animals means they produce a different type of meat to farmed camels in other parts of the world, [136] and it is sought after because it is disease-free, and a unique genetic group. Demand is outstripping supply, and governments are being urged not to cull the camels, but redirect the cost of the cull into developing the market. Australia has seven camel dairies, which produce milk, cheese and skincare products in addition to meat. [137]

Religion

Islam

Camel meat is halal (Arabic: حلال ‎, 'allowed') for Muslims. However, according to some Islamic schools of thought, a state of impurity is brought on by the consumption of it. Consequently, these schools hold that Muslims must perform wudhu (ablution) before the next time they pray after eating camel meat. [138] Also, some Islamic schools of thought consider it haram (Arabic: حرام ‎, 'forbidden') for a Muslim to perform Salat in places where camels lie, as it is said to be a dwelling place of the Shaytan (Arabic: شيطان ‎, 'Devil'). [138] According to Abu Yusuf, the urine of camel may be used for medical treatment if necessary, but according to Abū Ḥanīfah, the drinking of camel urine is discouraged. [139]

The Islamic texts contain several stories featuring camels. In the story of the people of Thamud, the Prophet Salih miraculously brings forth a naqat (Arabic: ناقة ‎, 'she-camel') out of a rock. After the Prophet Muhammad migrated from Mecca to Medina, he allowed his she-camel to roam there; the location where the camel stopped to rest determined where he would build his house in Medina. [140]

Judaism

According to Jewish tradition, camel meat and milk are not kosher. [141] Camels possess only one of the two kosher criteria: although they chew their cud, they do not possess cloven hooves: "But these you shall not eat among those that bring up the cud and those that have a cloven hoof: the camel, because it brings up its cud, but does not have a [completely] cloven hoof; it is unclean for you." [142]

Depictions in culture

Shadda (cover, detail), Karabagh region, southwest Caucasus, early 19th century

Vessel in the form of a recumbent camel with jugs, 250 BC – 224 AD, Brooklyn Museum

Maru Ragini (Dhola and Maru Riding on a Camel), c. 1750, Brooklyn Museum

The Magi Journeying (Les rois mages en voyage)—James Tissot, c. 1886, Brooklyn Museum

There are around 14 million camels alive as of 2010, with 90% being dromedaries. [143] Dromedaries alive today are domesticated animals (mostly living in the Horn of Africa, the Sahel, Maghreb, Middle East and South Asia). The Horn region alone has the largest concentration of camels in the world, [22] where the dromedaries constitute an important part of local nomadic life. They provide nomadic people in Somalia [18] and Ethiopia with milk, food, and transportation. [116] [144] [145] [146]

Around 700,000 dromedary camels are now feral in Australia, descended from those introduced as a method of transport in the 19th and early 20th centuries. [133] [143] [147] This population is growing about 8% per year. [148] Representatives of the Australian government have culled more than 100,000 of the animals in part because the camels use too much of the limited resources needed by sheep farmers. [149]

A small population of introduced camels, dromedaries and Bactrians, wandered through the southwestern United States after having been imported in the 19th century as part of the U.S. Camel Corps experiment. When the project ended, they were used as draft animals in mines and escaped or were released. Twenty-five U.S. camels were bought and exported to Canada during the Cariboo Gold Rush. [96]

The Bactrian camel is, as of 2010, reduced to an estimated 1.4 million animals, most of which are domesticated. [43] [143] [150] The wild Bactrian camel is a separate species and is the only truly wild (as opposed to feral) camel in the world. The wild camels are critically endangered and number approximately 1400, inhabiting the Gobi and Taklamakan Deserts in China and Mongolia. [12] [151]


Xylem

The xylem is responsible for keeping a plant hydrated. Xylem sap travels upwards and has to overcome serious gravitational forces to deliver water to a plant’s upper extremities, especially in tall trees.

Two different types of cells are known to form the xylem in different plant groups: tracheids and vessel elements. Tracheids are found in most gymnosperms, ferns, and lycophytes whereas vessel elements form the xylem of almost all angiosperms.

Xylem cells are dead, elongated and hollow. They have secondary cell walls and ‘pits’ (areas where the secondary cell wall is missing).

Tracheids

Tracheids are long thin cells that are connected together by tapered ends. The tapered ends run alongside each other and have pits that allow for water to travel from cell to cell.

Their secondary cell walls contain lignin – the compound that creates wood. The lignin in tracheids adds structural support to the xylem and the whole plant.

Vessel elements

Vessel elements are shorter and wider than tracheids and are connected together end-on-end. The ends of the cells contain what are known as ‘perforation plates’. The perforation plates have a number of holes in their cell walls which allows for water to travel freely between cells.


Drivers of Liver Fibrosis

Genetic Disorders

Several genetic diseases predispose the liver to fibrosis (Scorza et al., 2014). In all of these diseases, fibrosis begins with tissue injury occurring as a consequence of the genetic defect, followed by a fibrogenic wound healing response, as discussed above. Genetic causes of liver fibrosis have come to light due to advancements in molecular genetic and imaging techniques. Several genetic polymorphisms, summarized in Table 1, have been implicated in the occurrence of liver fibrosis, leading to cirrhosis (Pinzani and Vizzutti, 2005). Most of these mutations affect many different cell types but predispose the individual to liver fibrosis and, in some cases, liver cirrhosis (Scorza et al., 2014). Many of the genes listed in Table 1, such as ABCB4, ALDOB, GBE1, FAH, ASL, SLC25A13, and SERPINA1, are highly expressed in the liver; therefore, when these genes are mutated, the liver is the organ most affected. Most genetic disorders that lead to cirrhosis manifest in childhood and are a leading cause of pediatric liver cirrhosis, apart from childhood obesity (Pinto et al., 2015). In addition to the genetic mutations predisposing to hepatic fibrosis that appear in childhood, mutations of the PNPLA3 gene have been described as a major predisposing factor in non-alcoholic fatty liver disease (NAFLD) (Anstee et al., 2020). PNPLA3 encodes Patatin-like phospholipase domain-containing protein 3, or adiponutrin, and is abundantly expressed in hepatocytes and adipocytes as well as hepatic stellate cells (HSCs) (Dong, 2019). The PNPLA3 I148M variant has been shown to have a positive association with hepatic fat content (steatosis), NAFLD, non-alcoholic steatohepatitis (NASH), as well as hepatocellular carcinoma (Dong, 2019). The global prevalence of NAFLD is about 25%, and in obese individuals or in the presence of type 2 diabetes mellitus it increases to about 60% (Younossi et al., 2016). The PNPLA3 gene is therefore a strong predisposing genetic factor for hepatic fibrosis. Although the PNPLA3 protein has been shown to have triacylglycerol lipase and acylglycerol transacylase enzymatic activities, its exact role in hepatocytes has been controversial (Jenkins et al., 2004; Dong, 2019). Other studies have demonstrated a retinyl esterase activity for PNPLA3 (Pirazzi et al., 2014). HSCs are reservoirs for retinoic acid, which activates retinoic acid receptor (RAR)-mediated transcription, which keeps fibrogenesis under control (Hellemans et al., 1999; Hellemans et al., 2004; Wang et al., 2002). Mutations in the PNPLA3 gene that alter its retinyl esterase activity therefore decrease the level of retinoic acid in HSCs and thus reduce the RAR-mediated control of fibrogenesis in HSCs (Bruschi et al., 2017). However, it is now recognized that PNPLA3 has pleiotropic roles in the hepatocyte that are still under investigation, such as in hepatocyte lipid droplet homeostasis and the regulation of HSC quiescence and proliferation (Dong, 2019). As several roles for PNPLA3 are suggested, PNPLA3 might be a good therapeutic target to control NAFLD-related fibrosis and disease progression.

TABLE 1. Genetic causes predisposing the liver to fibrosis.

Alcohol

Excessive and continued alcohol intake over long periods of time, i.e., alcohol abuse, can lead to liver fibrosis followed by cirrhosis and liver cancer (Stickel et al., 2017). Alcoholic liver disease (ALD) comprises a spectrum of liver disorders ranging from fatty liver (steatosis) and fibrosis with varying degrees of inflammation to cirrhosis. Alcohol abuse contributes to almost 50% of chronic liver disease related deaths globally (Rehm and Shield, 2019). While the pathophysiology of alcohol induced cirrhosis is not completely understood, alcohol and its metabolic intermediates such as acetaldehyde are thought to play an important role in it.

Alcohol is absorbed from the duodenum and upper jejunum by simple diffusion, reaching peak blood concentration by 20 min post ingestion, after which it is quickly redistributed to vascular organs (Koob et al., 2014). Alcohol cannot be stored and must undergo obligatory oxidation, which occurs predominantly in the liver (Figure 2) (Yang et al., 2019). The first step in alcohol oxidation converts alcohol into acetaldehyde. There are three enzymes in the liver that can carry out this reaction: (i) alcohol dehydrogenase (ADH), which catalyzes the bulk of the ethanol to acetaldehyde conversion, (ii) the alcohol inducible liver cytochrome P450 CYP2E1 (the microsomal ethanol oxidizing system, or MEOS), and (iii) peroxisomal catalase.

The ethanol to acetaldehyde conversion by ADH generates NADH (Berg et al., 2002). Oxidation of large amounts of alcohol therefore leads to the accumulation of NADH, which inhibits the lactate to pyruvate conversion and promotes the reverse reaction. The lactate to pyruvate conversion is an important means of entry of lactate into gluconeogenesis. As a result, lactic acidosis and hypoglycemia may occur during excessive alcohol consumption. The NADH/NAD+ ratio also allosterically regulates fatty acid β-oxidation, which breaks down long chain acyl CoA to acetyl CoA for entry into the TCA cycle (Berg et al., 2002). Since NADH is a product of fatty acid oxidation, an increase in the NADH/NAD+ ratio provides allosteric feedback to the fatty acid β-oxidation pathway, thereby decreasing the catabolism of fatty acids and leading to their intracellular accumulation. This leads to "fatty liver." NADH also inhibits two enzymes of the TCA cycle, isocitrate dehydrogenase and α-ketoglutarate dehydrogenase, thereby decreasing the consumption of acetyl CoA by the TCA cycle and leading to an increase in intra-hepatic acetyl CoA levels. The accumulation of acetyl CoA, in turn, leads to the increased production and release of ketone bodies, exacerbating the acidosis already present in the blood due to increased levels of lactate (McGuire et al., 2006). This is known as alcoholic ketoacidosis, which is a medical emergency. At very high levels of ethanol consumption, the metabolism of acetaldehyde becomes compromised, leading to its accumulation within the hepatocytes. Acetaldehyde can irreversibly modify the functional groups of many proteins and enzymes, forming acetaldehyde adducts, which leads to a global dysfunction of hepatocytes and, eventually, to cell death (Setshedi et al., 2010).

The second major pathway for ethanol metabolism is via the inducible cytochrome P450 CYP2E1, also known as the microsomal ethanol oxidizing system (MEOS) (Lieber, 2004). This is located in the smooth endoplasmic reticulum of hepatocytes. In people with average to below average alcohol consumption, the MEOS forms a minor pathway for intracellular alcohol metabolism (Lieber, 2004).
However, it increases manifold upon chronic alcohol consumption. MEOS catalyzes a redox reaction converting molecular oxygen to water and NADPH to NADP (Figure 2). In the liver, glutathione plays an important role in maintaining the cellular redox status and participates in xenobiotic metabolism (Yuan and Kaplowitz, 2009). NADPH is essential for the regeneration of glutathione. The consumption of cellular NADPH decreases the regeneration of glutathione, thereby leading to oxidative stress. This results in cell death and inflammation, leading to alcoholic hepatitis, which can itself be fatal (Morgan, 2007). Often, these processes occur hand in hand. Cellular depletion of glutathione has an additional consequence. Glutathione is required for the detoxification of several drugs, including acetaminophen (van de Straat et al., 1987). In hepatocytes, acetaminophen is modified by CYP2E1 to form a cytotoxic metabolite known as N-acetyl-p-benzoquinone imine (NAPQI) (van de Straat et al., 1987). Conjugation of NAPQI to glutathione yields an S-glutathione product that detoxifies the molecule and allows safe excretion in the urine. However, depletion of glutathione reserves allows unconjugated NAPQI to persist in the cells, where it reacts with DNA and proteins to form adducts, thereby causing cytotoxicity and hepatocyte death (Macherey and Dansette, 2015). Long-term alcohol use induces CYP2E1 and therefore facilitates rapid NAPQI formation when the liver encounters acetaminophen. At the same time, chronic alcohol abuse leads to low glutathione reserves. The combination of these two changes makes the liver highly susceptible to acetaminophen-induced liver injury, as well as to injury from other drugs or metabolites that go through the glutathione detoxification pathway. While drug overuse is in itself a cause of liver injury, against a background of alcoholic liver disease it can lead to massive liver damage. Damage to hepatocytes due to chronic alcohol abuse, exacerbated by drug use, activates the fibrogenic pathway, leading to hepatic fibrosis, cirrhosis and hepatocellular carcinoma.

FIGURE 2. Alcohol metabolism in the liver. Three pathways are involved in alcohol metabolism and all of them converge on the oxidation of ethanol to acetaldehyde. Acetaldehyde is further converted to acetate by aldehyde dehydrogenase in the mitochondria. Acetate can be rapidly oxidized into CO2 and H2O by peripheral tissues, or can be diverted to the tricarboxylic acid (TCA) cycle. The oxidation of ethanol to acetaldehyde by the microsomal ethanol oxidation system (MEOS) occurs in the smooth endoplasmic reticulum and changes the NADPH/NADP ratio, which in turn influences the regeneration of glutathione, thereby increasing cellular oxidative stress. The alcohol dehydrogenase pathway is the major pathway and occurs in the cytosol, generating large amounts of NADH. NADH in turn inhibits TCA cycle enzymes and leads to accumulation of acetyl CoA and an increase in ketone body generation and acidosis. NADH also inhibits fatty acid oxidation, leading to accumulation of fats and causing “fatty liver.” A combination of the above factors leads to tissue injury and activation of the fibrogenic pathway.
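The three routes in Figure 2 can also be written as redox equations. The following is a textbook-level summary of the steps named above, not a set of equations reproduced from the cited sources:

```latex
% Hepatic ethanol oxidation: the three routes named above, plus the
% downstream aldehyde dehydrogenase (ALDH) step shown in Figure 2.
\begin{align}
\text{ADH:}\quad &\mathrm{CH_3CH_2OH + NAD^+ \longrightarrow CH_3CHO + NADH + H^+}\\
\text{MEOS (CYP2E1):}\quad &\mathrm{CH_3CH_2OH + NADPH + H^+ + O_2 \longrightarrow CH_3CHO + NADP^+ + 2\,H_2O}\\
\text{ALDH:}\quad &\mathrm{CH_3CHO + NAD^+ + H_2O \longrightarrow CH_3COO^- + NADH + 2\,H^+}
\end{align}
```

Written this way, the two consequences described in the text are easy to see: the ADH and ALDH steps each reduce NAD+ to NADH, which is why heavy drinking raises the NADH/NAD+ ratio (pushing pyruvate toward lactate and inhibiting β-oxidation), while the MEOS step consumes the NADPH needed to regenerate glutathione.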

Drugs

Drugs induce hepatic fibrosis by causing drug-induced liver injury (DILI), which initiates fibrogenic tissue repair mechanisms. While the prevalence of DILI is lower than that of other causes of liver injury, such as alcohol, hepatitis or steatosis, it can lead to life-threatening complications. DILI can be of two types: (i) intrinsic (due to injury caused by a known on-target drug) or (ii) idiosyncratic (due to injury caused by an unknown factor that cannot be explained by known pharmacological elements, e.g., herbal preparations of unknown composition) (DiPaola and Fontana, 2018). Among intrinsic causes, acetaminophen-induced DILI is the most common. As described above, acetaminophen overload combined with alcohol abuse can exacerbate the liver injury that would occur due to either alcohol or acetaminophen alone. A major function of the liver is detoxification of xenobiotic compounds that enter our circulation either through diet or through intravenous drug usage. Detoxification mechanisms in the liver mainly involve the cytochrome P450 family (CYP gene families CYP1, CYP2, CYP3) (McDonnell and Dang, 2013) (Figure 3). Cytochrome P450s are a group of heme proteins that are involved in the initial detoxification reactions of small molecules such as dietary and physiological metabolites, as well as drugs (Zanger and Schwab, 2013; Todorovic Vukotic et al., 2021). The expression of the CYP genes is influenced by several factors such as age, sex, promoter polymorphisms, cytokines, xenobiotic compounds and hormones, to name a few (Zanger and Schwab, 2013). Cytochrome P450s mainly carry out monooxygenation reactions, oxidizing drugs and xenobiotic compounds. This can convert the molecule into either an inert or a bioactive product.

FIGURE 3. Metabolism of drugs and other xenobiotics in the liver. Drug and xenobiotic metabolism occurs in two phases: (i) phase I is catalyzed by the cytochrome P450 family of monooxygenases which metabolize ingested small molecules to form inert or bioactive metabolic intermediates. (ii) These intermediates are further catalyzed in phase II reactions to form soluble polar compounds that can be further excreted through urine or bile. Accumulation of bioactive drug or xenobiotic intermediates can lead to the formation of protein or nucleic acid adducts causing autoimmune reaction, carcinogenesis or direct cellular injury.

Bioactive compounds can covalently modify intracellular proteins, leading to direct cellular injury, carcinogenesis or production of hapten-protein conjugates that can lead to antibody-mediated cytotoxicity (Figure 3). Although the classical view of DILI is that drugs become hepatotoxic as a consequence of their metabolism or of defects in it, several factors may influence the final outcome of drug intake, such as age, gender, comorbidities, intake of alcohol, other drugs or herbal preparations, and polymorphisms of the CYP genes (Tarantino et al., 2009). The exact mechanism of DILI in specific cases depends on the nature of the molecule and its CYP-transformed metabolites. Drug metabolism can generate free radicals or electrophiles that are chemically reactive. These can deplete reduced glutathione, form protein, lipid or nucleic acid adducts, and cause lipid peroxidation. Unless these metabolic intermediates are rapidly neutralized through phase II reactions, they can contribute to cellular stress and injury (Figure 3). They can also modulate signaling pathways, induce transcription factors, and alter gene expression profiles. In the liver, accumulation of large quantities of reactive drug metabolites can lead to hepatocellular injury, formation of protein adducts that can act as haptens and stimulate production of auto-antibodies, or promote cellular transformation. Cellular injury then leads to induction of fibrogenic responses as described above.
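The two-phase logic of Figure 3 can be compressed into a schematic, with acetaminophen from the alcohol section as the worked case. This is a paraphrase of the text above, not a scheme reproduced from the cited papers:

```latex
% Generic two-phase xenobiotic metabolism, per Figure 3
\[
\text{drug} \xrightarrow{\;\text{phase I: CYP450 monooxygenation}\;} \text{reactive intermediate} \xrightarrow{\;\text{phase II: conjugation}\;} \text{polar conjugate (excreted in urine or bile)}
\]
% Worked case from the text: acetaminophen
\[
\text{acetaminophen} \xrightarrow{\;\mathrm{CYP2E1}\;} \text{NAPQI} \xrightarrow{\;\text{glutathione conjugation}\;} \text{S-glutathione conjugate (urine)}
\]
```

When glutathione is depleted, as in chronic alcohol use, the phase II arm stalls and the reactive intermediate accumulates, which is the failure mode described above.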

Cholestasis

Cholestasis is emerging as a leading cause of liver injury and fibrosis. Cholestatic liver diseases can occur due to primary biliary cirrhosis and primary sclerosing cholangitis and involve injury to the intra- and extra-hepatic biliary tree (Penz-Österreicher et al., 2011). The pathogenesis of cholestasis is unclear but is believed to have an autoimmune component (Karlsen et al., 2017). In primary sclerosing cholangitis (PSC), several strictures appear around the bile ducts and cause bile duct injury. This activates the portal fibroblasts around the bile duct, which then differentiate into collagen-secreting myofibroblasts (MFBs) similar to those derived from HSC activation. Recent studies have shown that fibrogenic MFBs have inherent heterogeneity and can be derived from both HSCs and portal fibroblasts (Karlsen et al., 2017). PSC has been shown to be associated with a varied manifestation of other diseases such as inflammatory bowel disease, cholangiocarcinoma, high IgG4 levels, autoimmune hepatitis and colonic neoplasia (Wee et al., 1985; Broomé et al., 1992; Perdigoto et al., 1992; Siqueira et al., 2002; Mendes et al., 2006; Berntsen et al., 2015). Due to its association with autoimmune responses, PSC is thought to involve a genetic predisposition that is activated by an as yet unidentified environmental trigger, such as gut dysbiosis (Rossen et al., 2015). Although PSC is traditionally recognized as a rare disease, its incidence is on the rise, presumably reflecting an increase in such environmental triggers (Karlsen et al., 2017). The pathogenesis of PSC is therefore varied, and injury to the bile ducts can occur through multiple pathways. However, the resultant bile duct injury leads to activation of the portal fibroblasts and consequent fibrogenesis.

Metabolic Disorders: Non-alcoholic Fatty Liver Disease and Non-alcoholic Steatohepatitis

The metabolic syndrome is a group of associated conditions that increase cardiovascular risk and are linked with obesity and type 2 diabetes mellitus (Rosselli et al., 2014). Liver manifestations of the metabolic syndrome result in NAFLD (Rosselli et al., 2014). NAFLD is attaining epidemic proportions all over the world. The global prevalence of NAFLD is about 25%, and in obese individuals or in the presence of type 2 diabetes mellitus it increases to about 60% (Younossi et al., 2016). NAFLD is linked to increased risk of hepatic fibrosis, hepatocellular carcinoma and mortality due to cardiovascular disease. The more severe subtype of NAFLD is NASH, which has a global prevalence of about 2–6% and which is associated with severe hepatic inflammation and fibrosis leading to cirrhosis and HCC, as well as end stage liver disease (Younossi et al., 2016; Younossi et al., 2019). Recently reported trends in the incidence of NAFLD over time suggest that NAFLD will become the leading cause of end stage liver disease in the decades to come. Emerging data from India suggest that the national prevalence of NAFLD is about 9–32% in the general population and about 53% in obese individuals (Kalra et al., 2013; Duseja, 2010). Therefore, NAFLD is a global clinical concern. The molecular pathogenesis of NAFLD is complex. However, all pathways in NAFLD converge on the conversion of HSCs into profibrogenic MFBs through the activation of the TGF-β pathway (Buzzetti et al., 2016) (Figure 4). TGF-β is a pleiotropic cytokine involved in various cellular processes such as cell proliferation, survival, angiogenesis, differentiation, and the wound healing response (Mantel and Schmidt-Weber, 2011). TGF-β binds to the TGF-β receptor type II, which in turn phosphorylates TGF-β receptor type I, thereby recruiting and phosphorylating the intracellular signal transducer proteins belonging to the SMAD superfamily. The SMAD superfamily is composed of intracellular signal transducers that specifically respond to TGF-β receptor modulation. Phosphorylated SMADs subsequently translocate into the nucleus and control the expression of TGF-β regulated target genes (Mantel and Schmidt-Weber, 2011) (Figure 4). The activation of HSCs via TGF-β plays a major role in advanced NAFLD, both in experimental animal models and in human liver injury (Yang et al., 2014). In addition to HSC activation, TGF-β signaling followed by SMAD phosphorylation is known to cause hepatocyte death, driving progression to NASH (Yang et al., 2017). Hepatocyte death via TGF-β signaling is accompanied by generation of reactive oxygen species as well as lipid accumulation in hepatocytes (Yang et al., 2017). Activation of the TGF-β pathway also leads to HSC differentiation into MFBs, leading to formation of fibrillar collagen and exacerbating the combined effects of hepatocyte injury, fibrosis and inflammation, leading to NASH (Yang et al., 2014). While the TGF-β pathway is central to liver fibrogenesis, emerging proteome and transcriptome studies have suggested additional regulatory genes and pathways. These studies have been carried out in animal models of NAFLD or NASH and in human liver biopsies obtained from patients. Comparative transcriptomic studies between mouse models of NAFLD and human liver biopsies obtained from NASH patients reveal major differences between the human NASH liver transcriptome and mouse NAFLD transcriptomes, even at severe stages (Teufel et al., 2016).
This suggests major pathophysiological differences between human disease and animal models of the disease, and the need to design studies in humanized models of disease or in liver organoid systems (Suppli et al., 2019). A meta-analysis of transcriptomic studies carried out with human liver biopsies suggests the upregulation of several genes within the lipogenesis pathway (Table 2). Interestingly, genes such as ACACA (acetyl-CoA carboxylase 1), which catalyzes the synthesis of malonyl CoA from acetyl CoA, the rate-limiting step in fatty acid biosynthesis, and ACACB (acetyl-CoA carboxylase 2), which regulates fatty acid oxidation, are associated with NAFLD liver tissue, demonstrating the association of lipogenic functions within the tissue with active disease (Table 2) (Widmer et al., 1996; Locke et al., 2008). In several cases, NAFLD has been shown to be linked to progression toward hepatocellular carcinoma. Recent studies have led to the understanding that the evolution of NAFLD to NASH and HCC is multifactorial and involves the innate immune system to a great extent (Chen et al., 2019). Lipid accumulation and mitochondrial dysfunction have been identified as critical components of the pathways leading to NAFLD (Margini and Dufour, 2016). Many new genes and pathways have been implicated at every stage of NAFLD to NASH to HCC progression (Figure 5). Altered PPAR-γ, insulin, and p53-mediated signaling has been implicated in NAFLD development, whereas signatures of inflammatory signaling such as Toll-like receptor (TLR) and nucleotide-binding oligomerization domain (NOD) protein signaling pathways, in addition to pathways reflecting mitochondrial dysfunction, characterize NASH (Figure 5) (Ryaboshapkina and Hammar, 2017).

FIGURE 4. The TGF-β signaling pathway in hepatic stellate cells. TGF-β binds to the type II TGF-β receptor, leading to receptor dimerization, i.e., recruitment of the type I TGF-β receptor. The kinase domain of the type II TGF-β receptor then phosphorylates a Ser residue of the type I TGF-β receptor. The phosphorylated receptor now recruits R-SMAD, which binds to the receptor through its N-terminal region and is phosphorylated by the type I receptor. The C-terminus of R-SMAD has a DNA-binding domain (DBD) through which it can act as a transcription factor. The co-SMAD now binds to R-SMAD, and importin-β binds to the dimer, forming an oligomeric complex that guides the R-SMAD and co-SMAD into the nucleus. The dimer enters the nucleus, and the DBD of SMAD now acts as a transcription factor that can transcribe target genes.
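For readers who prefer an explicit ordering, the receptor-to-nucleus relay in Figure 4 can be written out as a simple sequence. The step wording below paraphrases the text and figure legend; it is a schematic listing, not an executable model of the pathway:

```python
# Ordered summary of the TGF-beta -> SMAD relay described in Figure 4.
# Step names paraphrase the text above; illustrative only.
TGF_BETA_SMAD_RELAY = [
    "TGF-beta binds the type II TGF-beta receptor",
    "Type II receptor recruits and phosphorylates the type I receptor",
    "Activated type I receptor recruits and phosphorylates R-SMAD",
    "R-SMAD binds co-SMAD; importin-beta binds the complex",
    "Complex translocates into the nucleus",
    "SMAD complex drives transcription of TGF-beta target genes",
]

for i, step in enumerate(TGF_BETA_SMAD_RELAY, start=1):
    print(f"{i}. {step}")
```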

TABLE 2. Summary of pathways from transcriptomics analyses implicated in NAFLD

FIGURE 5. Summary of pathways that may be important in the progression of NAFLD to NASH. The transition from healthy to NAFLD involves the activation of peroxisome proliferator activated receptor signaling, insulin signaling and p53 signaling whereas the switch to NASH involves activation of inflammatory pathways such as TLR and NOD like receptor mediated signaling, generation of intracellular oxidative stress and mitochondrial signaling.

There are only a limited number of proteomics studies in human NAFLD. A comparative quantitative proteomics study between NAFLD and Metabolic Healthy Obese (MHO) individuals was carried out using liver tissue obtained during surgery (Yuan et al., 2020). This study demonstrated the relevance of PPAR signaling, ECM-receptor interaction and oxidative phosphorylation in resisting NAFLD. Proteins upregulated in NAFLD were involved in organization of the ECM, and proteins downregulated in NAFLD were involved in redox processes. A schematic of pathways relevant in NAFLD progression, as gleaned from various “omics” approaches is summarized in Figure 5.

Viral Hepatitis

In older children, autoimmune hepatitis and viral hepatitis are the leading causes of liver fibrosis followed by cirrhosis. Viral hepatitis can be caused by any one of five viruses: hepatitis A, B, C, D, and E, of which A and E are usually acute, while B, C, and D are chronic (Zuckerman, 1996). All hepatitis viruses are infectious, while alcohol, other toxins and autoimmune-mediated hepatitis are non-infectious. HBV and HCV lead to hepatic inflammation (Gutierrez-Reyes et al., 2007). Several viral components are known to induce cellular damage in hepatocytes and liver constituents. For instance, the HCV core protein in chronic infections is known to interact with the TNF-α receptor (TNFRSF1A), which subsequently induces a pro-apoptotic signal in hepatocytes (Zhu et al., 1998). Polymorphisms in TNFRSF1A have been shown to be associated with HCV outcomes (Yue et al., 2021). The HCV core protein is also known to interact with ApoA1 and ApoA2, thereby interfering with the assembly and secretion of very low density lipoprotein (VLDL), thus causing the accumulation of triglycerides in the liver through the interaction of both viral and metabolic factors, and subsequent cell death (Gutierrez-Reyes et al., 2007). Furthermore, the viral core protein as well as the HCV non-structural protein 5A (NS5A) are known to cause mitochondrial ROS production and cellular stress leading to cell death (Bataller et al., 2004). Interestingly, HCV and NAFLD can co-exist and have been shown to have a more rapid disease progression than either disease alone (Patel and Harrison, 2012; Dyson et al., 2014). About 50% of HCV patients have steatosis with significant fibrosis, and HCV genotype 3 is mainly associated with steatosis; however, the exact mechanism leading to steatosis in HCV patients has not been fully elucidated.

The association of hepatitis B virus (HBV) infection with NAFLD, however, appears to be controversial. Some studies suggest that HBV infection is protective against steatosis, insulin resistance and metabolic syndrome (Morales et al., 2017; Xiong et al., 2017), while others suggest that chronic HBV infections can co-exist with NAFLD and can actively worsen the disease (Zhang et al., 2020). The presence of hepatitis B protein X (HBx) in cells has been shown to increase the production of reactive oxygen species (ROS), increasing the formation of lipids in the cells; HBx could therefore be a risk factor for the development of NAFLD (Wang et al., 2019). Thus, an alternative mechanism by which hepatitis viruses can induce fibrosis is through their ability to cause NAFLD.

Parasitic Infections

The liver is capable of hosting a wide range of parasites, which vary in host-cell requirement (extracellular or intracellular), size (unicellular to multicellular) and potential harm to the host cells or organs (Dunn, 2011). Parasites that have co-evolved with humans over centuries, such as the malaria parasites, cause minimal injury to the host liver and move on to the blood with ease (Acharya et al., 2017). However, some parasites can injure the cells of the liver and trigger the activation of the fibrogenic pathway. Some of these parasites are discussed below:

Leishmania is an intracellular protozoan parasite that infects the reticuloendothelial system (RES) in the body, i.e., circulating monocytes as well as tissue-resident macrophages (Magill et al., 1993). Leishmaniasis is transmitted by the bite of infected sandflies (Dunn, 2011). Visceral leishmaniasis (kala-azar) involves RES infection of the visceral organs such as the liver, spleen, bone marrow and other lymph nodes. Kupffer cells, the tissue-resident macrophages of the liver, take up the amastigote stage of Leishmania from circulating infected reticuloendothelial cells. The parasite then replicates within the macrophages and activates the host inflammatory and Th1 and Th17 mediated adaptive immune responses in immunocompetent individuals (Pitta et al., 2009). Leishmaniasis is typically associated with increased liver fibrosis (Melo et al., 2009). Leishmania parasites have been shown to use host ECM components such as fibronectin and laminin to access Kupffer cells for infection (Wyler et al., 1985; Wyler, 1987; Vannier-Santos et al., 1992; Figueira et al., 2015). Visceral leishmaniasis has frequently been studied in dogs as a model system. These studies suggest that dogs infected with Leishmania have significantly higher levels of collagen and fibronectin deposition (Melo et al., 2009). Intra-lobular collagen deposition, appearance of MFBs and effacement of the space of Disse are characteristic of overt Leishmania infection in slightly or severely immunocompromised individuals (Dunn, 2011). Leishmania-associated fibrosis is completely reversible once the parasitic infection has been treated. However, since overt disease and severe fibrosis usually occur in immunocompromised individuals, such as those infected with HIV, relapses typically occur once treatment ceases (Dunn, 2011).

Schistosomiasis is caused by Schistosoma species, a group of blood flukes belonging to the trematode or flatworm family (Andrade, 2009). It is prevalent mainly in the tropical and sub-tropical regions of the world. Schistosoma use freshwater snails as intermediate hosts; infected snails release larval forms of the parasite into water bodies, which then come into contact with humans and infect them (World Health Organization, 2021). Schistosomiasis can be intestinal (wherein the liver is involved) or urogenital. Intestinal schistosomiasis can be caused by several species, such as Schistosoma mansoni (found in Africa, the Middle East, the Caribbean, Brazil, Venezuela and Suriname), S. japonicum (found in China, Indonesia and the Philippines), S. mekongi (Cambodia and the Lao People's Democratic Republic), and S. guineensis and S. intercalatum (found in the rain forests of central Africa). Urogenital infection is caused by S. hematobium (found in Africa, the Middle East and Corsica in France) (World Health Organization, 2021).

Schistosoma mansoni is associated with liver fibrosis (Andrade, 2009). Schistosome eggs are carried to the liver by the portal vein and lodge in the pre-sinusoidal vessels (Andrade, 2004). The development of severe schistosomiasis is thought to have two components: (a) the major determinant is a high worm load and (b) a secondary determinant is thought to be genetic predisposition. At low to moderate worm loads, many patients are asymptomatic and the lesions heal spontaneously due to the appropriate activation of T-cell mediated host immune responses (Andrade, 2004). A high worm load is also associated with damage to the portal vein and the appearance of MFBs and collagen deposition around the portal stem, leading to a form of portal fibrosis called pipestem fibrosis (Andrade et al., 1999). Since not all infected individuals develop severe liver disease or liver fibrosis, the development of schistosomiasis-linked liver fibrosis is also thought to have a genetic component. A meta-analysis of genetic polymorphisms associated with severe liver disease and fibrosis in schistosomiasis reveals several such polymorphisms (Dessein et al., 2020). Several polymorphisms in genes related to the TGF-β pathway were found to be associated with severe fibrosis in schistosomiasis, e.g., TGFBR1, TGFBR2, ACVRL1, SMAD3 and SMAD9 (Dessein et al., 2020). Polymorphisms in the connective tissue growth factor (CTGF) as well as the IL-22 pathway were also observed. In addition, associations between severe hepatic fibrosis during schistosomiasis and genes encoding IL-13, TNF-α, MAPKAP1, ST2, IL-10, MICA, HLA-DRB1, IL-4, ECP, and IFN-γ have been reported from various studies (Hirayama et al., 1998; Chevillard et al., 2003; Eriksson et al., 2007; Gong et al., 2012; Silva et al., 2014; Zhu et al., 2014; Long et al., 2015; Oliveira et al., 2015; Long et al., 2017; Silva et al., 2017). These observations suggest that while infectious agents such as Schistosoma can drive hepatic fibrosis by mediating tissue damage, genetic predisposition to TGF-β pathway activation or a specific inflammatory response may make the hepatic environment conducive to fibrosis in the presence of an infectious agent.

Fasciola hepatica, also known as the liver fluke, is a trematode parasite that infects humans (Machicado et al., 2016). Fascioliasis is a neglected tropical disease. A recent meta-analysis has found an association of Fasciola infections with liver fibrosis, cirrhosis and hepatocellular carcinoma (Machicado et al., 2016). The development of fibrosis is thought to be due to the activation of HSCs by parasite-encoded cathepsins (Marcos et al., 2011). As with Schistosoma, worm load seems to be an important determinant of fibrosis. However, only a limited number of studies are available on the pathogenesis, molecular epidemiology and prevalence of fascioliasis-associated liver fibrosis, and this area needs further investigation.

Cryptogenic Causes

Cryptogenic causes of liver fibrosis are cases without an identified cause, but it is believed that a high proportion of cryptogenic liver fibrosis cases could be linked to NAFLD or NASH (Caldwell, 2010; Patel et al., 2020). Other causes could include occult alcohol intake, viral hepatitis, autoimmune hepatitis, biliary disease, vascular disease, celiac disease, mitochondriopathies, systemic lupus erythematosus, Alstrom syndrome, Apolipoprotein B with LDL cholesterol, and genetic disorders such as short telomere syndrome, keratin 18 mutations and glutathione-S-transferase mutations (Caldwell, 2010; Patel et al., 2020).


How Your Body Controls Heart Rate and Blood Pressure - How the Heart Works

How fast and hard your heart beats is controlled by signals from your body’s nervous system, as well as by hormones from your endocrine system. These signals and hormones allow you to adapt to changes in the amount of oxygen and nutrients your body needs. For example, when you exercise, your muscles need more oxygen, so your heart beats faster. When you sleep, your heart beats slower.

Your blood pressure is the force of the blood pushing against the walls of your arteries as the heart pumps blood. It is made up of two numbers: systolic and diastolic.

  • Systolic pressure is the pressure when the ventricles pump blood out of the heart. The pressure on your arteries is highest during this time.
  • Diastolic pressure is the pressure between beats, when the heart is filling with blood. The pressure on your arteries is lowest during this time.

For most adults, healthy blood pressure is usually less than 120 over 80, which is written as your systolic pressure number over your diastolic pressure number.
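For context, clinicians often summarize these two numbers as a single mean arterial pressure (MAP). A common resting approximation (a standard clinical rule of thumb, not part of this article) weights the diastolic number more heavily because the heart spends more time filling than pumping:

```latex
\[
\mathrm{MAP} \approx P_{\text{diastolic}} + \tfrac{1}{3}\,\bigl(P_{\text{systolic}} - P_{\text{diastolic}}\bigr)
\]
```

For a reading of 120 over 80, this gives roughly 80 + 40/3, or about 93 mmHg.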

High blood pressure is what happens when blood flows through blood vessels at higher-than-normal pressures.

Your heart rate is controlled by the autonomic nervous system, also called the involuntary nervous system because it works without your thinking about it. There are two opposing effects of the autonomic nervous system on your heart.

  • The parasympathetic system tells your heart to beat slower during rest.
  • The sympathetic system tells your heart to beat faster. This is the “fight-or-flight response.” When activated, it releases a chemical signal called norepinephrine that causes the heart to beat faster. Norepinephrine also signals the muscle in your heart to beat harder.

In a healthy person, the heart rate reflects a balance between these two systems.

A number of hormones from the endocrine system affect your heart and blood vessels.

Low levels of the hormone epinephrine, also called adrenaline, cause blood vessels to relax and widen. High levels of this same hormone, along with the hormone norepinephrine, cause the blood vessels to narrow and the heart rate to rise, increasing blood pressure.

Hormones also control how much water and salt your kidneys remove from your blood to excrete as urine. When your blood volume is low, such as when you are losing blood, certain hormones prevent water loss to help maintain your blood volume and blood pressure. The hormones also cause the blood vessels to narrow to maintain blood pressure. These hormones include:

  • The renin-angiotensin-aldosterone system, which can also cause the muscle cells in the heart to grow larger so they can pump harder.
  • Vasopressin, released from the pituitary gland.

Some hormones cause the kidneys to remove more water and salt from the blood. The decreased blood volume and salt cause your blood vessels to relax and lower your blood pressure. Atrial natriuretic peptide is a hormone made and released by heart cells when the pressure inside the atria is elevated.

The thyroid gland releases thyroid hormones that increase the heart rate. Problems with your thyroid gland can lead to heart problems such as an irregular heartbeat. Too much thyroid hormone can cause the heart to beat faster. Too little thyroid hormone can slow your heart rate.


Leaves

Leaves are the main sites for photosynthesis: the process by which plants synthesize food. Most leaves are green, due to the presence of chlorophyll in the leaf cells. However, some leaves may have different colors, caused by other plant pigments that mask the green chlorophyll.

The thickness, shape, and size of leaves are adapted to the environment. Each variation helps a plant species maximize its chances of survival in a particular habitat. The leaves of plants growing in tropical rainforests usually have larger surface areas than those of plants growing in deserts or very cold conditions, where a smaller surface area helps minimize water loss.

Structure of a Typical Leaf

Figure 13. Deceptively simple in appearance, a leaf is a highly efficient structure.

Each leaf typically has a leaf blade called the lamina, which is also the widest part of the leaf. Some leaves are attached to the plant stem by a petiole. Leaves that do not have a petiole and are directly attached to the plant stem are called sessile leaves. Small green appendages usually found at the base of the petiole are known as stipules. Most leaves have a midrib, which travels the length of the leaf and branches to each side to produce veins of vascular tissue. The edge of the leaf is called the margin. Figure 13 shows the structure of a typical eudicot leaf.

Within each leaf, the vascular tissue forms veins. The arrangement of veins in a leaf is called the venation pattern. Monocots and dicots differ in their patterns of venation (Figure 14). Monocots have parallel venation: the veins run in straight lines across the length of the leaf without converging at a point. In dicots, however, the veins of the leaf have a net-like appearance, forming a pattern known as reticulate venation. One extant plant, Ginkgo biloba, has dichotomous venation, where the veins fork.

Figure 14. (a) Tulip (Tulipa), a monocot, has leaves with parallel venation. The netlike venation in this (b) linden (Tilia cordata) leaf distinguishes it as a dicot. The (c) Ginkgo biloba tree has dichotomous venation. (credit a photo: modification of work by “Drewboy64”/Wikimedia Commons; credit b photo: modification of work by Roger Griffith; credit c photo: modification of work by “geishaboy500″/Flickr; credit a, b, c illustrations: modification of work by Agnieszka Kwiecień)

Leaf Arrangement

The arrangement of leaves on a stem is known as phyllotaxy. The number and placement of a plant’s leaves will vary depending on the species, with each species exhibiting a characteristic leaf arrangement. Leaves are classified as either alternate, spiral, or opposite. Plants that have only one leaf per node have leaves that are said to be either alternate—meaning the leaves alternate on each side of the stem in a flat plane—or spiral, meaning the leaves are arrayed in a spiral along the stem. In an opposite leaf arrangement, two leaves arise at the same point, with the leaves connecting opposite each other along the branch. If there are three or more leaves connected at a node, the leaf arrangement is classified as whorled.

Leaf Form

Leaves may be simple or compound (Figure 15). In a simple leaf, the blade is either completely undivided—as in the banana leaf—or it has lobes, but the separation does not reach the midrib, as in the maple leaf. In a compound leaf, the leaf blade is completely divided, forming leaflets, as in the locust tree. Each leaflet may have its own stalk, but is attached to the rachis. A palmately compound leaf resembles the palm of a hand, with leaflets radiating outwards from one point. Examples include the leaves of poison ivy, the buckeye tree, or the familiar houseplant Schefflera sp. (common name “umbrella plant”). Pinnately compound leaves take their name from their feather-like appearance; the leaflets are arranged along the midrib, as in rose leaves (Rosa sp.), or the leaves of hickory, pecan, ash, or walnut trees.

Figure 15. Leaves may be simple or compound. In simple leaves, the lamina is continuous. The (a) banana plant (Musa sp.) has simple leaves. In compound leaves, the lamina is separated into leaflets. Compound leaves may be palmate or pinnate. In (b) palmately compound leaves, such as those of the horse chestnut (Aesculus hippocastanum), the leaflets branch from the petiole. In (c) pinnately compound leaves, the leaflets branch from the midrib, as on a scrub hickory (Carya floridana). The (d) honey locust has double compound leaves, in which leaflets branch from the veins. (credit a: modification of work by “BazzaDaRambler”/Flickr; credit b: modification of work by Roberto Verzo; credit c: modification of work by Eric Dion; credit d: modification of work by Valerie Lykes)

Leaf Structure and Function

The outermost layer of the leaf is the epidermis; it is present on both sides of the leaf and is called the upper and lower epidermis, respectively. Botanists call the upper side the adaxial surface (or adaxis) and the lower side the abaxial surface (or abaxis). The epidermis helps in the regulation of gas exchange. It contains stomata (Figure 16): openings through which the exchange of gases takes place. Two guard cells surround each stoma, regulating its opening and closing.

Figure 16. Visualized at 500x with a scanning electron microscope, several stomata are clearly visible on (a) the surface of this sumac (Rhus glabra) leaf. At 5,000x magnification, the guard cells of (b) a single stoma from lyre-leaved sand cress (Arabidopsis lyrata) have the appearance of lips that surround the opening. In this (c) light micrograph cross-section of an A. lyrata leaf, the guard cell pair is visible along with the large, sub-stomatal air space in the leaf. (credit: modification of work by Robert R. Wise; part c scale-bar data from Matt Russell)

The epidermis is usually one cell layer thick; however, in plants that grow in very hot or very cold conditions, the epidermis may be several layers thick to protect against excessive water loss from transpiration. A waxy layer known as the cuticle covers the leaves of all plant species. The cuticle reduces the rate of water loss from the leaf surface. Other leaves may have small hairs (trichomes) on the leaf surface. Trichomes help to deter herbivory by restricting insect movements, or by storing toxic or bad-tasting compounds; they can also reduce the rate of transpiration by blocking air flow across the leaf surface (Figure 17).

Figure 17. Trichomes give leaves a fuzzy appearance as in this (a) sundew (Drosera sp.). Leaf trichomes include (b) branched trichomes on the leaf of Arabidopsis lyrata and (c) multibranched trichomes on a mature Quercus marilandica leaf. (credit a: John Freeland; credit b, c: modification of work by Robert R. Wise; scale-bar data from Matt Russell)

Below the epidermis of dicot leaves are layers of cells known as the mesophyll, or “middle leaf.” The mesophyll of most leaves typically contains two arrangements of parenchyma cells: the palisade parenchyma and spongy parenchyma (Figure 18). The palisade parenchyma (also called the palisade mesophyll) has column-shaped, tightly packed cells, and may be present in one, two, or three layers. Below the palisade parenchyma are loosely arranged cells of an irregular shape. These are the cells of the spongy parenchyma (or spongy mesophyll). The air space found between the spongy parenchyma cells allows gaseous exchange between the leaf and the outside atmosphere through the stomata. In aquatic plants, the intercellular spaces in the spongy parenchyma help the leaf float. Both layers of the mesophyll contain many chloroplasts. Guard cells are the only epidermal cells to contain chloroplasts.

In the leaf drawing (Figure 18a), the central mesophyll is sandwiched between an upper and lower epidermis. The mesophyll has two layers: an upper palisade layer composed of tightly packed, columnar cells, and a lower spongy layer composed of loosely packed, irregularly shaped cells. Stomata on the leaf underside allow gas exchange. A waxy cuticle covers all aerial surfaces of land plants to minimize water loss. These leaf layers are clearly visible in the scanning electron micrograph (Figure 18b). The numerous small bumps in the palisade parenchyma cells are chloroplasts. Chloroplasts are also present in the spongy parenchyma, but are not as obvious. The bumps protruding from the lower surface of the leaf are glandular trichomes, which differ in structure from the stalked trichomes in Figure 17.

Figure 18. (a) Leaf drawing (b) Scanning electron micrograph of a leaf. (credit b: modification of work by Robert R. Wise)

Figure 19. This scanning electron micrograph shows xylem and phloem in the leaf vascular bundle from the lyre-leaved sand cress (Arabidopsis lyrata). (credit: modification of work by Robert R. Wise scale-bar data from Matt Russell)

Like the stem, the leaf contains vascular bundles composed of xylem and phloem (Figure 19). The xylem consists of tracheids and vessels, which transport water and minerals to the leaves. The phloem transports the photosynthetic products from the leaf to the other parts of the plant. A single vascular bundle, no matter how large or small, always contains both xylem and phloem tissues.

Leaf Adaptations

Coniferous plant species that thrive in cold environments, like spruce, fir, and pine, have leaves that are reduced in size and needle-like in appearance. These needle-like leaves have sunken stomata and a smaller surface area: two attributes that aid in reducing water loss. In hot climates, plants such as cacti have leaves that are reduced to spines, which in combination with their succulent stems, help to conserve water. Many aquatic plants have leaves with wide lamina that can float on the surface of the water, and a thick waxy cuticle on the leaf surface that repels water.

Watch “The Pale Pitcher Plant” episode of the video series Plants Are Cool, Too, a Botanical Society of America video about a carnivorous plant species found in Louisiana.


In Summary: Leaves

Leaves are the main site of photosynthesis. A typical leaf consists of a lamina (the broad part of the leaf, also called the blade) and a petiole (the stalk that attaches the leaf to a stem). The arrangement of leaves on a stem, known as phyllotaxy, enables maximum exposure to sunlight. Each plant species has a characteristic leaf arrangement and form. The pattern of leaf arrangement may be alternate, opposite, or spiral, while leaf form may be simple or compound. Leaf tissue consists of the epidermis, which forms the outermost cell layer, and mesophyll and vascular tissue, which make up the inner portion of the leaf. In some plant species, leaf form is modified to form structures such as tendrils, spines, bud scales, and needles.



The many types of pancreatic cancer can be divided into two general groups. The vast majority of cases (about 95%) occur in the part of the pancreas that produces digestive enzymes, known as the exocrine component. Several subtypes of exocrine pancreatic cancers are described, but their diagnosis and treatment have much in common. The small minority of cancers that arise in the hormone-producing (endocrine) tissue of the pancreas have different clinical characteristics and are called pancreatic neuroendocrine tumors, sometimes abbreviated as "PanNETs". Both groups occur mainly (but not exclusively) in people over 40, and are slightly more common in men, but some rare subtypes mainly occur in women or children. [18] [19]

Exocrine cancers

The exocrine group is dominated by pancreatic adenocarcinoma (variations of this name may add "invasive" and "ductal"), which is by far the most common type, representing about 85% of all pancreatic cancers. [2] Nearly all these start in the ducts of the pancreas, as pancreatic ductal adenocarcinoma (PDAC). [20] This is despite the fact that the tissue from which it arises – the pancreatic ductal epithelium – represents less than 10% of the pancreas by cell volume, because it constitutes only the ducts (an extensive but capillary-like duct-system fanning out) within the pancreas. [21] This cancer originates in the ducts that carry secretions (such as enzymes and bicarbonate) away from the pancreas. About 60–70% of adenocarcinomas occur in the head of the pancreas. [2]

The next-most common type, acinar cell carcinoma of the pancreas, arises in the clusters of cells that produce these enzymes, and represents 5% of exocrine pancreas cancers. [22] Like the 'functioning' endocrine cancers described below, acinar cell carcinomas may cause over-production of certain molecules, in this case digestive enzymes, which may cause symptoms such as skin rashes and joint pain.

Cystadenocarcinomas account for 1% of pancreatic cancers, and they have a better prognosis than the other exocrine types. [22]

Pancreatoblastoma is a rare form, mostly occurring in childhood, and with a relatively good prognosis. Other exocrine cancers include adenosquamous carcinomas, signet ring cell carcinomas, hepatoid carcinomas, colloid carcinomas, undifferentiated carcinomas, and undifferentiated carcinomas with osteoclast-like giant cells. Solid pseudopapillary tumor is a rare low-grade neoplasm that mainly affects younger women, and generally has a very good prognosis. [2] [23]

Pancreatic mucinous cystic neoplasms are a broad group of pancreas tumors that have varying malignant potential. They are being detected at a greatly increased rate as CT scans become more powerful and common, and discussion continues as to how best to assess and treat them, given that many are benign. [24]

Neuroendocrine

The small minority of tumors that arise elsewhere in the pancreas are mainly pancreatic neuroendocrine tumors (PanNETs). [25] Neuroendocrine tumors (NETs) are a diverse group of benign or malignant tumors that arise from the body's neuroendocrine cells, which are responsible for integrating the nervous and endocrine systems. NETs can start in most organs of the body, including the pancreas, where the various malignant types are all considered to be rare. PanNETs are grouped into 'functioning' and 'nonfunctioning' types, depending on the degree to which they produce hormones. The functioning types secrete hormones such as insulin, gastrin, and glucagon into the bloodstream, often in large quantities, giving rise to serious symptoms such as low blood sugar, but also favoring relatively early detection. The most common functioning PanNETs are insulinomas and gastrinomas, named after the hormones they secrete. The nonfunctioning types do not secrete hormones in a sufficient quantity to give rise to overt clinical symptoms, so nonfunctioning PanNETs are often diagnosed only after the cancer has spread to other parts of the body. [26]

As with other neuroendocrine tumors, the history of the terminology and classification of PanNETs is complex. [25] PanNETs are sometimes called "islet cell cancers", [27] though they are now known to not actually arise from islet cells as previously thought. [26]

Since pancreatic cancer usually does not cause recognizable symptoms in its early stages, the disease is typically not diagnosed until it has spread beyond the pancreas itself. [4] This is one of the main reasons for the generally poor survival rates. Exceptions to this are the functioning PanNETs, where over-production of various active hormones can give rise to symptoms (which depend on the type of hormone). [28]

Bearing in mind that the disease is rarely diagnosed before the age of 40, common symptoms of pancreatic adenocarcinoma occurring before diagnosis include:

  • Pain in the upper abdomen or back, often spreading from around the stomach to the back. The location of the pain can indicate the part of the pancreas where a tumor is located. The pain may be worse at night and may increase over time to become severe and unremitting. [22] It may be slightly relieved by bending forward. In the UK, about half of new cases of pancreatic cancer are diagnosed following a visit to a hospital emergency department for pain or jaundice. In up to two-thirds of people, abdominal pain is the main symptom, for 46% of the total accompanied by jaundice, with 13% having jaundice without pain. [12]
  • Jaundice, a yellow tint to the whites of the eyes or skin, with or without pain, and possibly in combination with darkened urine, results when a cancer in the head of the pancreas obstructs the common bile duct as it runs through the pancreas. [29]
  • Unexplained weight loss, either from loss of appetite, or loss of exocrine function resulting in poor digestion. [12]
  • The tumor may compress neighboring organs, disrupting digestive processes and making it difficult for the stomach to empty, which may cause nausea and a feeling of fullness. The undigested fat leads to foul-smelling, fatty feces that are difficult to flush away. [12] Constipation is also common. [30]
  • At least 50% of people with pancreatic adenocarcinoma have diabetes at the time of diagnosis. [2] While long-standing diabetes is a known risk factor for pancreatic cancer (see Risk factors), the cancer can itself cause diabetes, in which case recent onset of diabetes could be considered an early sign of the disease. [31] People over 50 who develop diabetes have eight times the usual risk of developing pancreatic adenocarcinoma within three years, after which the relative risk declines. [12]

Other findings

  • Trousseau syndrome—in which blood clots form spontaneously in the portal blood vessels (portal vein thrombosis), the deep veins of the extremities (deep vein thrombosis), or the superficial veins (superficial vein thrombosis) anywhere on the body—may be associated with pancreatic cancer, and is found in about 10% of cases. [3]
  • Clinical depression has been reported in association with pancreatic cancer in some 10–20% of cases, and can be a hindrance to optimal management. The depression sometimes appears before the diagnosis of cancer, suggesting that it may be brought on by the biology of the disease. [3]

Other common manifestations of the disease include weakness and tiring easily, dry mouth, sleep problems, and a palpable abdominal mass. [30]

Symptoms of spread

The spread of pancreatic cancer to other organs (metastasis) may also cause symptoms. Typically, pancreatic adenocarcinoma first spreads to nearby lymph nodes, and later to the liver or to the peritoneal cavity, large intestine, or lungs. [3] Uncommonly, it spreads to the bones or brain. [32]

Cancers in the pancreas may also be secondary cancers that have spread from other parts of the body. This is uncommon, found in only about 2% of cases of pancreatic cancer. Kidney cancer is by far the most common cancer to spread to the pancreas, followed by colorectal cancer, and then cancers of the skin, breast, and lung. Surgery may be performed on the pancreas in such cases, whether in hope of a cure or to alleviate symptoms. [33]

Risk factors for pancreatic adenocarcinoma include: [2] [10] [12] [34] [35]

  • Age, sex, and ethnicity – the risk of developing pancreatic cancer increases with age. Most cases occur after age 65, [10] while cases before age 40 are uncommon. The disease is slightly more common in men than in women. [10] In the United States, it is over 1.5 times more common in African Americans, though incidence in Africa is low. [10]
  • Cigarette smoking is the best-established avoidable risk factor for pancreatic cancer, approximately doubling risk among long-term smokers, the risk increasing with the number of cigarettes smoked and the years of smoking. The risk declines slowly after smoking cessation, taking some 20 years to return to almost that of nonsmokers. [36]
  • Obesity – a body mass index greater than 35 increases relative risk by about half. [12][37]
  • Family history – 5–10% of pancreatic cancer cases have an inherited component, where people have a family history of pancreatic cancer. [2][38] The risk escalates greatly if more than one first-degree relative had the disease, and more modestly if they developed it before the age of 50. [4] Most of the genes involved have not been identified. [2][39] Hereditary pancreatitis gives a greatly increased lifetime risk of pancreatic cancer of 30–40% to the age of 70. [3] Screening for early pancreatic cancer may be offered to individuals with hereditary pancreatitis on a research basis. [40] Some people may choose to have their pancreas surgically removed to prevent cancer from developing in the future. [3]
  • Chronic pancreatitis appears to almost triple risk, and as with diabetes, new-onset pancreatitis may be a symptom of a tumor. [3] The risk of pancreatic cancer in individuals with familial pancreatitis is particularly high. [3][39]
  • Diabetes mellitus is a risk factor for pancreatic cancer and (as noted in the Signs and symptoms section) new-onset diabetes may also be an early sign of the disease. People who have been diagnosed with type 2 diabetes for longer than 10 years may have a 50% increased risk, as compared with individuals without diabetes. [3]
  • Specific types of food (as distinct from obesity) have not been clearly shown to increase the risk of pancreatic cancer. [2][41] Dietary factors for which some evidence shows slightly increased risk include processed meat, red meat, and meat cooked at very high temperatures (e.g. by frying, broiling, or grilling). [41][42]

Alcohol

Drinking alcohol excessively is a major cause of chronic pancreatitis, which in turn predisposes to pancreatic cancer, but considerable research has failed to firmly establish alcohol consumption as a direct risk factor for pancreatic cancer. Overall, the association is consistently weak and the majority of studies have found no association, with smoking a strong confounding factor. The evidence is stronger for a link with heavy drinking, of at least six drinks per day. [3] [43]

Precancer

Exocrine cancers are thought to arise from several types of precancerous lesions within the pancreas, but these lesions do not always progress to cancer, and the increased numbers detected as a byproduct of the increasing use of CT scans for other reasons are not all treated. [3] Apart from pancreatic serous cystadenomas, which are almost always benign, four types of precancerous lesion are recognized.

The first is pancreatic intraepithelial neoplasia. These lesions are microscopic abnormalities in the pancreas and are often found in autopsies of people with no diagnosed cancer. These lesions may progress from low to high grade and then to a tumor. More than 90% of cases at all grades carry a faulty KRAS gene, while in grades 2 and 3, damage to three further genes – CDKN2A (p16), p53, and SMAD4 – is increasingly often found. [2]

A second type is the intraductal papillary mucinous neoplasm (IPMN). These are macroscopic lesions, which are found in about 2% of all adults. This rate rises to about 10% by age 70. These lesions have about a 25% risk of developing into invasive cancer. They may carry mutations in the KRAS gene (40–65% of cases) and in GNAS (the Gs alpha subunit) and RNF43, affecting the Wnt signaling pathway. [2] Even if removed surgically, a considerably increased risk of subsequently developing pancreatic cancer remains. [3]

The third type, pancreatic mucinous cystic neoplasm (MCN), mainly occurs in women, and may remain benign or progress to cancer. [44] If these lesions become large, cause symptoms, or have suspicious features, they can usually be successfully removed by surgery. [3]

A fourth type of cancer that arises in the pancreas is the intraductal tubulopapillary neoplasm. This type was recognized by the WHO in 2010 and constitutes about 1–3% of all pancreatic neoplasms. Mean age at diagnosis is 61 years (range 35–78 years). About 50% of these lesions become invasive. Diagnosis depends on histology, as these lesions are very difficult to differentiate from other lesions on either clinical or radiological grounds. [45]

Invasive cancer

The genetic events found in ductal adenocarcinoma have been well characterized, and complete exome sequencing has been done for the common types of tumor. Four genes have each been found to be mutated in the majority of adenocarcinomas: KRAS (in 95% of cases), CDKN2A (also in 95%), TP53 (75%), and SMAD4 (55%). The last of these is especially associated with a poor prognosis. [3] SWI/SNF mutations/deletions occur in about 10–15% of the adenocarcinomas. [2] The genetic alterations in several other types of pancreatic cancer and precancerous lesions have also been researched. [3] Transcriptomics analyses and mRNA sequencing for the common forms of pancreatic cancer have found that 75% of human genes are expressed in the tumors, with some 200 genes more specifically expressed in pancreatic cancer as compared to other tumor types. [46] [47]

PanNETs

The genes often found mutated in PanNETs are different from those in exocrine pancreatic cancer. [48] For example, KRAS mutation is normally absent. Instead, hereditary MEN1 gene mutations give rise to MEN1 syndrome, in which primary tumors occur in two or more endocrine glands. About 40–70% of people born with a MEN1 mutation eventually develop a PanNET. [49] Other genes that are frequently mutated include DAXX, mTOR, and ATRX. [26]

The symptoms of pancreatic adenocarcinoma do not usually appear in the disease's early stages, and they are not individually distinctive to the disease. [3] [12] [29] The symptoms at diagnosis vary according to the location of the cancer in the pancreas, which anatomists divide (from left to right on most diagrams) into the thick head, the neck, and the tapering body, ending in the tail.

Regardless of a tumor's location, the most common symptom is unexplained weight loss, which may be considerable. A large minority (between 35% and 47%) of people diagnosed with the disease will have had nausea, vomiting, or a feeling of weakness. Tumors in the head of the pancreas typically also cause jaundice, pain, loss of appetite, dark urine, and light-colored stools. Tumors in the body and tail typically also cause pain. [29]

People sometimes have recent onset of atypical type 2 diabetes that is difficult to control, a history of recent but unexplained blood vessel inflammation caused by blood clots (thrombophlebitis) known as Trousseau sign, or a previous attack of pancreatitis. [29] A doctor may suspect pancreatic cancer when the onset of diabetes in someone over 50 years old is accompanied by typical symptoms such as unexplained weight loss, persistent abdominal or back pain, indigestion, vomiting, or fatty feces. [12] Jaundice accompanied by a painlessly swollen gallbladder (known as Courvoisier's sign) may also raise suspicion, and can help differentiate pancreatic cancer from gallstones. [50]

Medical imaging techniques, such as computed tomography (CT) and endoscopic ultrasound (EUS), are used both to confirm the diagnosis and to help decide whether the tumor can be surgically removed (its "resectability"). [12] On contrast CT scan, pancreatic cancer typically shows a gradually increasing radiocontrast uptake, rather than a fast washout as seen in a normal pancreas or a delayed washout as seen in chronic pancreatitis. [51] Magnetic resonance imaging and positron emission tomography may also be used, [2] and magnetic resonance cholangiopancreatography may be useful in some cases. [29] Abdominal ultrasound is less sensitive and will miss small tumors, but can identify cancers that have spread to the liver and build-up of fluid in the peritoneal cavity (ascites). [12] It may be used for a quick and cheap first examination before other techniques. [52]

A biopsy by fine needle aspiration, often guided by endoscopic ultrasound, may be used where there is uncertainty over the diagnosis, but a histologic diagnosis is not usually required for removal of the tumor by surgery to go ahead. [12]

Liver function tests can show a combination of results indicative of bile duct obstruction (raised conjugated bilirubin, γ-glutamyl transpeptidase and alkaline phosphatase levels). CA19-9 (carbohydrate antigen 19-9) is a tumor marker that is frequently elevated in pancreatic cancer. However, it lacks sensitivity and specificity, not least because 5% of people lack the Lewis (a) antigen and cannot produce CA19-9. It has a sensitivity of 80% and specificity of 73% in detecting pancreatic adenocarcinoma, and is used for following known cases rather than diagnosis. [2] [12]
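To see why a marker with these characteristics suits monitoring known cases rather than population screening, one can work through Bayes' rule. The sketch below is a minimal illustration: the 80% sensitivity and 73% specificity are the figures quoted above, while the prevalence is a hypothetical round number, not a value from this article.

```python
# Positive predictive value (PPV) of a test via Bayes' rule.
# Sensitivity (0.80) and specificity (0.73) are the CA19-9 figures
# quoted above; the prevalence is a hypothetical illustrative value.
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Screening a population where 1 person in 10,000 has the disease:
print(f"{ppv(0.80, 0.73, 0.0001):.4%}")  # ~0.0296%: nearly all positives are false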

Histopathology

The most common form of pancreatic cancer (adenocarcinoma) is typically characterized by moderately to poorly differentiated glandular structures on microscopic examination. There is typically considerable desmoplasia or formation of a dense fibrous stroma or structural tissue consisting of a range of cell types (including myofibroblasts, macrophages, lymphocytes and mast cells) and deposited material (such as type I collagen and hyaluronic acid). This creates a tumor microenvironment that is short of blood vessels (hypovascular) and so of oxygen (tumor hypoxia). [2] It is thought that this prevents many chemotherapy drugs from reaching the tumor, as one factor making the cancer especially hard to treat. [2] [3]

Staging

Exocrine cancers

Pancreatic cancer is usually staged following a CT scan. [29] The most widely used cancer staging system for pancreatic cancer is the one formulated by the American Joint Committee on Cancer (AJCC) together with the Union for International Cancer Control (UICC). The AJCC-UICC staging system designates four main overall stages, ranging from early to advanced disease, based on TNM classification of Tumor size, spread to lymph Nodes, and Metastasis. [56]

To help decide treatment, the tumors are also divided into three broader categories based on whether surgical removal seems possible: in this way, tumors are judged to be "resectable", "borderline resectable", or "unresectable". [57] When the disease is still in an early stage (AJCC-UICC stages I and II), without spread to large blood vessels or distant organs such as the liver or lungs, surgical resection of the tumor can normally be performed, if the patient is willing to undergo this major operation and is thought to be sufficiently fit. [12]

The AJCC-UICC staging system allows distinction between stage III tumors that are judged to be "borderline resectable" (where surgery is technically feasible because the celiac axis and superior mesenteric artery are still free) and those that are "unresectable" (due to more locally advanced disease). In terms of the more detailed TNM classification, these two groups correspond to T3 and T4 respectively. [3]
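As a compact restatement of the categories above, the sketch below encodes only the correspondences given in this section (stages I and II: resectable; stage III: T3 borderline resectable, T4 unresectable; stage IV: metastatic). The function and its simple rule are illustrative assumptions, not a clinical algorithm; real assessment also weighs vessel involvement and patient fitness.

```python
# Illustrative only: the broad resectability categories described in the
# text, keyed by AJCC-UICC stage and, for stage III, the TNM T category.
def resectability(stage: str, t_category: str | None = None) -> str:
    if stage in ("I", "II"):       # early disease without major vessel spread
        return "resectable"
    if stage == "III":             # locally advanced: T3 vs. T4 distinction
        return "borderline resectable" if t_category == "T3" else "unresectable"
    return "unresectable"          # stage IV: distant metastasis

print(resectability("III", "T3"))  # -> borderline resectable
```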

[Staging diagrams (images not reproduced): stage T1, T2, T3, and T4 pancreatic cancer; pancreatic cancer in nearby lymph nodes (stage N1)]

Locally advanced adenocarcinomas have spread into neighboring organs, which may be any of the following (in roughly decreasing order of frequency): the duodenum, stomach, transverse colon, spleen, adrenal gland, or kidney. Very often they also spread to the important blood or lymphatic vessels and nerves that run close to the pancreas, making surgery far more difficult. Typical sites for metastatic spread (stage IV disease) are the liver, peritoneal cavity and lungs, all of which occur in 50% or more of fully advanced cases. [58]

PanNETs

The 2010 WHO classification of tumors of the digestive system grades all the pancreatic neuroendocrine tumors (PanNETs) into three categories, based on their degree of cellular differentiation (from "NET G1" through to the poorly differentiated "NET G3"). [19] The U.S. National Comprehensive Cancer Network recommends use of the same AJCC-UICC staging system as pancreatic adenocarcinoma. [59]:52 Using this scheme, the stage-by-stage outcomes for PanNETs are dissimilar to those of the exocrine cancers. [60] A different TNM system for PanNETs has been proposed by the European Neuroendocrine Tumor Society. [19]

Prevention

Apart from not smoking, the American Cancer Society recommends keeping a healthy weight, and increasing consumption of fruits, vegetables, and whole grains, while decreasing consumption of red and processed meat, although there is no consistent evidence this will prevent or reduce pancreatic cancer specifically. [61] A 2014 review of research concluded that there was evidence that consumption of citrus fruits and curcumin reduced risk of pancreatic cancer, while there was possibly a beneficial effect from whole grains, folate, selenium, and non-fried fish. [43]

Screening

As of 2019, screening of large groups in the general population is not considered effective and may be harmful, [62] although newer techniques, and the screening of tightly targeted groups, are being evaluated. [63] [64] Nevertheless, regular screening with endoscopic ultrasound and MRI/CT imaging is recommended for those at high risk from inherited genetics. [4] [52] [64] [65]

Treatment

Exocrine cancer

A key assessment that is made after diagnosis is whether surgical removal of the tumor is possible (see Staging), as this is the only cure for this cancer. Whether or not surgical resection can be offered depends on how much the cancer has spread. The exact location of the tumor is also a significant factor, and CT can show how it relates to the major blood vessels passing close to the pancreas. The general health of the person must also be assessed, though age in itself is not an obstacle to surgery. [3]

Chemotherapy and, to a lesser extent, radiotherapy are likely to be offered to most people, whether or not surgery is possible. Specialists advise that the management of pancreatic cancer should be in the hands of a multidisciplinary team including specialists in several aspects of oncology, and is, therefore, best conducted in larger centers. [2] [3]

Surgery

Surgery with the intention of a cure is only possible in around one-fifth (20%) of new cases. [12] Although CT scans help, in practice it can be difficult to determine whether the tumor can be fully removed (its "resectability"), and it may only become apparent during surgery that it is not possible to successfully remove the tumor without damaging other vital tissues. Whether or not surgical resection can be offered depends on various factors, including the precise extent of local anatomical adjacency to, or involvement of, the venous or arterial blood vessels, [2] as well as surgical expertise and a careful consideration of projected post-operative recovery. [66] [67] The age of the person is not in itself a reason not to operate, but their general performance status needs to be adequate for a major operation. [12]

One particular feature that is evaluated is the encouraging presence, or discouraging absence, of a clear layer or plane of fat creating a barrier between the tumor and the vessels. [3] Traditionally, an assessment is made of the tumor's proximity to major venous or arterial vessels, in terms of "abutment" (defined as the tumor touching no more than half a blood vessel's circumference without any fat to separate it), "encasement" (when the tumor encloses most of the vessel's circumference), or full vessel involvement. [68]:22 A resection that includes encased sections of blood vessels may be possible in some cases, [69] [70] particularly if neoadjuvant therapy is feasible, [71] [72] [73] using chemotherapy [67] [68]:36 and/or radiotherapy. [68]:29–30

Even when the operation appears to have been successful, cancerous cells are often found around the edges ("margins") of the removed tissue, when a pathologist examines them microscopically (this will always be done), indicating the cancer has not been entirely removed. [2] Furthermore, cancer stem cells are usually not evident microscopically, and if they are present they may continue to develop and spread. [75] [76] An exploratory laparoscopy (a small, camera-guided surgical procedure) may therefore be performed to gain a clearer idea of the outcome of a full operation. [77]

For cancers involving the head of the pancreas, the Whipple procedure is the most commonly attempted curative surgical treatment. This is a major operation which involves removing the pancreatic head and the curve of the duodenum together ("pancreato-duodenectomy"), making a bypass for food from the stomach to the jejunum ("gastro-jejunostomy") and attaching a loop of jejunum to the cystic duct to drain bile ("cholecysto-jejunostomy"). It can be performed only if the person is likely to survive major surgery and if the cancer is localized without invading local structures or metastasizing. It can, therefore, be performed only in a minority of cases. Cancers of the tail of the pancreas can be resected using a procedure known as a distal pancreatectomy, which often also entails removal of the spleen. [2] [3] Nowadays, this can often be done using minimally invasive surgery. [2] [3]

Although curative surgery no longer entails the very high death rates that occurred until the 1980s, a high proportion of people (about 30–45%) still have to be treated for a post-operative sickness that is not caused by the cancer itself. The most common complication of surgery is difficulty in emptying the stomach. [3] Certain more limited surgical procedures may also be used to ease symptoms (see Palliative care): for instance, if the cancer is invading or compressing the duodenum or colon. In such cases, bypass surgery might overcome the obstruction and improve quality of life but is not intended as a cure. [12]

Chemotherapy

After surgery, adjuvant chemotherapy with gemcitabine or 5-FU can be offered if the person is sufficiently fit, after a recovery period of one to two months. [4] [52] In people not suitable for curative surgery, chemotherapy may be used to extend life or improve its quality. [3] Before surgery, neoadjuvant chemotherapy or chemoradiotherapy may be used in cases that are considered to be "borderline resectable" (see Staging) in order to reduce the cancer to a level where surgery could be beneficial. In other cases neoadjuvant therapy remains controversial, because it delays surgery. [3] [4] [78]

Gemcitabine was approved by the United States Food and Drug Administration (FDA) in 1997, after a clinical trial reported improvements in quality of life and a 5-week improvement in median survival duration in people with advanced pancreatic cancer. [79] This was the first chemotherapy drug approved by the FDA primarily for a nonsurvival clinical trial endpoint. [80] Chemotherapy using gemcitabine alone was the standard for about a decade, as a number of trials testing it in combination with other drugs failed to demonstrate significantly better outcomes. However, the combination of gemcitabine with erlotinib was found to increase survival modestly, and erlotinib was licensed by the FDA for use in pancreatic cancer in 2005. [81]

The FOLFIRINOX chemotherapy regimen using four drugs was found more effective than gemcitabine, but with substantial side effects, and is thus only suitable for people with good performance status. This is also true of protein-bound paclitaxel (nab-paclitaxel), which was licensed by the FDA in 2013 for use with gemcitabine in pancreatic cancer. [82] By the end of 2013, both FOLFIRINOX and nab-paclitaxel with gemcitabine were regarded as good choices for those able to tolerate the side effects, and gemcitabine remained an effective option for those who were not. A head-to-head trial between the two new options is awaited, and trials investigating other variations continue. However, the changes of the last few years have only increased survival times by a few months. [79] Clinical trials are often conducted for novel adjuvant therapies. [4]

Radiotherapy

The role of radiotherapy as an auxiliary (adjuvant) treatment after potentially curative surgery has been controversial since the 1980s. [3] The European Society for Medical Oncology recommends that adjuvant radiotherapy should only be used for people enrolled in clinical trials. [52] However, there is a continuing tendency for clinicians in the US to be more ready to use adjuvant radiotherapy than those in Europe. Many clinical trials have tested a variety of treatment combinations since the 1980s, but have failed to settle the matter conclusively. [3] [4]

Radiotherapy may form part of treatment to attempt to shrink a tumor to a resectable state, but its use on unresectable tumors remains controversial as there are conflicting results from clinical trials. The preliminary results of one trial, presented in 2013, "markedly reduced enthusiasm" for its use on locally advanced tumors. [2]

PanNETs

Treatment of PanNETs, including the less common malignant types, may include a number of approaches. [59] [83] [84] [85] Some small tumors of less than 1 cm that are identified incidentally, for example on a CT scan performed for other purposes, may be managed with watchful waiting. [59] This depends on the assessed risk of surgery, which is influenced by the site of the tumor and the presence of other medical problems. [59] Tumors within the pancreas only (localized tumors), or with limited metastases, for example to the liver, may be removed by surgery. The type of surgery depends on the tumor location, and the degree of spread to lymph nodes. [19]

For localized tumors, the surgical procedure may be much less extensive than the types of surgery used to treat pancreatic adenocarcinoma described above, but otherwise surgical procedures are similar to those for exocrine tumors. The range of possible outcomes varies greatly: some types have a very high survival rate after surgery, while others have a poor outlook. As all tumors in this group are rare, guidelines emphasize that treatment should be undertaken in a specialized center. [19] [26] Use of liver transplantation may be considered in certain cases of liver metastasis. [86]

For functioning tumors, the somatostatin analog class of medications, such as octreotide, can reduce the excessive production of hormones. [19] Lanreotide can slow tumor growth. [87] If the tumor is not amenable to surgical removal and is causing symptoms, targeted therapy with everolimus or sunitinib can reduce symptoms and slow progression of the disease. [26] [88] [89] Standard cytotoxic chemotherapy is generally not very effective for PanNETs, but may be used when other drug treatments fail to prevent the disease from progressing, [26] or in poorly differentiated PanNET cancers. [90]

Radiation therapy is occasionally used if there is pain due to anatomic extension, such as metastasis to bone. Some PanNETs absorb specific peptides or hormones, and these PanNETs may respond to nuclear medicine therapy with radiolabeled peptides or hormones such as iobenguane (iodine-131-MIBG). [91] [92] [93] [94] Radiofrequency ablation (RFA), cryoablation, and hepatic artery embolization may also be used. [95] [96]

Palliative care

Palliative care is medical care which focuses on treatment of symptoms from serious illness, such as cancer, and improving quality of life. [97] Because pancreatic adenocarcinoma is usually diagnosed after it has progressed to an advanced stage, palliative care as a treatment of symptoms is often the only treatment possible. [98]

Palliative care focuses not on treating the underlying cancer, but on treating symptoms such as pain or nausea, and can assist in decision-making, including when or if hospice care will be beneficial. [99] Pain can be managed with medications such as opioids, or through procedural intervention with a celiac plexus block (CPB). This alters or, depending on the technique used, destroys the nerves that transmit pain from the abdomen. CPB is a safe and effective way to reduce the pain, and it generally reduces the need for opioid painkillers, which have significant negative side effects. [3] [100]

Other symptoms or complications that can be treated with palliative surgery are obstruction by the tumor of the intestines or bile ducts. For the latter, which occurs in well over half of cases, a small metal tube called a stent may be inserted by endoscope to keep the ducts draining. [29] Palliative care can also help treat depression that often comes with the diagnosis of pancreatic cancer. [3]

Both surgery and advanced inoperable tumors often lead to digestive system disorders from a lack of the exocrine products of the pancreas (exocrine insufficiency). These can be treated by taking pancreatin, which contains manufactured pancreatic enzymes and is best taken with food. [12] Difficulty in emptying the stomach (delayed gastric emptying) is common and can be a serious problem, involving hospitalization. Treatment may involve a variety of approaches, including draining the stomach by nasogastric aspiration and drugs called proton-pump inhibitors or H2 antagonists, which both reduce production of gastric acid. [12] Medications like metoclopramide can also be used to clear stomach contents.

Prognosis

Outcomes in pancreatic cancers according to clinical stage, U.S. five-year survival (%) for 1992–1998 diagnoses [57]

Clinical stage | Exocrine pancreatic cancer | Neuroendocrine, treated with surgery
IA / I         | 14                         | 61
IB             | 12                         | –
IIA / II       | 7                          | 52
IIB            | 5                          | –
III            | 3                          | 41
IV             | 1                          | 16

Pancreatic adenocarcinoma and the other less common exocrine cancers have a very poor prognosis, as they are normally diagnosed at a late stage when the cancer is already locally advanced or has spread to other parts of the body. [2] Outcomes are much better for PanNETs: many are benign and completely without clinical symptoms, and even those cases not treatable with surgery have an average five-year survival rate of 16%, [57] although the outlook varies considerably according to the type. [28]

For locally advanced and metastatic pancreatic adenocarcinomas, which together represent over 80% of cases, numerous trials comparing chemotherapy regimes have shown increased survival times, but not to more than one year. [2] [79] Overall five-year survival for pancreatic cancer in the US has improved from 2% in cases diagnosed in 1975–1977, and 4% in 1987–1989 diagnoses, to 6% in 2003–2009. [101] In the less than 20% of cases of pancreatic adenocarcinoma with a diagnosis of a localized and small cancerous growth (less than 2 cm in Stage T1), about 20% of Americans survive to five years. [17]

About 1500 genes are linked to outcomes in pancreatic adenocarcinoma. These include both unfavorable genes, where high expression is related to poor outcome, for example C-Met and MUC-1, and favorable genes where high expression is associated with better survival, for example the transcription factor PELP1. [46] [47]

Epidemiology

In 2015, pancreatic cancers of all types resulted in 411,600 deaths globally. [8] In 2014, an estimated 46,000 people in the US were expected to be diagnosed with pancreatic cancer and 40,000 to die of it. [2] Although it accounts for only 2.5% of new cases, pancreatic cancer is responsible for 6% of cancer deaths each year. [102] It is the seventh highest cause of death from cancer worldwide. [10] Pancreatic cancer is the fifth most common cause of death from cancer in the United Kingdom, [15] and the third most common in the United States. [16]

Globally, pancreatic cancer is the 11th most common cancer in women and the 12th most common in men. [10] The majority of recorded cases occur in developed countries. [10] People from the United States have an average lifetime risk of about 1 in 67 (or 1.5%) of developing the disease, [103] slightly higher than the figure for the UK. [104] The disease is more common in men than women, [2] [10] though the difference in rates has narrowed over recent decades, probably reflecting earlier increases in female smoking. In the United States the risk for African Americans is over 50% greater than for whites, but the rates in Africa and East Asia are much lower than those in North America or Europe. The United States, Central and Eastern Europe, and Argentina and Uruguay all have high rates. [10]

PanNETs

The annual incidence of clinically recognized PanNETs is low (about 5 per one million person-years) and is dominated by the non-functioning types. [23] Somewhere between 45% and 90% of PanNETs are thought to be of the non-functioning types. [19] [26] Studies of autopsies have uncovered small PanNETs rather frequently, suggesting that the prevalence of tumors that remain inert and asymptomatic may be relatively high. [26] Overall PanNETs are thought to account for about 1 to 2% of all pancreatic tumors. [23] The definition and classification of PanNETs has changed over time, affecting what is known about their epidemiology and clinical relevance. [48]
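To make the person-years unit concrete: a rate of about 5 per one million person-years implies roughly 5 newly recognized PanNETs per year for every million people followed for a year. A one-line check, where the population size is a hypothetical round number chosen for illustration:

```python
# Rough expected annual case count implied by the incidence rate above.
# The population size is a hypothetical round number for illustration.
rate_per_person_year = 5 / 1_000_000        # ~5 PanNETs per million person-years
population = 300_000_000                    # hypothetical population
print(population * rate_per_person_year)    # -> 1500.0 expected cases per year
```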

History

Recognition and diagnosis

The earliest recognition of pancreatic cancer has been attributed to the 18th-century Italian scientist Giovanni Battista Morgagni, the historical father of modern-day anatomic pathology, who claimed to have traced several cases of cancer in the pancreas. Many 18th and 19th-century physicians were skeptical about the existence of the disease, given the similar appearance of pancreatitis. Some case reports were published in the 1820s and 1830s, and a genuine histopathologic diagnosis was eventually recorded by the American clinician Jacob Mendes Da Costa, who also doubted the reliability of Morgagni's interpretations. By the start of the 20th century, cancer of the head of the pancreas had become a well-established diagnosis. [105]

Regarding the recognition of PanNETs, the possibility of cancer of the islet cells was initially suggested in 1888. The first case of hyperinsulinism due to a tumor of this type was reported in 1927. Recognition of a non-insulin-secreting type of PanNET is generally ascribed to the American surgeons R. M. Zollinger and E. H. Ellison, who gave their names to Zollinger–Ellison syndrome, after postulating the existence of a gastrin-secreting pancreatic tumor in a report of two cases of unusually severe peptic ulcers published in 1955. [105] In 2010, the WHO recommended that PanNETs be referred to as "neuroendocrine" rather than "endocrine" tumors. [25]

Small precancerous neoplasms for many pancreatic cancers are being detected at greatly increased rates by modern medical imaging. One type, the intraductal papillary mucinous neoplasm (IPMN), was first described by Japanese researchers in 1982. It was noted in 2010 that: "For the next decade, little attention was paid to this report; however, over the subsequent 15 years, there has been a virtual explosion in the recognition of this tumor." [58]

Surgery

The first reported partial pancreaticoduodenectomy was performed by the Italian surgeon Alessandro Codivilla in 1898, but the patient only survived 18 days before succumbing to complications. Early operations were compromised partly because of mistaken beliefs that people would die if their duodenum were removed, and also, at first, if the flow of pancreatic juices stopped. Later it was thought, also mistakenly, that the pancreatic duct could simply be tied up without serious adverse effects; in fact, it will very often leak later on. In 1907–1908, after some more unsuccessful operations by other surgeons, experimental procedures were tried on corpses by French surgeons. [106]

In 1912 the German surgeon Walther Kausch was the first to remove large parts of the duodenum and pancreas together (en bloc). This was in Breslau, now Wrocław in Poland. In 1918 it was demonstrated, in operations on dogs, that it is possible to survive even after complete removal of the duodenum, but no such result was reported in human surgery until 1935, when the American surgeon Allen Oldfather Whipple published the results of a series of three operations at Columbia Presbyterian Hospital in New York. Only one of the patients had the duodenum entirely removed, but he survived for two years before dying of metastasis to the liver.

The first operation was unplanned, as cancer was only discovered in the operating theater. Whipple's success showed the way for the future, but the operation remained a difficult and dangerous one until recent decades. He published several refinements to his procedure, including the first total removal of the duodenum in 1940, but he only performed a total of 37 operations. [106]

The discovery in the late 1930s that vitamin K prevented bleeding with jaundice, and the development of blood transfusion as an everyday process, both improved post-operative survival, [106] but about 25% of people never left hospital alive as late as the 1970s. [107] In the 1970s a group of American surgeons wrote urging that the procedure was too dangerous and should be abandoned. Since then outcomes in larger centers have improved considerably, and mortality from the operation is often less than 4%. [21]

In 2006 a report was published of a series of 1,000 consecutive pancreatico-duodenectomies performed by a single surgeon from Johns Hopkins Hospital between 1969 and 2003. The rate of these operations had increased steadily over this period, with only three of them before 1980; the median operating time fell from 8.8 hours in the 1970s to 5.5 hours in the 2000s, and mortality within 30 days or in hospital was only 1%. [106] [107] Another series of 2,050 operations at the Massachusetts General Hospital between 1941 and 2011 showed a similar picture of improvement. [108]

Research

Early-stage research on pancreatic cancer includes studies of genetics and early detection, treatment at different cancer stages, surgical strategies, and targeted therapies, such as inhibition of growth factors, immune therapies, and vaccines. [39] [109] [110] [111] [112]

A key question is the timing of events as the disease develops and progresses – particularly the role of diabetes, [109] [31] and how and when the disease spreads. [113] The knowledge that new onset of diabetes can be an early sign of the disease could facilitate timely diagnosis and prevention if a workable screening strategy can be developed. [109] [31] [114] The European Registry of Hereditary Pancreatitis and Familial Pancreatic Cancer (EUROPAC) trial is aiming to determine whether regular screening is appropriate for people with a family history of the disease. [115]

Keyhole surgery (laparoscopy) is being evaluated as an alternative to the Whipple procedure, particularly in terms of recovery time. [116] Irreversible electroporation is a relatively novel ablation technique with potential for downstaging and prolonging survival in persons with locally advanced disease, especially for tumors in proximity to peri-pancreatic vessels, without risk of vascular trauma. [117] [118]

Efforts are underway to develop new drugs, including those targeting molecular mechanisms for cancer onset, [119] [120] stem cells, [76] and cell proliferation. [120] [121] A further approach involves the use of immunotherapy, such as oncolytic viruses. [122] Galectin-specific mechanisms of the tumor microenvironment are under study. [123]


Primary growth in shoots

The information below was adapted from OpenStax Biology 30.2

Just as in roots, primary growth in stems is a result of rapidly dividing cells in the apical meristems at the shoot tip. Subsequent cell elongation then leads to primary growth.

In many plants, most primary growth occurs at the apical (top) bud, rather than at axillary buds (buds at locations of side branching). The influence of the apical bud on overall plant growth is known as apical dominance, which prevents the growth of axillary buds that form along the sides of branches and stems. Most coniferous trees exhibit strong apical dominance, thus producing the typical conical Christmas tree shape. If the apical bud is removed, then the axillary buds will start forming lateral branches. Gardeners make use of this fact when they prune plants by cutting off the tops of branches, thus encouraging the axillary buds to grow out, giving the plant a bushy shape.


Links to human health and environmental processes

The information below was adapted from OpenStax Biology 22.4

Some prokaryotic species can harm human health as pathogens: devastating pathogen-borne diseases and plagues, both viral and bacterial in nature, have affected humans since the beginning of human history, but at the time, their cause was not understood. Over time, people came to realize that staying apart from afflicted persons (and their belongings) tended to reduce one's chances of getting sick. For a pathogen to cause disease, it must be able to reproduce in the host's body and damage the host in some way, and to spread, it must pass to a new host. In the 21st century, infectious diseases remain among the leading causes of death worldwide, despite advances made in medical research and treatments in recent decades.

The information below was adapted from OpenStax Biology 22.5

Not all prokaryotes are pathogenic; pathogens represent only a very small percentage of the diversity of the microbial world. In fact, our life would not be possible without prokaryotes. Some prokaryotic species are directly beneficial to human health:

  • The bacteria that inhabit our skin and gastrointestinal tract do a host of good things for us. They protect us from pathogens, help us digest our food, and produce some of our vitamins and other nutrients. More recently, scientists have gathered evidence that these bacteria may also help regulate our moods, influence our activity levels, and even help control weight by affecting our food choices and absorption patterns. The Human Microbiome Project has begun the process of cataloging our normal bacteria (and archaea) so we can better understand these functions. Scientists are also discovering that the absence of certain key microbes from our intestinal tract may set us up for a variety of problems. This seems to be particularly true regarding the appropriate functioning of the immune system. There are intriguing findings that suggest that the absence of these microbes is an important contributor to the development of allergies and some autoimmune disorders. Research is currently underway to test whether adding certain microbes to our internal ecosystem may help in the treatment of these problems as well as in treating some forms of autism.
  • A particularly fascinating example of our normal flora relates to our digestive systems. People who take high doses of antibiotics tend to lose many of their normal gut bacteria, allowing a naturally antibiotic-resistant species called Clostridium difficile to overgrow and cause severe gastric problems, especially chronic diarrhea. Obviously, trying to treat this problem with antibiotics only makes it worse. However, it has been successfully treated by giving the patients fecal transplants (so-called “poop pills”) from healthy donors to reestablish the normal intestinal microbial community. Clinical trials are underway to ensure the safety and effectiveness of this technique.

Other prokaryotes indirectly, but dramatically, impact human health through their roles in environmental processes.

