🎉Q&A Life🥳
What is the significance of the Italian horn?
It is a good-luck amulet worn for protection against the evil eye.
In Italy many people wear the cornicello, an amulet of good luck used for protection against the evil eye curse. It consists of a twisted horn-shaped charm, often made of gold, silver, or plastic.
Possibly related to the corno is the mano cornuta or "horned hand." This is an Italian hand gesture (or an amulet imitative of the gesture) that can be used to indicate that a man "wears the horns" and also to ward off the evil eye. Mano means "hand" and corno means "horn."[1]
How long did it take pioneers on the Oregon Trail?
About 160 days in 1849, falling to about 140 days ten years later.
What protects the North American desert tortoise from heat?
Their ability to dig underground burrows.
The desert tortoises (Gopherus agassizii and Gopherus morafkai) are two species of tortoise native to the Mojave and Sonoran Deserts of the southwestern United States and northwestern Mexico and the Sinaloan thornscrub of northwestern Mexico.[3] G. agassizii is distributed in western Arizona, southeastern California, southern Nevada, and southwestern Utah.[3] The specific name agassizii is in honor of Swiss-American zoologist Jean Louis Rodolphe Agassiz.[4] Recently, on the basis of DNA, geographic, and behavioral differences between desert tortoises east and west of the Colorado River, it was decided that two species of desert tortoises exist: Agassiz's desert tortoise (Gopherus agassizii) and Morafka's desert tortoise (Gopherus morafkai).[5] G. morafkai occurs east of the Colorado River in Arizona, as well as in the states of Sonora and Sinaloa, Mexico. This species may be a composite of two species.
The new species name is in honor of the late Professor David Joseph Morafka of California State University, Dominguez Hills, in recognition of his many contributions to the study and conservation of Gopherus.
The desert tortoises live about 50 to 80 years;[6] they grow slowly and generally have low reproductive rates. They spend most of their time in burrows, rock shelters, and pallets to regulate body temperature and reduce water loss. They are most active after seasonal rains and are inactive during most of the year. This inactivity helps reduce water loss during hot periods, whereas winter hibernation facilitates survival during freezing temperatures and low food availability. Desert tortoises can tolerate water, salt, and energy imbalances on a daily basis, which increases their lifespans.[7]
These tortoises may attain a length of 10 to 14 in (25 to 36 cm),[8] with males being slightly larger than females. A male tortoise has a longer gular horn than a female, and his plastron (lower shell) is concave compared to a female's. Males have larger tails than females do. Their shells are high-domed, and greenish-tan to dark brown in color. Desert tortoises can grow to 4–6 in (10–15 cm) in height. They can range in weight from 0.02 to 5 kg (0.044 to 11.023 lb).[9] The front limbs have sharp, claw-like scales and are flattened for digging. Back legs are skinnier and very long.
Desert tortoises can live in areas with ground temperatures exceeding 140 °F (60 °C) because of their ability to dig underground burrows and escape the heat. At least 95% of their lives are spent in burrows. There, they are also protected from freezing winter weather while dormant, from November through February or March. Within their burrows, these tortoises create a subterranean environment that can be beneficial to other reptiles, mammals, birds, and invertebrates.
Scientists have divided the desert tortoise into two types: Agassiz's and Morafka's desert tortoises, with a possible third type in northern Sinaloa and southern Sonora, Mexico. An isolated population of Agassiz's desert tortoise occurs in the Black Mountains of northwestern Arizona.[10] They live in a different type of habitat, from sandy flats to rocky foothills. They have a strong proclivity in the Mojave Desert for alluvial fans, washes, and canyons where more suitable soils for den construction might be found. They range from near sea level to around 3,500 feet (1,100 m) in elevation. Tortoises show very strong site fidelity, and have well-established home ranges where they know where their food, water, and mineral resources are.
Desert tortoises inhabit elevations from below mean sea level in Death Valley to 5,300 feet (1,600 m) in Arizona, though they are most common from around 1,000 to 3,500 feet (300 to 1,070 m). Estimates of densities vary from less than eight individuals/km² on sites in southern California to over 500 individuals/km² in the western Mojave Desert, although most estimates are less than 150 individuals/km². The home range generally consists of 10 to 100 acres (4.0 to 40.5 ha). In general, males have larger home ranges than females, and home range size increases with increasing resources and rainfall.[7]
Desert tortoises are sensitive to the soil type, owing to their reliance on burrows for shelter, reduction of water loss, and regulation of body temperature. The soil should crumble easily during digging and be firm enough to resist collapse. Desert tortoises prefer sandy loam soils with varying amounts of gravel and clay, and tend to avoid sands or soils with low water-holding capacity, excess salts, or low resistance to flooding. They may consume soil to maintain adequate calcium levels, so may prefer sites with higher calcium content.[7]
Desert tortoises spend most of their lives in burrows, rock shelters, and pallets to regulate body temperature and reduce water loss. Burrows are tunnels dug into soil by desert tortoises or other animals, rock shelters are spaces protected by rocks and/or boulders, and pallets are depressions in the soil. The use of the various shelter types is related to their availability and climate. The number of burrows used, the extent of repetitive use, and the occurrence of burrow sharing are variable. Males tend to occupy deeper burrows than females. Seasonal trends in burrow use are influenced by desert tortoise gender and regional variation. Desert tortoise shelter sites are often associated with plant or rock cover. Desert tortoises often lay their eggs in nests dug in sufficiently deep soil at the entrance of burrows or under shrubs. Nests are typically 3 to 10 inches (7.6 to 25.4 centimetres) deep.[7]
Shelters are important for controlling body temperature and water regulation, as they allow desert tortoises to slow their rate of heating in summer and provide protection from cold during the winter. The humidity within burrows prevents dehydration. Burrows also provide protection from predators. The availability of adequate burrow sites influences desert tortoise densities.[7]
The number of burrows used by desert tortoises varies spatially and temporally, from about 5 to 25 per year. Some burrows are used repeatedly, sometimes for several consecutive years. Desert tortoises share burrows with various mammals, reptiles, birds, and invertebrates, such as white-tailed antelope squirrels (Ammospermophilus leucurus), woodrats (Neotoma), collared peccaries (Pecari tajacu), burrowing owls (Athene cunicularia), Gambel's quail (Callipepla gambelii), rattlesnakes (Crotalus spp.), Gila monsters (Heloderma suspectum), beetles, spiders, and scorpions. One burrow can host up to 23 desert tortoises; such sharing is more common for desert tortoises of opposite sexes than for desert tortoises of the same sex.[7]
Tortoises mate in the spring and autumn. Male desert tortoises grow two large white glands around the chin area, called chin glands, that signify mating season. A male circles around a female, biting her shell in the process. He then climbs upon the female and inserts his penis (a white organ, usually only seen upon careful inspection during mating, as it is otherwise hidden inside the male) into the vagina of the female, which is located around the tail. The male may make grunting noises once atop a female, and may move his front legs up and down in a constant motion, as if playing a drum.[11]
Months later, the female lays a clutch of four to eight hard-shelled eggs,[12] which have the size and shape of ping-pong balls, usually in June or July. The eggs hatch in August or September. Wild female tortoises produce up to three clutches a year depending on the climate. Their eggs incubate from 90 to 135 days;[3] some eggs may overwinter and hatch the following spring. In a laboratory experiment, temperature influenced hatching rates and hatchling gender. Incubation temperatures from 81 to 88 °F (27 to 31 °C) resulted in hatching rates exceeding 83%, while incubation at 77 °F (25 °C) resulted in a 53% hatching rate. Incubation temperatures less than 88 °F (31 °C) resulted in all-male clutches. Average incubation time decreased from 124.7 days at 77 °F (25 °C) to 78.2 days at 88 °F (31 °C).[13]
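As a rough illustration only, the laboratory figures quoted above can be summarized in a small sketch. The linear interpolation between the two reported incubation times, the exact temperature cutoffs, and the behavior at exactly 88 °F are simplifying assumptions for this example, not findings of the cited study.

```python
# Illustrative summary of the laboratory incubation figures quoted above.
# The interpolation and thresholds are assumptions for this sketch only.

def incubation_summary(temp_f: float) -> dict:
    """Rough expectations for a clutch incubated at temp_f (77-88 F)."""
    if not 77 <= temp_f <= 88:
        raise ValueError("reported observations cover only 77-88 F")
    # Hatching rate: 53% at 77 F, over 83% from 81 to 88 F.
    hatch_rate = 0.53 if temp_f < 81 else 0.83
    # Sex: clutches incubated below 88 F (31 C) were reported as all male.
    sex = "all male" if temp_f < 88 else "mixed (assumed)"
    # Incubation time: linear interpolation between 124.7 days at 77 F
    # and 78.2 days at 88 F (an assumed, purely illustrative model).
    days = 124.7 + (temp_f - 77) * (78.2 - 124.7) / (88 - 77)
    return {"hatch_rate": hatch_rate, "sex": sex, "days": round(days, 1)}

print(incubation_summary(82))
# -> {'hatch_rate': 0.83, 'sex': 'all male', 'days': 103.6}
```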
The desert tortoise grows slowly, often taking 16 years or longer to reach about 8 in (20 cm) in length. The growth rate varies with age, location, gender and precipitation. It can slow down from 12 mm/year for ages 4–8 years to about 6.0 mm/year for ages 16 to 20 years. Males and females grow at similar rates; females can grow slightly faster when young, but males grow larger than females.[7]
Desert tortoises generally reach reproductive maturity at age 15 to 20 years, when they are longer than 7 in (18 cm), though 10-year-old reproductive females have been observed.[7]
Their activity depends on location, peaking in late spring for the Mojave Desert and in late summer to fall in the Sonoran Desert; some populations exhibit two activity peaks during one year. Desert tortoises hibernate during winters, roughly from November to February–April. Females begin hibernating later and emerge earlier than males; juveniles emerge from hibernation earlier than adults.[7]
Temperature strongly influences desert tortoise activity level. Although desert tortoises can survive body temperatures from below freezing to over 104 °F (40 °C), most activity occurs at temperatures from 79 to 93 °F (26 to 34 °C). The influence of temperature is reflected in daily activity patterns, with desert tortoises often active late in the morning during spring and fall, early in the morning and late in the evening during the summer, and occasionally becoming active during relatively warm winter afternoons. Activity generally increases after rainfall.[7]
Although desert tortoises spend the majority of their time in shelter, movements of up to 660 feet (200 m) per day are common. The common, comparatively short-distance movements presumably represent foraging activity, traveling between burrows, and possibly mate-seeking or other social behaviors. Long-distance movements could potentially represent dispersal into new areas and/or use of peripheral portions of the home range.[7]
Desert tortoises can live well over 50 years, with estimates of lifespan varying from 50 to 80 years.[6] Causes of mortality include predation, disease, human-related factors, and environmental factors such as drought, flooding, and fire.[7]
The annual death rate of adults is typically a few percent, but is much higher for young desert tortoises. Only 2–5% of hatchlings are estimated to reach maturity. Estimates of survival from hatching to 1 year of age for Mojave Desert tortoises range from 47 to 51%. Survival of Mojave Desert tortoises from 1 to 4 years of age is 71–89%.[7]
The desert tortoise is an herbivore. Grasses form the bulk of its diet, but it also eats herbs, annual wildflowers, and new growth of cacti, as well as their fruit and flowers. Rocks and soil are also ingested, perhaps as a means of maintaining intestinal digestive bacteria or as a source of supplementary calcium or other minerals. As with birds, stones may also function as gastroliths, enabling more efficient digestion of plant material in the stomach.[7]
Much of the tortoise's water intake comes from moisture in the grasses and wildflowers they consume in the spring. A large urinary bladder can store over 40% of the tortoise's body weight in water, urea, uric acid, and nitrogenous wastes. During very dry times, they may give off waste as a white paste rather than a watery urine. During periods of adequate rainfall, they drink copiously from any pools they find, and eliminate solid urates. The tortoises can increase their body weight by up to 40% after copious drinking.[14] Adult tortoises can survive a year or more without access to water.[7] During the summer and dry seasons, they rely on the water contained within cactus fruits and mesquite grass. To maintain sufficient water, they reabsorb water in their bladders, and move to humid underground burrows in the morning to prevent water loss by evaporation.[14]
Emptying the bladder is one of the defense mechanisms of this tortoise. This can leave the tortoise in a very vulnerable condition in dry areas, and it should not be alarmed, handled, or picked up in the wild unless in imminent danger. If it must be handled, and its bladder is emptied, then water should be provided to restore the fluid in its body.
Ravens, Gila monsters, kit foxes, badgers, roadrunners, coyotes, and fire ants are all natural predators of the desert tortoise. They prey on eggs, juveniles (which are 2–3 inches long with a thin, delicate shell), or, in some cases, adults. Ravens are thought to cause significant levels of juvenile tortoise predation in some areas of the Mojave Desert, frequently near urbanized areas. The most significant threats to tortoises include urbanization, disease, habitat destruction and fragmentation, illegal collection and vandalism by humans, and habitat conversion from invasive plant species (Brassica tournefortii, Bromus rubens and Erodium spp.).
Desert tortoise populations in some areas have declined by as much as 90% since the 1980s, and the Mojave population is listed as threatened. It is unlawful to touch, harm, harass, or collect wild desert tortoises. It is, however, possible to adopt captive tortoises through the Tortoise Adoption Program in Arizona, the Utah Division of Wildlife Resources Desert Tortoise Adoption Program in Utah, the Joshua Tree Tortoise Rescue Project in California, or through the Bureau of Land Management in Nevada. When adopted in Nevada, they will have a computer chip embedded on their backs for reference. According to Arizona Game and Fish Commission Rule R12-4-407 A.1, they may be possessed if the tortoises are obtained from a captive source which is properly documented. Commission Order 43 (Reptile Notes 3) allows one tortoise per family member.
The Fort Irwin National Training Center of the US Army expanded into an area that was habitat for about 2,000 desert tortoises, and contained critical desert tortoise habitat (a designation by the US Fish and Wildlife Service). In March 2008, about 650 tortoises were moved by helicopter and vehicle, up to 35 km away.[15]
Another potential threat to the desert tortoise's habitat is a series of proposed wind and solar farms.[16] As a result of legislation, solar energy companies have been making plans for huge projects in the desert regions of Arizona, California, Colorado, New Mexico, Nevada, and Utah. The requests submitted to the Bureau of Land Management total nearly 1,800,000 acres (7,300 km²).[17]
In 2006, a proposal was made in California to build a landfill in Kern County, near the Desert Tortoise Natural Area, to dump trash for Los Angeles residents. A landfill would attract many of the tortoise's predators (ravens, rats, roadrunners, and coyotes), which would threaten the population.[18]
Concerns about the impacts of the Ivanpah Solar thermal project led the developers to hire some 100 biologists and spend US$22 million caring for the tortoises on or near the site during construction.[19][20] Despite this, in a 2011 Revised Biological Assessment for the Ivanpah Solar Electric Generating System, the Bureau of Land Management anticipated the loss or significant degradation of 3,520 acres of tortoise habitat and harm to 57–274 adult tortoises, 608 juveniles, and 236 eggs inside the work area, and to 203 adult tortoises and 1,541 juvenile tortoises outside the work area. The BLM expects that most of the juvenile tortoises on the project site will be killed.[21][22]
The Desert Tortoise Preserve Committee protects roughly 5,000 acres of desert tortoise habitat from human activity. This area includes 4,340 acres in Kern County, 710 acres in San Bernardino County, and 80 acres in Riverside County.[18]
In the summer of 2010, Public Employees for Environmental Responsibility filed a lawsuit against the National Park Service for not having taken measures to manage tortoise shooting in the Mojave National Preserve of California. Biologists discovered numerous gunshot wounds on dead tortoise shells. These shells left behind by vandals attracted ravens and threatened the healthy tortoises.[23]
Reptiles are known to become infected by a wide range of pathogens, which includes viruses, bacteria, fungi, and parasites. More specifically, the G. agassizii population has been negatively affected by upper respiratory tract disease, cutaneous dyskeratosis, herpes virus, shell necrosis, urolithiasis (bladder stones), and parasites.[24][25][26]
Upper respiratory tract disease (URTD) is a chronic, infectious disease responsible for population declines across the entire range of the desert tortoise. It was identified in the early 1970s in captive desert tortoise populations, and later identified in the wild population.[24] URTD is caused by the infectious agents Mycoplasma agassizii and Mycoplasma testudineum, bacteria in the class Mollicutes characterized by having no cell wall and a small genome.[27][28][29] Mycoplasmas appear to be highly virulent (infectious) in some populations, while chronic, or even dormant, in others.[30] The mechanism (whether environmental or genetic) responsible for this diversity is not understood. Infection is characterized by both physiological and behavioral changes: nasal and ocular discharge, palpebral edema (swelling of the upper and/or lower palpebra, or eyelid, the fleshy portion that is in contact with the tortoise's eye globe) and conjunctivitis, weight loss, changes in color and elasticity of the integument, and lethargic or erratic behavior.[24][31][32][33] These pathogens are likely transmitted by contact with an infected individual. Epidemiological studies of wild desert tortoises in the western Mojave Desert from 1992 to 1995 showed a 37% increase in exposure to M. agassizii.[29] Tests were conducted on blood samples, and a positive test was determined by the presence of antibodies in the blood, the animal being defined as seropositive.
Cutaneous dyskeratosis (CD) is a shell disease of unknown origin with unknown implications for desert tortoise populations. Observationally, it is typified by shell lesions on the scutes. Areas affected by CD appear discolored, dry, rough, and flaky, with peeling, pitting, and chipping through multiple cornified layers.[34] Lesions are usually first located on the plastron (underside) of the tortoises, although lesions on the carapace (upper side) and forelimbs are not uncommon. In advanced cases, exposed areas become infected with bacteria and fungi, and exposed tissue and bone may become necrotic.[32][34] CD was evident as early as 1979 and was initially identified on the Chuckwalla Bench Area of Critical Environmental Concern in Riverside County, California.[35] Currently, the means of transmission are unknown, although hypotheses include autoimmune disease, exposure to toxic chemicals (possibly from mines or air pollution), or a deficiency disease (possibly resulting from tortoises consuming low-quality invasive plant species instead of high-nutrient native plants).[25][30]
Two case studies outline the spread of disease in desert tortoises. The Daggett Epidemiology of Upper Respiratory Tract Disease project, which provides supporting disease research for the Fort Irwin translocation project, lends an example of the spread of disease. In 2008, 197 health evaluations were conducted, revealing 25.0% and 45.2% exposure to M. agassizii and M. testudineum, respectively, in a core area adjacent to Interstate 15. The spread of disease was tracked over two years, and clinical signs of URTD spread from the core area to adjacent, outlying locations during this time. The overlapping home ranges and social nature of these animals suggest that disease-free individuals may be vulnerable to the spread of disease, and that transmission can occur rapidly.[36] Thus, wild tortoises close to the urban-wildlife interface may be vulnerable to the spread of disease as a direct result of human influence.
The second study indicated that captive tortoises can be a source of disease for wild Agassiz's desert tortoise populations. Johnson et al. (2006) tested blood samples for URTD (n = 179) and herpesvirus (n = 109) from captive tortoises found near Barstow, CA and Hesperia, CA. Demographic and health data were collected from the tortoises, as well as from other reptiles housed in the same facility. Of these, 45.3% showed signs of mild disease, 16.2% of moderate disease, and 4.5% of severe disease, and blood tests revealed that 82.7% of tortoises had antibodies to mycoplasma and 26.6% had antibodies to herpesvirus (meaning the tortoises were seropositive for these two diseases, indicating previous exposure to the causative agents). With an estimated 200,000 captive desert tortoises in California, their escape or release into the wild is a real threat to uninfected wild populations of tortoises. Projections from this study suggest that about 4,400 tortoises could escape from captivity in a given year, and with an 82% exposure rate to URTD, the wild population may be at greater risk than previously thought.[37]
Edwards et al. reported that 35% of desert tortoises in the Phoenix area are hybrids between either Gopherus agassizii and G. morafkai, or G. morafkai and the Texas tortoise, G. berlandieri. The intentional or accidental release of these tortoises could have dire consequences for wild tortoises.[38]
Before obtaining a desert tortoise as a pet, it is best to check the laws and regulations of the local area and/or state. Desert tortoises may not be moved across state borders or captured from the wild. They may, however, be given as a gift from one private owner to another. Desert tortoises need to be kept outdoors in a large area of dry soil and with access to vegetation and water. An underground den and a balanced diet are crucial to the health of captive tortoises.
Wild populations of tortoises must be managed effectively to minimize the spread of diseases, which includes research and education. Despite significant research being conducted on desert tortoises and disease, a considerable knowledge gap still exists in understanding how disease affects desert tortoise population dynamics. It is not known whether the population would still decline if disease were completely absent from the system. Are tortoises more susceptible to disease during drought conditions? How does a non-native diet impact a tortoise's ability to ward off pathogens? What are the causes of immunity exhibited by some desert tortoises? The 2008 USFWS draft recovery plan suggests that populations of tortoises that are uninfected, or only recently infected, should likely be considered research and management priorities. Tortoises are known to show resistance to disease in some areas; an effort to identify and maintain these individuals in the populations is essential. Furthermore, increasing research on the social behavior of these animals, and garnering a greater understanding of how behavior facilitates disease transmission, would be advantageous in understanding rates of transmission. Finally, translocation of tortoises should be done with extreme caution; disease is typically furtive, and moving individuals or populations of tortoises across a landscape can have unforeseen consequences.[30]
Corollary to research, education may help prevent captive tortoises from coming into contact with wild populations.[37] Education campaigns through veterinarians, government agencies, schools, museums, and community centers throughout the range of the desert tortoise could limit the spread of tortoise diseases into wild populations. Strategies may include encouraging people not to breed their captive tortoises, ensuring that different species of turtles and tortoises are not housed in the same facility (which would help to prevent the spread of novel diseases into the desert tortoise population), ensuring captive tortoises are adequately housed to prevent them from escaping into the wild, and ensuring that captive turtles and tortoises are never released into the wild.
Desert tortoises have been severely affected by disease. Both upper respiratory tract disease and cutaneous dyskeratosis have caused precipitous population declines and die-offs across the entire range of this charismatic species. Both of these diseases are extremely likely to be caused by people, and URTD is easily linked with people releasing captive tortoises into the wild. The combination of scientific research and public education is imperative to curb the spread of disease and aid the tortoise in recovery.
The desert tortoise is the state reptile of California and Nevada.
This article incorporates public domain material from the United States Forest Service document "Gopherus agassizii".
What country are Casablanca and Marrakech located in?
Morocco
Who was the original voice of Winnie the Pooh?
Sterling Price Holloway Jr.
Sterling Price Holloway Jr. (January 4, 1905 – November 22, 1992) was an American character actor and voice actor who appeared in over 100 films and 40 television shows. He was also a voice actor for The Walt Disney Company, well known for his distinctive tenor voice, and served as the original voice of the title character in Walt Disney's Winnie the Pooh.
Born in Cedartown, Georgia, Holloway was named after his father, Sterling Price Holloway, who himself was named after a prominent Confederate general, Sterling "Pap" Price. His mother was Rebecca DeHaven (some sources say her last name was Boothby). He had a younger brother named Boothby. The family owned a grocery store in Cedartown, where his father served as mayor in 1912. After graduating from Georgia Military Academy in 1920 at the age of fifteen, he left Georgia for New York City, where he attended the American Academy of Dramatic Arts.[3] While there, he befriended actor Spencer Tracy, whom he considered one of his favorite working colleagues.
In his late teens, Holloway toured with a stock company of The Shepherd of the Hills,[4][5] performing in one-nighters across much of the American West before returning to New York, where he accepted small walk-on parts from the Theatre Guild and appeared in the Rodgers and Hart revue The Garrick Gaieties in the mid-1920s. A talented singer, he introduced "Manhattan" in 1925, and the following year sang "Mountain Greenery".[3]
He moved to Hollywood in 1926 to begin a film career that lasted almost 50 years. His bushy red hair and high-pitched voice meant that he almost always appeared in comedies. His first film was The Battling Kangaroo (1926), a silent picture. Over the following decades, Holloway would appear with Fred MacMurray, Barbara Stanwyck, Lon Chaney Jr., Clark Gable, Joan Crawford, Bing Crosby, and John Carradine. In 1942, during World War II, Holloway enlisted in the United States Army at the age of 37 and was assigned to the Special Services. He helped develop a show called "Hey Rookie", which ran for nine months and raised $350,000 for the Army Relief Fund.[6] In 1945, Holloway played the role of a medic assigned to an infantry platoon in the critically acclaimed film A Walk in the Sun. During 1946 and 1947, he played the comic sidekick in five Gene Autry Westerns.[7]
Walt Disney originally considered Holloway for the voice of Sleepy in Snow White and the Seven Dwarfs (1937), but chose Pinto Colvig instead. Holloway's voice work in animated films began in 1941 when he was first heard in Dumbo (1941), as the voice of Mr. Stork. Holloway was the voice of the adult Flower in Bambi (1942), the narrator of the Antarctic penguin sequence in The Three Caballeros (1944) and the narrator in the Peter and the Wolf sequence of Make Mine Music (1946).
He was the voice of the Cheshire Cat in Alice in Wonderland (1951), the narrator in The Little House (1952), Susie the Little Blue Coupe (1952), Lambert the Sheepish Lion (1952), Kaa the snake in The Jungle Book (1967), and Roquefort in The Aristocats (1970). He is perhaps best remembered as the voice of Winnie the Pooh in Disney's Winnie the Pooh featurettes through 1977. He was honored as a Disney Legend in 1991, the first person to receive the award in the Voice category. His final role was Hobe Carpenter, a friendly moonshiner who helps Harley Thomas (David Carradine) in Thunder and Lightning (1977).
Holloway acted on many radio programs, including The Railroad Hour, The United States Steel Hour, Suspense and Lux Radio Theater. In the late 1940s, he could be heard in various roles on NBC's "Fibber McGee and Molly". His distinctive tenor voice retained a touch of its Southern drawl and was very recognizable. Holloway was chosen to narrate many children's records, including Uncle Remus Stories (Decca), Mother Goose Nursery Rhymes (Disneyland Records), Walt Disney Presents Rudyard Kipling's Just So Stories (Disneyland Records) and Peter And The Wolf (RCA Victor).
Holloway easily made the transition from radio to television. He appeared on the Adventures of Superman as "Uncle Oscar", an eccentric inventor, and played a recurring role on The Life of Riley. He guest-starred on Fred Waring's CBS television program in the 1950s and appeared on Circus Boy as a hot air balloonist. Some other series on which he performed include Five Fingers (episode "The Temple of the Swinging Doll"), The Untouchables, The Real McCoys ("The Jinx"), Hazel, Pete and Gladys, The Twilight Zone ("What's in the Box"), The Brothers Brannagan, Gilligan's Island, The Andy Griffith Show, The Donald O'Connor Show, Peter Gunn, F Troop, and Moonlighting. During the 1970s, Holloway did commercial voice-overs for Purina Puppy Chow dog food and sang their familiar jingle, "Puppy Chow/For a full year/Till he's full-grown!". He also provided the voice for Woodsy Owl in several 1970s and 1980s United States Forest Service commercials. In 1982 he auditioned for the well-known comic book character Garfield but lost to Lorenzo Music. In 1984, he provided voice-over work for a commercial for Libby's baked beans.[8]
Never married, Holloway once claimed this was because he felt lacking in nothing and did not wish to disturb his pattern of life,[7] but he did adopt a son, Richard.
Holloway died on November 22, 1992 of a cardiac arrest in a Los Angeles hospital. His body was cremated and his ashes scattered in the Pacific Ocean.[9]
Voice actor Hal Smith took over the role of Winnie the Pooh for the 1981 short Winnie the Pooh Discovers the Seasons. He would maintain the role until Jim Cummings replaced him in 1988 for The New Adventures of Winnie the Pooh and also took over most of Holloway's other voice roles, including Kaa in Jungle Cubs and The Jungle Book 2.
What is the age limit to be an astronaut?
There are no age restrictions.
The NASA Astronaut Corps is a unit of the United States National Aeronautics and Space Administration (NASA) that selects, trains, and provides astronauts as crew members for U.S. and international space missions. It is based at Lyndon B. Johnson Space Center in Houston, Texas.
The first U.S. astronaut candidates were selected by NASA in 1959, for its Project Mercury with the objective of orbiting astronauts around the Earth in single-man capsules. The military services were asked to provide a list of military test pilots who met specific qualifications. After stringent screening, NASA announced its selection of the "Mercury Seven" as its first astronauts. Since then, NASA has selected 20 more groups of astronauts, opening the corps to civilians, scientists, doctors, engineers, and school teachers. As of the 2009 astronaut class, 61% of the astronauts selected by NASA have come from military service.[1]
NASA selects candidates from a diverse pool of applicants with a wide variety of backgrounds. From the thousands of applications received, only a few are chosen for the intensive Astronaut Candidate training program. Including the Original Seven, 339 candidates have been selected to date.[2]
The Astronaut Corps is based at the Lyndon B. Johnson Space Center in Houston, although members may be assigned to other locations based on mission requirements, e.g. Soyuz training at Star City, Russia.
The Chief of the Astronaut Office is the most senior leadership position for active astronauts in the Corps. The Chief Astronaut serves as head of the Corps and is the principal adviser to the NASA Administrator on astronaut training and operations. The first Chief Astronaut was Deke Slayton, appointed in 1962. The current Chief Astronaut is Patrick Forrester.
Salaries for newly hired civilian astronauts are based on the federal government's General Schedule pay scale for grades GS-11 through GS-14. The astronaut's grade is based on his or her academic achievements and experience.[3] Astronauts can be promoted up to grade GS-15.[4] As of 2015, astronauts based at the Johnson Space Center in Houston, Texas, earn between $66,026 (GS-11 step 1) and $158,700 (GS-15 step 8 and above).[5]
Military astronauts are detailed to the Johnson Space Center and remain on active duty for pay, benefits, leave, and similar military matters.
There are no age restrictions for the NASA Astronaut Corps. Astronaut candidates have ranged between the ages of 26 and 46, with the average age being 34. Candidates must be U.S. citizens to apply for the program.
There are three broad categories of qualifications: education, work experience, and medical.[6]
Candidates must have a bachelor's degree from an accredited institution in engineering, biological science, physical science or mathematics. The degree must be followed by at least three years of related, progressively responsible, professional experience (graduate work or studies) or at least 1,000 hours of pilot-in-command time in jet aircraft. An advanced degree is desirable and may be substituted for experience (master's degree = 1 year, doctoral degree = 3 years). Teaching experience, including experience at the K–12 level, is considered to be qualifying experience.
Candidates must have the ability to pass the NASA long-duration space flight physical, which includes the following specific requirements:
As of May 2017 the corps has 44 active astronauts[7] and 36 "management astronauts", who are "employed at NASA but are no longer eligible for flight assignment".[8] The highest number of active astronauts at one time was in 2000, when there were 149.[9] All of the current astronaut corps are from the classes of 1996 (Group 16) or later.
There are currently 19 "international active astronauts", "who are assigned to duties at the Johnson Space Center"[10], who were selected by their home agency but trained with and serve alongside their NASA counterparts. International astronauts, Payload Specialists, and Spaceflight Participants are not considered members of the NASA Astronaut Corps.
The term "Astronaut Candidate" (informally "ASCAN"[11]) refers to individuals who have been selected by NASA as candidates for the NASA Astronaut Corps and are currently undergoing a candidacy training program at the Johnson Space Center. The most recent class of Astronaut Candidates was selected in 2017 after receiving more than 18,300 applications. Upon completion of a two-year training program, they will be promoted to the rank of Astronaut.[12]
Selection as an Astronaut Candidate and subsequent promotion to Astronaut does not guarantee the individual will eventually fly in space. Some have voluntarily resigned or been medically disqualified after becoming astronauts but before being selected for flights.
Civilian candidates are expected to remain with the Corps for at least five years after initial training; military candidates are assigned for specific tours. After these time limits, members of the Astronaut Corps may resign or retire at any time.
Three members of the Astronaut Corps were killed during a ground test accident while preparing for the Apollo 1 mission. Eleven were killed during spaceflight, on Space Shuttle missions STS-51-L and STS-107.[13] Another four (Elliot See, Charles Bassett, Theodore Freeman, and Clifton Williams) were killed in T-38 plane crashes during training for space flight during the Gemini and Apollo programs. Another was killed in a 1967 automobile accident, and another died in a 1991 commercial airliner crash while traveling on NASA business.
Two members of the Corps have been involuntarily dismissed: Lisa Nowak and William Oefelein. Both were returned to service with the U.S. Navy.
This article incorporates public domain material from websites or documents of the National Aeronautics and Space Administration.
When did the civil war in South Sudan start?
December 2013
What type of poem is Hickory Dickory Dock?
A nursery rhyme
"Hickory Dickory Dock" or "Hickety Dickety Dock" is a popular English nursery rhyme. It has a Roud Folk Song Index number of 6489.
The most common modern version is:
Hickory, dickory, dock.
The mouse ran up the clock.
The clock struck one,
The mouse ran down,
Hickory, dickory, dock.[1]
Other variants include "down the mouse ran"[2] or "down the mouse run"[3] or "and down he ran" or "and down he run" in place of "the mouse ran down".
The earliest recorded version of the rhyme is in Tommy Thumb's Pretty Song Book, published in London in about 1744, which uses the opening line: 'Hickere, Dickere Dock'.[1] The next recorded version in Mother Goose's Melody (c. 1765), uses 'Dickery, Dickery Dock'.[1]
The rhyme is thought by some commentators to have originated as a counting-out rhyme.[1] Westmorland shepherds in the nineteenth century used the numbers Hevera (8), Devera (9) and Dick (10).[1]
The rhyme is thought to have been based on the astronomical clock at Exeter Cathedral. The clock has a small hole in the door below the face for the resident cat to hunt mice.[4]
Who was the first Mughal ruler of India?
Babur
The Mughal Empire (Persian: گورکانیان, translit. Gūrkāniyān;[8] Urdu: مغلیہ سلطنت, translit. Mughliyah Saltanat)[9][2] or Mogul Empire[10] was an empire in the Indian subcontinent, founded in 1526. It was established and ruled by the Timurid dynasty with Turco-Mongol Chagatai roots from Central Asia, claiming direct descent from both Genghis Khan (through his son Chagatai Khan) and Timur,[11][12][13] but with significant Indian Rajput and Persian ancestry through marriage alliances;[14][15] only the first two Mughal emperors were fully Central Asian.[16] The dynasty was Indo-Persian in culture,[17] combining Persianate culture[10][18] with local Indian cultural influences[17] visible in its traits and customs.[19]
The beginning of the empire is conventionally dated to the victory by its founder Babur over Ibrahim Lodi, the last ruler of the Delhi Sultanate, in the First Battle of Panipat (1526). During the reign of Humayun, the successor of Babur, the empire was briefly interrupted by the Sur Empire. The "classic period" of the Mughal Empire started in 1556 with the ascension of Akbar the Great to the throne. Some Rajput kingdoms continued to pose a significant threat to the Mughal dominance of northwestern India, but most of them were subdued by Akbar. All Mughal emperors were Muslims; Akbar, however, propounded a syncretic religion in the latter part of his life called Dīn-i Ilāhī, as recorded in historical books like Ain-i-Akbari and Dabistān-i Mazāhib.[20] The Mughal Empire did not try to intervene in the local societies during most of its existence, but rather balanced and pacified them through new administrative practices[21][22] and diverse and inclusive ruling elites,[23] leading to more systematic, centralised, and uniform rule.[24] Traditional and newly coherent social groups in northern and western India, such as the Marathas, the Rajputs, the Pashtuns, the Hindu Jats and the Sikhs, gained military and governing ambitions during Mughal rule, which, through collaboration or adversity, gave them both recognition and military experience.[25][26][27][28]
Internal dissatisfaction arose due to the weakness of the empire's administrative and economic systems, leading to its break-up and declarations of independence of its former provinces by the Nawab of Bengal, the Nawab of Awadh, the Nizam of Hyderabad and other small states. In 1739, the Mughals were crushingly defeated in the Battle of Karnal by the forces of Nader Shah, the founder of the Afsharid dynasty in Persia, and Delhi was sacked and looted, drastically accelerating their decline. By the mid-18th century, the Marathas had routed Mughal armies and won over several Mughal provinces from the Punjab to Bengal.[29] During the following century Mughal power had become severely limited, and the last emperor, Bahadur Shah II, had authority over only the city of Shahjahanabad. He issued a firman supporting the Indian Rebellion of 1857 and following the defeat was therefore tried by the British East India Company for treason, imprisoned and exiled to Rangoon.[30] The last remnants of the empire were formally taken over by the British, and the Government of India Act 1858 let the British Crown formally assume direct control of India in the form of the new British Raj.
The Mughal Empire at its peak extended over nearly all of the Indian subcontinent[6] and parts of Afghanistan. It was the third largest empire to have existed in the Indian subcontinent (along with the Maurya Empire and the British Indian Empire), spanning approximately four million square kilometres at its zenith,[5] second only to the Maurya Empire. The maximum expansion was reached during the reign of Aurangzeb, who ruled over more than 150 million subjects, nearly one quarter of the world's population at the time.[31] The Mughal Empire also ushered in a period of proto-industrialization,[32] and around the 17th century, Mughal India became the world's largest economic and manufacturing power,[33] producing a quarter of global industrial output up until the 18th century.[34][35] The Mughal Empire is considered "India's last golden age"[36] and one of the three Islamic Gunpowder Empires (along with the Ottoman Empire and Safavid Persia).[37] The reign of Shah Jahan, the fifth emperor, between 1628 and 1658, was the zenith of Mughal architecture with famous monuments such as the Taj Mahal and Moti Masjid at Agra, the Red Fort, the Jama Masjid, Delhi, and the Lahore Fort.
Contemporaries referred to the empire founded by Babur as the Timurid empire,[38] which reflected the heritage of his dynasty, and this was the term preferred by the Mughals themselves.[39]
The Mughal designation for their own dynasty was Gurkani (Persian: گورکانیان, Gūrkāniyān, meaning "sons-in-law").[8] The use of Mughal derived from the Arabic and Persian corruption of Mongol, and it emphasised the Mongol origins of the Timurid dynasty.[37] The term gained currency during the 19th century, but remains disputed by Indologists.[40] Similar terms had been used to refer to the empire, including "Mogul" and "Moghul".[10][41] Nevertheless, Babur's ancestors were sharply distinguished from the classical Mongols insofar as they were oriented towards Persian rather than Turco-Mongol culture.[42]
Another name for the empire was Hindustan, which was documented in the Ain-i-Akbari, and which has been described as the closest to an official name for the empire.[43] In the west, the term "Mughal" was used for the emperor, and by extension, the empire as a whole.[44]
The Mughal Empire was founded by Babur (reigned 1526–1530), a Central Asian ruler who was descended from the Turco-Mongol conqueror Timur (the founder of the Timurid Empire) on his father's side and from Chagatai, the second son of the Mongol ruler Genghis Khan, on his mother's side.[45] Ousted from his ancestral domains in Central Asia, Babur turned to India to satisfy his ambitions. He established himself in Kabul and then pushed steadily southward into India from Afghanistan through the Khyber Pass.[45] Babur's forces occupied much of northern India after his victory at Panipat in 1526.[45] The preoccupation with wars and military campaigns, however, did not allow the new emperor to consolidate the gains he had made in India.[45]
The instability of the empire became evident under his son, Humayun (reigned 1530–1556), who was driven out of India and into Persia by rebels.[45] The Sur Empire (1540–1555), founded by Sher Shah Suri (reigned 1540–1545), briefly interrupted Mughal rule. Humayun's exile in Persia established diplomatic ties between the Safavid and Mughal Courts, and led to increasing Persian cultural influence in the Mughal Empire. The restoration of Mughal rule began after Humayun's triumphant return from Persia in 1555, but he died from a fatal accident shortly afterwards.[45]
Akbar the Great (reigned 1556–1605) was born Jalal-ud-din Muhammad[46] in the Rajput Umarkot Fort,[47] to Humayun and his wife Hamida Banu Begum, a Persian princess.[48] Akbar succeeded to the throne under a regent, Bairam Khan, who helped consolidate the Mughal Empire in India.[45] Through warfare and diplomacy, Akbar was able to extend the empire in all directions and controlled almost the entire Indian subcontinent north of the Godavari River. He created a new class of nobility loyal to him from the military aristocracy of India's social groups, implemented a modern government, and supported cultural developments.[45] At the same time, Akbar intensified trade with European trading companies. India developed a strong and stable economy, leading to commercial expansion and economic development. Akbar allowed free expression of religion, and attempted to resolve socio-political and cultural differences in his empire by establishing a new religion, Din-i-Ilahi, with strong characteristics of a ruler cult.[45] He left his successors an internally stable state, which was in the midst of its golden age, but before long signs of political weakness would emerge.[45]
Jahangir (born Salim,[49] reigned 1605–1627) was born to Akbar and his wife Mariam-uz-Zamani, an Indian Rajput princess.[50] Jahangir ruled the empire at its peak, but he was addicted to opium, neglected the affairs of the state, and came under the influence of rival court cliques.[45] Shah Jahan (reigned 1628–1658) was born to Jahangir and his wife Jagat Gosaini, a Rajput princess.[49] During the reign of Shah Jahan, the culture and splendour of the luxurious Mughal court reached its zenith, as exemplified by the Taj Mahal.[45] The maintenance of the court, at this time, began to cost more than the revenue.[45]
Shah Jahan's eldest son, the liberal Dara Shikoh, became regent in 1658, as a result of his father's illness. However, a younger son, Aurangzeb (reigned 1658–1707), allied with the Islamic orthodoxy against his brother, who championed a syncretistic Hindu-Muslim culture, and ascended to the throne. Aurangzeb defeated Dara in 1659 and had him executed.[45] Although Shah Jahan fully recovered from his illness, Aurangzeb declared him incompetent to rule and had him imprisoned. During Aurangzeb's reign, the empire gained political strength once more.[45] Aurangzeb expanded the empire to include almost the whole of South Asia, but at his death in 1707, many parts of the empire were in open revolt.[45] Aurangzeb is considered India's most controversial king,[51] with some historians arguing his religious conservatism and intolerance undermined the stability of Mughal society,[45] while other historians question this, noting that he built Hindu temples,[52] employed significantly more Hindus in his imperial bureaucracy than his predecessors did, opposed bigotry against Hindus and Shia Muslims,[53] and married Hindu Rajput princess Nawab Bai.[49]
Aurangzeb's son, Shah Alam, repealed the religious policies of his father, and attempted to reform the administration. However, after his death in 1712, the Mughal dynasty sank into chaos and violent feuds. In 1719 alone, four emperors successively ascended the throne.[45]
During the reign of Muhammad Shah (reigned 1719–1748), the empire began to break up, and vast tracts of central India passed from Mughal to Maratha hands. The far-off Indian campaign of Nadir Shah, who had previously reestablished Iranian suzerainty over most of West Asia, the Caucasus, and Central Asia, culminated with the Sack of Delhi and shattered the remnants of Mughal power and prestige.[45] Many of the empire's elites now sought to control their own affairs, and broke away to form independent kingdoms.[45] But, according to Sugata Bose and Ayesha Jalal, the Mughal Emperor continued to be the highest manifestation of sovereignty. Not only the Muslim gentry, but the Maratha, Hindu, and Sikh leaders took part in ceremonial acknowledgements of the emperor as the sovereign of India.[54]
The Mughal Emperor Shah Alam II (1759–1806) made futile attempts to reverse the Mughal decline, but ultimately had to seek the protection of the Emir of Afghanistan, Ahmed Shah Abdali, which led to the Third Battle of Panipat between the Maratha Empire and the Afghans led by Abdali in 1761. In 1771, the Marathas recaptured Delhi from Afghan control and in 1784 they officially became the protectors of the emperor in Delhi,[55] a state of affairs that continued until after the Third Anglo-Maratha War. Thereafter, the British East India Company became the protectors of the Mughal dynasty in Delhi.[54] The British East India Company took control of the former Mughal province of Bengal-Bihar in 1793 after it abolished local rule (Nizamat); this state of affairs lasted until 1858, marking the beginning of the British colonial era over the Indian subcontinent. By 1857 a considerable part of former Mughal India was under the East India Company's control. After a crushing defeat in the war of 1857–1858, which he nominally led, the last Mughal, Bahadur Shah Zafar, was deposed by the British East India Company and exiled in 1858. Through the Government of India Act 1858 the British Crown assumed direct control of East India Company-held territories in India in the form of the new British Raj. In 1876 the British Queen Victoria assumed the title of Empress of India.
Historians have offered numerous explanations for the rapid collapse of the Mughal Empire between 1707 and 1720, after a century of growth and prosperity. In fiscal terms the throne lost the revenues needed to pay its chief officers, the emirs (nobles) and their entourages. The emperor lost authority, as the widely scattered imperial officers lost confidence in the central authorities, and made their own deals with local men of influence. The imperial army, bogged down in long, futile wars against the more aggressive Marathas lost its fighting spirit. Finally came a series of violent political feuds over control of the throne. After the execution of emperor Farrukhsiyar in 1719, local Mughal successor states took power in region after region.[56]
Contemporary chroniclers bewailed the decay they witnessed, a theme picked up by the first British historians who wanted to underscore the need for a British-led rejuvenation.[57]
Since the 1970s historians have taken multiple approaches to the decline, with little consensus on which factor was dominant. The psychological interpretations emphasise depravity in high places, excessive luxury, and increasingly narrow views that left the rulers unprepared for an external challenge. A Marxist school (led by Irfan Habib and based at Aligarh Muslim University) emphasises excessive exploitation of the peasantry by the rich, which stripped away the will and the means to support the regime.[58] Karen Leonard has focused on the failure of the regime to work with Hindu bankers, whose financial support was increasingly needed; the bankers then helped the Maratha and the British.[59] In a religious interpretation, some scholars argue that the Hindu powers revolted against the rule of a Muslim dynasty.[60] Finally, other scholars argue that the very prosperity of the Empire inspired the provinces to achieve a high degree of independence, thus weakening the imperial court.[61]
Jeffrey G. Williamson has argued that the Indian economy went through deindustrialization in the latter half of the 18th century as an indirect outcome of the collapse of the Mughal Empire, with British rule later causing further deindustrialization.[62] According to Williamson, the decline of the Mughal Empire led to a decline in agricultural productivity, which drove up food prices, then nominal wages, and then textile prices, which led to India losing a share of the world textile market to Britain even before it had superior factory technology.[63] Indian textiles, however, still maintained a competitive advantage over British textiles up until the 19th century.[64]
Subah (Urdu: صوبہ) was the term for a province in the Mughal Empire. The word is derived from Arabic. The governor of a Subah was known as a subahdar (sometimes also referred to as a "Subah"[65]), which later became subedar, referring to an officer in the Indian Army. The subahs were established by padshah (emperor) Akbar during his administrative reforms of 1572–1580; initially they numbered 12, but his conquests expanded the number of subahs to 15 by the end of his reign. Subahs were divided into Sarkars, or districts. Sarkars were further divided into Parganas or Mahals. His successors, most notably Aurangzeb, expanded the number of subahs further through their conquests. As the empire began to dissolve in the early 18th century, many subahs became effectively independent, or were conquered by the Marathas or the British.
The original twelve subahs created as a result of administrative reform by Akbar:
The Indian economy was large and prosperous under the Mughal Empire.[66] During the Mughal era, the gross domestic product (GDP) of India in 1600 was estimated at about 22.4% of the world economy, the second largest in the world, behind only Ming China but larger than Europe. By 1700, the GDP of Mughal India had risen to 24.4% of the world economy, the largest in the world, larger than both Qing China and Western Europe.[67] Mughal India was the world leader in manufacturing,[33] producing about 25% of the world's industrial output up until the 18th century.[35] India's GDP growth increased under the Mughal Empire, with India's GDP having a faster growth rate during the Mughal era than in the 1,500 years prior to the Mughal era.[67] Mughal India's economy has been described as a form of proto-industrialization, like that of 18th-century Western Europe prior to the Industrial Revolution.[32]
The Mughals were responsible for building an extensive road system, creating a uniform currency, and the unification of the country.[68] The empire had an extensive road network, which was vital to the economic infrastructure, built by a public works department set up by the Mughals which designed, constructed and maintained roads linking towns and cities across the empire, making trade easier to conduct.[66]
The Mughals adopted and standardized the rupee (rupiya, or silver) and dam (copper) currencies introduced by Sur Emperor Sher Shah Suri during his brief rule.[69] The currency was initially 48 dams to a single rupee in the beginning of Akbar's reign, before it later became 38 dams to a rupee in the 1580s, with the dam's value rising further in the 17th century as a result of new industrial uses for copper, such as in bronze cannons and brass utensils. The dam was initially the most common coin in Akbar's time, before being replaced by the rupee as the most common coin in succeeding reigns.[7] The dam's value was later worth 30 to a rupee towards the end of Jahangir's reign, and then 16 to a rupee by the 1660s.[70] The Mughals minted coins with high purity, never dropping below 96%, and without debasement until the 1720s.[71]
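The changing dam-to-rupee rate quoted above can be made concrete with a small worked example. The sketch below simply applies the quoted rates; the period labels are approximate groupings chosen only for this illustration.

```python
# Illustrative conversion between copper dams and silver rupees using the
# exchange rates quoted above. Period labels are approximate and assumed.
DAMS_PER_RUPEE = {
    "early Akbar": 48,    # beginning of Akbar's reign
    "1580s": 38,
    "late Jahangir": 30,  # towards the end of Jahangir's reign
    "1660s": 16,
}

def dams_to_rupees(dams: int, period: str) -> float:
    """Convert an amount in dams into rupees at the quoted rate for a period."""
    return dams / DAMS_PER_RUPEE[period]

# The same 480 dams were worth 10 rupees early in Akbar's reign but 30 rupees
# by the 1660s, reflecting copper's rising value relative to silver.
print(dams_to_rupees(480, "early Akbar"))  # 10.0
print(dams_to_rupees(480, "1660s"))        # 30.0
```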
Despite India having its own stocks of gold and silver, the Mughals produced minimal gold of their own, but mostly minted coins from imported bullion, as a result of the empire's strong export-driven economy, with global demand for Indian agricultural and industrial products drawing a steady stream of precious metals into India.[7] Around 80% of Mughal India's imports were bullion, mostly silver,[72] with major sources of imported bullion including the New World and Japan,[71] which in turn imported large quantities of textiles and silk from the Bengal Subah province.[73]
The Mughal Empire's workforce in the early 17th century consisted of about 64% in the primary sector (including agriculture) and 36% in the secondary and tertiary sectors, including over 11% in the secondary sector (manufacturing) and about 25% in the tertiary sector (service).[74] Mughal India's workforce had a higher percentage in the non-primary sector than Europe's workforce did at the time; agriculture accounted for 65–90% of Europe's workforce in 1700, and 65–75% in 1750, including 65% of England's workforce in 1750.[75] In terms of contributions to the Mughal economy, in the late 16th century, the primary sector contributed 52.4%, the secondary sector 18.2% and the tertiary sector 29.4%; the secondary sector contributed a higher percentage than in early 20th-century British India, where the secondary sector only contributed 11.2% to the economy.[76] In terms of urban-rural divide, 18% of Mughal India's labour force were urban and 82% were rural, contributing 52% and 48% to the economy, respectively.[77]
Real wages and living standards in 18th-century Mughal Bengal and South India were higher than in Britain, which in turn had the highest living standards in Europe.[78][62] According to economic historian Paul Bairoch, India as well as China had a higher GNP per capita than Europe up until the late 18th century,[79][80] before Western European per-capita income pulled ahead after 1800.[81] Mughal India also had a higher per-capita income in the late 16th century than British India did in the early 20th century.[76] However, in a system where wealth was hoarded by elites, wages were depressed for manual labour,[82] though no less than labour wages in Europe at the time.[78] In Mughal India, there was a generally tolerant attitude towards manual labourers, with some religious cults in northern India proudly asserting a high status for manual labour. While slavery also existed, it was limited largely to household servants.[82]
Indian agricultural production increased under the Mughal Empire.[66] A variety of crops were grown, including food crops such as wheat, rice, and barley, and non-food cash crops such as cotton, indigo and opium. By the mid-17th century, Indian cultivators had begun to extensively grow two new crops from the Americas, maize and tobacco.[66]
The Mughal administration emphasized agrarian reform, which began under the non-Mughal emperor Sher Shah Suri, the work of which Akbar adopted and furthered with more reforms. The civil administration was organized in a hierarchical manner on the basis of merit, with promotions based on performance.[3] The Mughal government funded the building of irrigation systems across the empire, which produced much higher crop yields and increased the net revenue base, leading to increased agricultural production.[66]
A major Mughal reform introduced by Akbar was a new land revenue system called zabt. He replaced the tribute system, previously common in India and used by Tokugawa Japan at the time, with a monetary tax system based on a uniform currency.[83] The revenue system was biased in favour of higher value cash crops such as cotton, indigo, sugar cane, tree-crops, and opium, providing state incentives to grow cash crops, in addition to rising market demand.[84] Under the zabt system, the Mughals also conducted extensive cadastral surveying to assess the area of land under plow cultivation, with the Mughal state encouraging greater land cultivation by offering tax-free periods to those who brought new land under cultivation.[85]
Mughal agriculture was in some ways advanced compared to European agriculture at the time, exemplified by the common use of the seed drill among Indian peasants before its adoption in Europe.[86] While the average peasant across the world was only skilled in growing very few crops, the average Indian peasant was skilled in growing a wide variety of food and non-food crops, increasing their productivity.[87] Indian peasants were also quick to adapt to profitable new crops, such as maize and tobacco from the New World being rapidly adopted and widely cultivated across Mughal India between 1600 and 1650. Bengali farmers rapidly learned techniques of mulberry cultivation and sericulture, establishing Bengal Subah as a major silk-producing region of the world.[84] Sugar mills appeared in India shortly before the Mughal era. Evidence for the use of a draw bar for sugar-milling appears at Delhi in 1540, but may also date back earlier, and was mainly used in the northern Indian subcontinent. Geared sugar rolling mills first appeared in Mughal India, using the principle of rollers as well as worm gearing, by the 17th century.[88]
According to evidence cited by the economic historians Immanuel Wallerstein, Irfan Habib, Percival Spear, and Ashok Desai, per-capita agricultural output and standards of consumption in 17th-century Mughal India were higher than in 17th-century Europe and early 20th-century British India.[89] The increased agricultural productivity led to lower food prices. In turn, this benefited the Indian textile industry. Compared to Britain, the price of grain was about one-half in South India and one-third in Bengal, in terms of silver coinage. This resulted in lower silver coin prices for Indian textiles, giving them a price advantage in global markets.[78]
Up until the 18th century, Mughal India was the most important center of manufacturing in international trade.[33] Up until 1750, India produced about 25% of the world's industrial output.[62] Manufactured goods and cash crops from the Mughal Empire were sold throughout the world. Key industries included textiles, shipbuilding, and steel. Processed products included cotton textiles, yarns, thread, silk, jute products, metalware, and foods such as sugar, oils and butter.[66] The growth of manufacturing industries in the Indian subcontinent during the Mughal era in the 17th–18th centuries has been referred to as a form of proto-industrialization, similar to 18th-century Western Europe prior to the Industrial Revolution.[32]
In early modern Europe, there was significant demand for products from Mughal India, particularly cotton textiles, as well as goods such as spices, peppers, indigo, silks, and saltpeter (for use in munitions).[66] European fashion, for example, became increasingly dependent on Mughal Indian textiles and silks. From the late 17th century to the early 18th century, Mughal India accounted for 95% of British imports from Asia, and the Bengal Subah province alone accounted for 40% of Dutch imports from Asia.[90] In contrast, there was very little demand for European goods in Mughal India, which was largely self-sufficient, thus Europeans had very little to offer, except for some woolens, unprocessed metals and a few luxury items. The trade imbalance caused Europeans to export large quantities of gold and silver to Mughal India in order to pay for South Asian imports.[66] Indian goods, especially those from Bengal, were also exported in large quantities to other Asian markets, such as Indonesia and Japan.[73]
The largest manufacturing industry in the Mughal Empire was textile manufacturing, particularly cotton textile manufacturing, which included the production of piece goods, calicos, and muslins, available unbleached and in a variety of colours. The cotton textile industry was responsible for a large part of the empire's international trade.[66] India had a 25% share of the global textile trade in the early 18th century.[91] Indian cotton textiles were the most important manufactured goods in world trade in the 18th century, consumed across the world from the Americas to Japan.[33] By the early 18th century, Mughal Indian textiles were clothing people across the Indian subcontinent, Southeast Asia, Europe, the Americas, Africa, and the Middle East.[63] The most important center of cotton production was the Bengal province, particularly around its capital city of Dhaka.[92]
Bengal accounted for more than 50% of textiles and around 80% of silks imported by the Dutch from Asia.[90] Bengali silk and cotton textiles were exported in large quantities to Europe, Indonesia, and Japan,[73] and Bengali muslin textiles from Dhaka were sold in Central Asia, where they were known as "daka" textiles.[92] Indian textiles dominated the Indian Ocean trade for centuries, were sold in the Atlantic Ocean trade, and had a 38% share of the West African trade in the early 18th century, while Indian calicos were a major force in Europe, and Indian textiles accounted for 20% of total English trade with Southern Europe in the early 18th century.[62]
The worm gear roller cotton gin, which was invented in India during the early Delhi Sultanate era of the 13th–14th centuries, came into use in the Mughal Empire some time around the 16th century,[88] and is still used in India through to the present day.[93] Another innovation, the incorporation of the crank handle in the cotton gin, first appeared in India some time during the late Delhi Sultanate or the early Mughal Empire.[94] The production of cotton, which may have largely been spun in the villages and then taken to towns in the form of yarn to be woven into cloth textiles, was advanced by the diffusion of the spinning wheel across India shortly before the Mughal era, lowering the costs of yarn and helping to increase demand for cotton. The diffusion of the spinning wheel, and the incorporation of the worm gear and crank handle into the roller cotton gin, led to greatly expanded Indian cotton textile production during the Mughal era.[95]
It was reported that, with an Indian cotton gin, which is half machine and half tool, one man and one woman could clean 28 pounds of cotton per day. With a modified Forbes version, one man and a boy could produce 250 pounds per day. If oxen were used to power 16 of these machines, and a few people's labour was used to feed them, they could produce as much work as 750 people did formerly.[96]
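The reported figures imply a large jump in per-worker throughput. The rough arithmetic below uses only the numbers quoted in the passage; the number of people tending the ox-driven machines is not given, so the value of four used here is an assumption.

```python
# Per-worker daily throughput implied by the figures quoted above (pounds of cotton).
# The "few people" tending the ox-driven machines is not specified; 4 is an assumption.
hand_gin = 28 / 2            # one man and one woman: 14 lb per worker per day
forbes_gin = 250 / 2         # one man and one boy: 125 lb per worker per day

# 16 ox-powered machines were said to match the work formerly done by 750 people.
ox_output = 750 * hand_gin   # total daily output replaced (lb)
tenders = 4                  # assumed number of people feeding the machines
ox_per_worker = ox_output / tenders

print(f"Hand gin:   {hand_gin:.0f} lb/worker/day")
print(f"Forbes gin: {forbes_gin:.0f} lb/worker/day")
print(f"Ox-powered: ~{ox_per_worker:.0f} lb/worker/day across {tenders} tenders")
```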
Mughal India had a large shipbuilding industry, which was also largely centered in the Bengal province. In terms of shipbuilding tonnage during the 16th–18th centuries, the annual output of Bengal alone totaled around 2,232,500 tons, larger than the combined output of the Dutch (450,000–550,000 tons), the British (340,000 tons), and North America (23,061 tons).[97]
The Mughals maintained a small fleet for carrying pilgrims to Mecca, and imported Arabian horses in Surat. Debal in Sindh was mostly autonomous. The Mughals also maintained various river fleets of Dhows, which transported soldiers over rivers and fought rebels. Among its admirals were Yahya Saleh, Munnawar Khan, and Muhammad Saleh Kamboh. The Mughals also protected the Siddis of Janjira. Its sailors were renowned and often voyaged to China and the East African Swahili Coast, together with some Mughal subjects carrying out private-sector trade.
Indian shipbuilding, particularly in Bengal, was advanced compared to European shipbuilding at the time, with Indians selling ships to European firms. Ship-repairing, for example, was very advanced in Bengal, where European shippers visited to repair vessels.[97] An important innovation in shipbuilding was the introduction of a flushed deck design in Bengal rice ships, resulting in hulls that were stronger and less prone to leak than the structurally weak hulls of traditional European ships built with a stepped deck design. The British East India Company later duplicated the flushed deck and hull designs of Bengal rice ships in the 1760s, leading to significant improvements in seaworthiness and navigation for European ships during the Industrial Revolution.[98]
The Bengal Subah province was especially prosperous from the time of its takeover by the Mughals in 1590 until the British East India Company seized control in 1757.[99] It was the Mughal Empire's wealthiest province,[100] and the economic powerhouse of the Mughal Empire, generating 50% of the empire's GDP.[101] Domestically, much of India depended on Bengali products such as rice, silks and cotton textiles. Overseas, Europeans depended on Bengali products such as cotton textiles, silks and opium; Bengal accounted for 40% of Dutch imports from Asia, for example, including more than 50% of textiles and around 80% of silks.[90] From Bengal, saltpeter was also shipped to Europe, opium was sold in Indonesia, raw silk was exported to Japan and the Netherlands, and cotton and silk textiles were exported to Europe, Indonesia and Japan.[73]
Bengal was described as the Paradise of Nations by Mughal emperors.[102] The Mughals introduced agrarian reforms, including the modern Bengali calendar.[103] The calendar played a vital role in developing and organising harvests, tax collection and Bengali culture in general, including the New Year and Autumn festivals. The province was a leading producer of grains, salt, pearls, fruits, liquors and wines, precious metals and ornaments.[104] Its handloom industry flourished under royal warrants, making the region a hub of the worldwide muslin trade, which peaked in the 17th and 18th centuries. The provincial capital Dhaka became the commercial capital of the empire. The Mughals expanded cultivated land in the Bengal delta under the leadership of Sufis, which consolidated the foundation of Bengali Muslim society.[105]
After 150 years of rule by Mughal viceroys, Bengal gained semi-independence as a dominion under the Nawab of Bengal in 1717. The Nawabs permitted European companies to set up trading posts across the region, including firms from Britain, France, the Netherlands, Denmark, Portugal and Austria-Hungary. An Armenian community dominated banking and shipping in major cities and towns. The Europeans regarded Bengal as the richest place for trade.[104] By the late 18th century, the British displaced the Mughal ruling class in Bengal.
India's population growth accelerated under the Mughal Empire, with an unprecedented economic and demographic upsurge which boosted the Indian population by 60%[106] to 253% in 200 years during 1500–1700.[107] The Indian population had a faster growth during the Mughal era than at any known point in Indian history prior to the Mughal era.[106][67] The increased population growth rate was stimulated by Mughal agrarian reforms that intensified agricultural production.[84] By the time of Aurangzeb's reign, there were a total of 455,698 villages in the Mughal Empire.[108]
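The quoted increase of 60% to 253% over roughly two centuries corresponds to a fairly modest compound annual growth rate; the quick check below derives it from the figures in the paragraph above.

```python
# Implied compound annual growth rates for the population increase quoted above
# (60% to 253% over the two centuries 1500-1700).
def annual_growth_rate(total_increase_pct: float, years: int = 200) -> float:
    """Convert a total percentage increase over `years` into a compound annual rate."""
    return (1 + total_increase_pct / 100) ** (1 / years) - 1

for pct in (60, 253):
    rate = annual_growth_rate(pct)
    print(f"{pct}% over 200 years  ->  {rate * 100:.2f}% per year")
```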
The following table gives population estimates for the Mughal Empire, compared to the total population of India, including the regions of modern Pakistan and Bangladesh, and compared to the world population:
Cities and towns boomed under the Mughal Empire, which had a relatively high degree of urbanization for its time, with 15% of its population living in urban centres.[36] This was higher than the percentage of the urban population in contemporary Europe at the time and higher than that of British India in the 19th century;[36] the level of urbanization in Europe did not reach 15% until the 19th century.[110]
Under Akbar's reign in 1600, the Mughal Empire's urban population was up to 17 million people, 15% of the empire's total population. This was larger than the entire urban population in Europe at the time, and even a century later in 1700, the urban population of England, Scotland and Wales did not exceed 13% of its total population,[108] while British India had an urban population that was under 13% of its total population in 1800 and 9.3% in 1881, a decline from the earlier Mughal era.[111] By 1700, Mughal India had an urban population of 23 million people, larger than British India's urban population of 22.3 million in 1871.[112]
The historian Nizamuddin Ahmad (1551–1621) reported that, under Akbar's reign, there were 120 large cities and 3,200 townships.[36] A number of cities in India had a population between a quarter-million and half-million people,[36] with larger cities including Agra (in Agra Subah) with up to 800,000 people, Lahore (in Lahore Subah) with up to 700,000 people,[113] Dhaka (in Bengal Subah) with over 1 million people,[114] and Delhi (in Delhi Subah) with over 600,000 people.[115]
Cities acted as markets for the sale of goods, and provided homes for a variety of merchants, traders, shopkeepers, artisans, moneylenders, weavers, craftspeople, officials, and religious figures.[66] However, a number of cities were military and political centres, rather than manufacturing or commerce centres.[116]
Mughal influence can be seen in cultural contributions such as:
The Mughals built Maktab schools in every province under their authority, where youth were taught the Quran and Islamic law such as the Fatawa-i-Alamgiri in their indigenous languages.
The Mughals made a major contribution to the Indian subcontinent with the development of their unique architecture. Many monuments were built during the Mughal era by the Muslim emperors, especially Shah Jahan, including the Taj Mahal, a UNESCO World Heritage Site widely regarded as one of the finest examples of Mughal architecture. Other World Heritage Sites include Humayun's Tomb, Fatehpur Sikri, the Red Fort, the Agra Fort, and the Lahore Fort.
The palaces, tombs, and forts built by the dynasty stand today in Agra, Aurangabad, Delhi, Dhaka, Fatehpur Sikri, Jaipur, Lahore, Kabul, Sheikhupura, and many other cities of India, Pakistan, Afghanistan, and Bangladesh.[122] With few memories of Central Asia, Babur's descendants absorbed traits and customs of South Asia[19] and became more or less naturalized.
Although the land the Mughals once ruled has separated into what is now India, Pakistan, Bangladesh, and Afghanistan, their influence can still be seen widely today. Tombs of the emperors are spread throughout India, Afghanistan,[123] and Pakistan.
The Mughal artistic tradition was eclectic, borrowing from the European Renaissance as well as from Persian and Indian sources. Kumar concludes, "The Mughal painters borrowed individual motifs and certain naturalistic effects from Renaissance and Mannerist painting, but their structuring principle was derived from Indian and Persian traditions."[124]
Although Persian was the dominant and "official" language of the empire, the language of the elite was a Persianised form of Hindustani called Urdu. The language was written in a type of Perso-Arabic script known as Nastaliq, and with literary conventions and specialised vocabulary borrowed from Persian, Arabic and Turkic; the dialect was eventually given its own name of Urdu.[125] Modern Hindi, which uses Sanskrit-based vocabulary along with Perso-Arabic loan words is mutually intelligible with Urdu.[125]
Mughal India was one of the three Islamic Gunpowder Empires, along with the Ottoman Empire and Safavid Persia.[37][126][127] By the time he was invited by the Lodi governor of Lahore, Daulat Khan, to support his rebellion against Lodi Sultan Ibrahim Khan, Babur was familiar with gunpowder firearms and field artillery, and a method for deploying them. Babur had employed the Ottoman expert Ustad Ali Quli, who showed Babur the standard Ottoman formation: artillery and firearm-equipped infantry protected by wagons in the center, with mounted archers on both wings. Babur used this formation at the First Battle of Panipat in 1526, where the Afghan and Rajput forces loyal to the Delhi Sultanate, though superior in numbers, lacked gunpowder weapons and were defeated. The decisive victory of the Timurid forces is one reason opponents rarely met Mughal princes in pitched battle over the course of the empire's history.[128] In India, guns made of bronze were recovered from Calicut (1504) and Diu (1533).[129]
Fathullah Shirazi (c. 1582), a Persian polymath and mechanical engineer who worked for Akbar, developed an early multi-shot gun. As opposed to the polybolos and repeating crossbows used earlier in ancient Greece and China, respectively, Shirazi's rapid-firing gun had multiple gun barrels that fired hand cannons loaded with gunpowder. It may be considered a version of a volley gun.[130]
By the 17th century, Indians were manufacturing a diverse variety of firearms; large guns in particular became visible in Tanjore, Dacca, Bijapur and Murshidabad.[131] Gujarat supplied Europe with saltpeter for use in gunpowder warfare during the 17th century,[132] and Mughal Bengal and Malwa also participated in saltpeter production.[132] The Dutch, French, Portuguese and English used Chhapra as a center of saltpeter refining.[133]
In the 16th century, Akbar was the first to initiate and use metal cylinder rockets known as bans, particularly against war elephants, during the Battle of Sanbal.[134] In 1657, the Mughal Army used rockets during the Siege of Bidar.[135] Prince Aurangzeb's forces discharged rockets and grenades while scaling the walls. Sidi Marjan was mortally wounded when a rocket struck his large gunpowder depot, and after twenty-seven days of hard fighting Bidar was captured by the victorious Mughals.[135]
In A History of Greek Fire and Gunpowder, James Riddick Partington described Indian rockets and explosive mines:[129]
The Indian war rockets were formidable weapons before such rockets were used in Europe. They had bamboo rods, a rocket-body lashed to the rod, and iron points. They were directed at the target and fired by lighting the fuse, but the trajectory was rather erratic. The use of mines and counter-mines with explosive charges of gunpowder is mentioned for the times of Akbar and Jahangir.
Later, the Mysorean rockets were upgraded versions of Mughal rockets used during the Siege of Jinji by the progeny of the Nawab of Arcot. Hyder Ali's father, Fatah Muhammad, the constable at Budikote, commanded a corps consisting of 50 rocketmen (Cushoon) for the Nawab of Arcot. Hyder Ali realised the importance of rockets and introduced advanced versions of metal cylinder rockets. These rockets turned fortunes in favour of the Sultanate of Mysore during the Second Anglo-Mysore War, particularly during the Battle of Pollilur. In turn, the Mysorean rockets were the basis for the Congreve rockets, which Britain deployed in the Napoleonic Wars against France and the War of 1812 against the United States.[136]
While there appears to have been little concern for theoretical astronomy, Mughal astronomers made advances in observational astronomy and produced nearly a hundred Zij treatises. Humayun built a personal observatory near Delhi; Jahangir and Shah Jahan were also intending to build observatories, but were unable to do so. The astronomical instruments and observational techniques used at the Mughal observatories were mainly derived from Islamic astronomy.[137][138] In the 17th century, the Mughal Empire saw a synthesis between Islamic and Hindu astronomy, where Islamic observational instruments were combined with Hindu computational techniques.[137][138]
During the decline of the Mughal Empire, the Hindu king Jai Singh II of Amber continued the work of Mughal astronomy. In the early 18th century, he built several large observatories called Yantra Mandirs, in order to rival Ulugh Beg's Samarkand observatory, and in order to improve on the earlier Hindu computations in the Siddhantas and Islamic observations in Zij-i-Sultani. The instruments he used were influenced by Islamic astronomy, while the computational techniques were derived from Hindu astronomy.[137][138]
Sake Dean Mahomed had learned much of Mughal chemistry and understood the techniques used to produce various alkali and soaps to produce shampoo. He was also a notable writer who described the Mughal Emperor Shah Alam II and the cities of Allahabad and Delhi in rich detail and also made note of the glories of the Mughal Empire.
In Britain, Sake Dean Mahomed was appointed as shampooing surgeon to both Kings George IV and William IV.[139]
One of the most remarkable astronomical instruments invented in Mughal India is the seamless celestial globe. It was invented in Kashmir by Ali Kashmiri ibn Luqman in 998 AH (1589–90 CE), and twenty other such globes were later produced in Lahore and Kashmir during the Mughal Empire. Before they were rediscovered in the 1980s, it was believed by modern metallurgists to be technically impossible to produce metal globes without any seams.[140]
Which two forms of energy do muscles produce?
triglyceride🚨Glycogen is a multibranched polysaccharide of glucose that serves as a form of energy storage in humans,[2] animals,[3] fungi, and bacteria. The polysaccharide structure represents the main storage form of glucose in the body.
Glycogen functions as one of two forms of long-term energy reserves, with the other form being triglyceride stores in adipose tissue (i.e., body fat). In humans, glycogen is made and stored primarily in the cells of the liver and skeletal muscle.[2][4] In the liver, glycogen can make up from 5–6% of the organ's fresh weight, and the liver of an adult weighing 70 kg can store roughly 100–120 grams of glycogen.[2][5] In skeletal muscle, glycogen is found in a low concentration (1–2% of the muscle mass), and the skeletal muscle of an adult weighing 70 kg can store roughly 400 grams of glycogen.[2] The amount of glycogen stored in the body, particularly within the muscles and liver, mostly depends on physical training, basal metabolic rate, and eating habits. Small amounts of glycogen are also found in other tissues and cells, including the kidneys, red blood cells,[6][7][8] white blood cells,[medical citation needed] and glial cells in the brain.[9] The uterus also stores glycogen during pregnancy to nourish the embryo.[10]
Approximately 4 grams of glucose are present in the blood of humans at all times;[2] in fasted individuals, blood glucose is maintained constant at this level at the expense of glycogen stores in the liver and skeletal muscle.[2] Glycogen stores in skeletal muscle serve as a form of energy storage for the muscle itself;[2] however, the breakdown of muscle glycogen impedes muscle glucose uptake, thereby increasing the amount of blood glucose available for use in other tissues.[2] Liver glycogen stores serve as a store of glucose for use throughout the body, particularly the central nervous system.[2] The human brain consumes approximately 60% of blood glucose in fasted, sedentary individuals.[2]
Glycogen is the analogue of starch, a glucose polymer that functions as energy storage in plants. It has a structure similar to amylopectin (a component of starch), but is more extensively branched and compact than starch. Both are white powders in their dry state. Glycogen is found in the form of granules in the cytosol/cytoplasm in many cell types, and plays an important role in the glucose cycle. Glycogen forms an energy reserve that can be quickly mobilized to meet a sudden need for glucose, but one that is less compact than the energy reserves of triglycerides (lipids).
Glycogen is a branched biopolymer consisting of linear chains of glucose residues with an average chain length of approximately 8–12 glucose units.[11] Glucose units are linked together linearly by α(1→4) glycosidic bonds from one glucose to the next. Branches are linked to the chains from which they are branching off by α(1→6) glycosidic bonds between the first glucose of the new branch and a glucose on the stem chain.[12]
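As a rough illustration of the branched structure described above (linear α(1→4) chains of roughly 8–12 residues joined to parent chains at α(1→6) branch points), the toy model below represents a glycogen-like molecule as a tree of chains. It is a structural sketch only, not a chemical simulation, and the example chain lengths are made up.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Chain:
    """A linear run of glucose residues joined by alpha(1->4) bonds."""
    length: int                                             # number of glucose units (typically ~8-12)
    branches: List["Chain"] = field(default_factory=list)   # alpha(1->6)-linked branch chains

    def total_residues(self) -> int:
        return self.length + sum(b.total_residues() for b in self.branches)

# A tiny glycogen-like tree: a core chain with two branches, one of which branches again.
core = Chain(12, branches=[Chain(10), Chain(9, branches=[Chain(8)])])
print("total glucose residues:", core.total_residues())  # 12 + 10 + 9 + 8 = 39
```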
Due to the way glycogen is synthesised, every glycogen granule has at its core a glycogenin protein.[13]
Glycogen in muscle, liver, and fat cells is stored in a hydrated form, composed of three or four parts of water per part of glycogen associated with 0.45 millimoles of potassium per gram of glycogen.[4]
As a meal containing carbohydrates or protein is eaten and digested, blood glucose levels rise, and the pancreas secretes insulin. Blood glucose from the portal vein enters liver cells (hepatocytes). Insulin acts on the hepatocytes to stimulate the action of several enzymes, including glycogen synthase. Glucose molecules are added to the chains of glycogen as long as both insulin and glucose remain plentiful. In this postprandial or "fed" state, the liver takes in more glucose from the blood than it releases.
After a meal has been digested and glucose levels begin to fall, insulin secretion is reduced, and glycogen synthesis stops. When it is needed for energy, glycogen is broken down and converted again to glucose. Glycogen phosphorylase is the primary enzyme of glycogen breakdown. For the next 8–12 hours, glucose derived from liver glycogen is the primary source of blood glucose used by the rest of the body for fuel.
Glucagon, another hormone produced by the pancreas, in many respects serves as a countersignal to insulin. In response to insulin levels being below normal (when blood levels of glucose begin to fall below the normal range), glucagon is secreted in increasing amounts and stimulates both glycogenolysis (the breakdown of glycogen) and gluconeogenesis (the production of glucose from other sources).
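The two preceding paragraphs describe a simple negative-feedback arrangement: glucose above a set point drives insulin release and glycogen synthesis, while glucose below it drives glucagon release and glycogen breakdown. The toy loop below caricatures that behaviour; all thresholds, rates, and units are invented for illustration and it is in no way a physiological model.

```python
# Toy feedback loop: glucose above a set point drives glycogen synthesis (insulin-like),
# glucose below it drives glycogen breakdown (glucagon-like). All numbers are invented.
glucose, glycogen = 7.0, 50.0   # arbitrary units
SET_POINT, RATE = 5.0, 0.2

for step in range(10):
    if glucose > SET_POINT and glycogen < 100:
        moved = min(RATE * (glucose - SET_POINT), glucose)   # store excess glucose
        glucose -= moved
        glycogen += moved
    elif glucose < SET_POINT and glycogen > 0:
        moved = min(RATE * (SET_POINT - glucose), glycogen)  # release stored glucose
        glucose += moved
        glycogen -= moved
    print(f"step {step}: glucose={glucose:.2f}, glycogen={glycogen:.2f}")
```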
Muscle cell glycogen appears to function as an immediate reserve source of available glucose for muscle cells. Other cells that contain small amounts use it locally, as well. As muscle cells lack glucose-6-phosphatase, which is required to pass glucose into the blood, the glycogen they store is available solely for internal use and is not shared with other cells. This is in contrast to liver cells, which, on demand, readily do break down their stored glycogen into glucose and send it through the blood stream as fuel for other organs.[citation needed]
Glycogen was discovered by Claude Bernard. His experiments showed that the liver contained a substance that could give rise to reducing sugar by the action of a "ferment" in the liver. By 1857, he described the isolation of a substance he called "la matière glycogène", or "sugar-forming substance". Soon after the discovery of glycogen in the liver, A. Sanson found that muscular tissue also contains glycogen. The empirical formula for glycogen, (C6H10O5)n, was established by Kekulé in 1858.[14]
Glycogen synthesis is, unlike its breakdown, endergonic: it requires the input of energy. Energy for glycogen synthesis comes from uridine triphosphate (UTP), which reacts with glucose-1-phosphate, forming UDP-glucose, in a reaction catalysed by UTP-glucose-1-phosphate uridylyltransferase. Glycogen is synthesized from monomers of UDP-glucose initially by the protein glycogenin, which has two tyrosine anchors for the reducing end of glycogen, since glycogenin is a homodimer. After about eight glucose molecules have been added to a tyrosine residue, the enzyme glycogen synthase progressively lengthens the glycogen chain using UDP-glucose, adding α(1→4)-bonded glucose. The glycogen branching enzyme catalyzes the transfer of a terminal fragment of six or seven glucose residues from a nonreducing end to the C-6 hydroxyl group of a glucose residue deeper into the interior of the glycogen molecule. The branching enzyme can act upon only a branch having at least 11 residues, and the enzyme may transfer to the same glucose chain or adjacent glucose chains.
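A minimal sketch of the branching rule just described: glycogenin primes roughly eight residues, glycogen synthase extends one residue at a time, and the branching enzyme moves a block of six or seven residues off a chain only once that chain has at least eleven. The residue counts come from the paragraph above; everything else (random choice of chain to extend, number of elongation rounds) is schematic bookkeeping, not chemistry.

```python
import random

# Schematic chain-and-branch bookkeeping for glycogen synthesis.
# Each chain is tracked only as a residue count; branches become new chains.
PRIMER = 8           # residues added onto glycogenin before synthase takes over
MIN_FOR_BRANCH = 11  # branching enzyme acts only on chains of >= 11 residues
BLOCK = 7            # residues transferred to start a new branch (6-7 in the text)

chains = [PRIMER]
for _ in range(30):                      # 30 rounds of elongation
    i = random.randrange(len(chains))
    chains[i] += 1                       # glycogen synthase adds one glucose
    if chains[i] >= MIN_FOR_BRANCH:      # branching enzyme relocates a terminal block
        chains[i] -= BLOCK
        chains.append(BLOCK)

print("chains:", chains, "| total residues:", sum(chains))
```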
Glycogen is cleaved from the nonreducing ends of the chain by the enzyme glycogen phosphorylase to produce monomers of glucose-1-phosphate:

glycogen (n residues) + Pi ⇌ glycogen (n−1 residues) + glucose-1-phosphate
In vivo, phosphorolysis proceeds in the direction of glycogen breakdown because the ratio of phosphate and glucose-1-phosphate is usually greater than 100.[15] Glucose-1-phosphate is then converted to glucose 6-phosphate (G6P) by phosphoglucomutase. A special debranching enzyme is needed to remove the α(1→6) branches in branched glycogen and reshape the chain into a linear polymer. The G6P monomers produced have three possible fates: they can enter glycolysis and be used as fuel; they can enter the pentose phosphate pathway to yield NADPH and five-carbon sugars; or, in the liver and kidney, they can be dephosphorylated back to glucose by glucose-6-phosphatase and released into the blood.
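The remark that phosphorolysis proceeds toward breakdown because the phosphate to glucose-1-phosphate ratio exceeds 100 can be made quantitative with the mass-action relation ΔG = ΔG°′ + RT·ln([G1P]/[Pi]). The standard free-energy value used below (+3.1 kJ/mol) is a representative textbook figure, not taken from this article, and should be treated as an assumption.

```python
import math

# Effect of the in-vivo [Pi]/[G1P] ratio on the free energy of glycogen phosphorolysis.
# delta_G0 is a representative textbook value (assumed), not taken from the article.
R = 8.314e-3          # kJ/(mol*K)
T = 310.0             # approximate body temperature, K
delta_G0 = 3.1        # kJ/mol, standard free energy of phosphorolysis (assumed)

for ratio in (1, 10, 100, 300):           # [Pi] / [glucose-1-phosphate]
    dG = delta_G0 + R * T * math.log(1 / ratio)
    print(f"[Pi]/[G1P] = {ratio:>3}: dG = {dG:+.1f} kJ/mol")
```

At a ratio of 100 or more the free-energy change is clearly negative, consistent with breakdown being the favoured direction in vivo.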
The most common disease in which glycogen metabolism becomes abnormal is diabetes, in which, because of abnormal amounts of insulin, liver glycogen can be abnormally accumulated or depleted. Restoration of normal glucose metabolism usually normalizes glycogen metabolism, as well.
In hypoglycemia caused by excessive insulin, liver glycogen levels are high, but the high insulin levels prevent the glycogenolysis necessary to maintain normal blood sugar levels. Glucagon is a common treatment for this type of hypoglycemia.
Various inborn errors of metabolism are caused by deficiencies of enzymes necessary for glycogen synthesis or breakdown. These are collectively referred to as glycogen storage diseases.
Long-distance athletes, such as marathon runners, cross-country skiers, and cyclists, often experience glycogen depletion, where almost all of the athlete's glycogen stores are depleted after long periods of exertion without sufficient carbohydrate consumption. This phenomenon is referred to as "hitting the wall".
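Combining the storage figures given earlier (roughly 100–120 g of liver glycogen and about 400 g of muscle glycogen in a 70 kg adult) with the usual figure of about 4 kcal per gram of carbohydrate gives a rough sense of why stores run out during long efforts. The energy cost per mile used below is an assumed round number for running, not a value from the article.

```python
# Back-of-the-envelope estimate of total glycogen energy versus a long run.
# Storage figures follow the article; kcal/g and kcal/mile are assumed round numbers.
liver_g, muscle_g = 110, 400        # grams of glycogen (70 kg adult, approx.)
kcal_per_gram = 4                   # energy density of carbohydrate (assumed)
kcal_per_mile = 100                 # typical running cost (assumed)

total_kcal = (liver_g + muscle_g) * kcal_per_gram
print(f"Stored glycogen energy: ~{total_kcal} kcal")
print(f"Roughly {total_kcal / kcal_per_mile:.0f} miles of running before stores run out")
```

The result, on the order of 2,000 kcal or roughly 20 miles, is broadly consistent with where marathon runners report "hitting the wall".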
Glycogen depletion can be forestalled in three possible ways. First, during exercise, carbohydrates with the highest possible rate of conversion to blood glucose (high glycemic index) are ingested continuously. The best possible outcome of this strategy replaces about 35% of glucose consumed at heart rates above about 80% of maximum. Second, through endurance training adaptations and specialized regimens (e.g. fasting low-intensity endurance training), the body can condition type I muscle fibers to improve both fuel use efficiency and workload capacity to increase the percentage of fatty acids used as fuel,[16][17][citation needed] sparing carbohydrate use from all sources. Third, by consuming large quantities of carbohydrates after depleting glycogen stores as a result of exercise or diet, the body can increase storage capacity of intramuscular glycogen stores.[18][19][20] This process is known as carbohydrate loading. In general, glycemic index of carbohydrate source does not matter since muscular insulin sensitivity is increased as a result of temporary glycogen depletion.[21][22]
When experiencing glycogen debt, athletes often experience extreme fatigue to the point that it is difficult to move. As a reference, the very best professional cyclists in the world will usually finish a 4- to 5-hr stage race right at the limit of glycogen depletion using the first three strategies.
When athletes ingest both carbohydrate and caffeine following exhaustive exercise, their glycogen stores tend to be replenished more rapidly;[23][24][25] however, the minimum dose of caffeine at which there is a clinically significant effect on glycogen repletion has not been established.[25]
What we call the members of rajya sabha?
Member of Parliament🚨
A Member of Parliament of the Rajya Sabha (abbreviated: MP) is the representative of the Indian states to the upper house of the Parliament of India (Rajya Sabha). Rajya Sabha MPs are elected by the electoral college of the elected members of the State Assembly with a system of proportional representation by a single transferable vote. The Parliament of India is bicameral with two houses: the Rajya Sabha (upper house, i.e. Council of States) and the Lok Sabha (lower house, i.e. House of the People). The Rajya Sabha has fewer members than the Lok Sabha and more restricted powers than the lower house.[1] Unlike the Lok Sabha, the Rajya Sabha is a permanent body that cannot be dissolved; its members serve terms of six years.[2]
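Because members are chosen by MLAs under proportional representation with a single transferable vote, the arithmetic of the count turns on a quota. The sketch below computes the Droop quota and a single simplified counting round with hypothetical party strengths; surplus transfers and lower preferences are ignored for brevity, so it is an illustration of the quota idea rather than a full STV count.

```python
# Simplified single-transferable-vote arithmetic for a Rajya Sabha election.
# Party strengths and the number of seats are hypothetical; surplus and
# preference transfers are omitted for brevity.
def droop_quota(valid_votes: int, seats: int) -> int:
    """Minimum first-preference votes that guarantee a seat."""
    return valid_votes // (seats + 1) + 1

assembly = {"Party A": 120, "Party B": 90, "Party C": 30}   # MLAs per party (assumed)
seats = 4
quota = droop_quota(sum(assembly.values()), seats)
print(f"Droop quota: {quota} votes")

for party, mlas in assembly.items():
    print(f"{party}: can elect {mlas // quota} member(s) outright on first preferences")
```

In this made-up example three of the four seats are filled outright; the last would depend on the transfers the sketch leaves out.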
Broad responsibilities of the Members of Parliament of the Rajya Sabha are:
The Rajya Sabha enjoys certain special powers, which in effect give special powers and responsibilities to Rajya Sabha MPs. The special powers are:
A person must satisfy all of the following conditions to be qualified to become a Member of Parliament of the Rajya Sabha:
A person would be ineligible for being a Member of the Rajya Sabha if the person:
"Strength of Member of parliament in Lok Sabha as defined in Article 80 of the Constitution of India",
How did charcot marie tooth disease get its name?
named after those who classically described it🚨Charcot–Marie–Tooth disease (CMT) is one of the hereditary motor and sensory neuropathies, a group of varied inherited disorders of the peripheral nervous system characterized by progressive loss of muscle tissue and touch sensation across various parts of the body. Currently incurable, this disease is the most commonly inherited neurological disorder, and affects approximately 1 in 2,500 people.[1][2] CMT was previously classified as a subtype of muscular dystrophy.[1]
Symptoms of CMT usually begin in early childhood or early adulthood, but can begin later. Some people do not experience symptoms until their early thirties or forties. Usually, the initial symptom is foot drop early in the course of the disease. This can also cause hammer toe, where the toes are always curled. Wasting of muscle tissue of the lower parts of the legs may give rise to a "stork leg" or "inverted champagne bottle" appearance. Weakness in the hands and forearms occurs in many people as the disease progresses.
Loss of touch sensation in the feet, ankles and legs, as well as in the hands, wrists and arms occur with various types of the disease. Early and late onset forms occur with 'on and off' painful spasmodic muscular contractions that can be disabling when the disease activates. High-arched feet (pes cavus) or flat-arched feet (pes planus) are classically associated with the disorder.[3] Sensory and proprioceptive nerves in the hands and feet are often damaged, while unmyelinated pain nerves are left intact. Overuse of an affected hand or limb can activate symptoms including numbness, spasm, and painful cramping.
Symptoms and progression of the disease can vary. Involuntary grinding of teeth as well as squinting are prevalent and often go unnoticed by the person affected. Breathing can be affected in some; so can hearing, vision, as well as the neck and shoulder muscles. Scoliosis is common, causing hunching and loss of height. Hip sockets can be malformed. Gastrointestinal problems can be part of CMT,[4][5] as can difficulty chewing, swallowing, and speaking (due to atrophy of vocal cords).[6] A tremor can develop as muscles waste. Pregnancy has been known to exacerbate CMT, as well as severe emotional stress. Patients with CMT must avoid periods of prolonged immobility such as when recovering from a secondary injury as prolonged periods of limited mobility can drastically accelerate symptoms of CMT.[7]
Pain due to postural changes, skeletal deformations, muscle fatigue and cramping is fairly common in people with CMT. It can be mitigated or treated by physical therapies, surgeries, and corrective or assistive devices. Analgesic medications may also be needed if other therapies do not provide relief from pain.[8] Neuropathic pain is often a symptom of CMT, though, like other symptoms of CMT, its presence and severity varies from case to case. For some people, pain can be significant to severe and interfere with daily life activities. However, pain is not experienced by all people with CMT. When neuropathic pain is present as a symptom of CMT, it is comparable to that seen in other peripheral neuropathies, as well as postherpetic neuralgia and complex regional pain syndrome, among other diseases.[9]
Charcot–Marie–Tooth disease is caused by mutations that cause defects in neuronal proteins. Nerve signals are conducted by an axon with a myelin sheath wrapped around it. Most mutations in CMT affect the myelin sheath, but some affect the axon.
The most common cause of CMT (70–80% of the cases) is the duplication of a large region on the short arm of chromosome 17 that includes the gene PMP22. Some mutations affect the gene MFN2, which codes for a mitochondrial protein. Cells contain separate sets of genes in their nucleus and in their mitochondria. In nerve cells, the mitochondria travel down the long axons. In some forms of CMT, mutated MFN2 causes the mitochondria to form large clusters, or clots, which are unable to travel down the axon towards the synapses. This prevents the synapses from functioning.[10]
CMT is divided into the primary demyelinating neuropathies (CMT1, CMT3, and CMT4) and the primary axonal neuropathies (CMT2), with frequent overlap. Another cell involved in CMT is the Schwann cell, which creates the myelin sheath, by wrapping its plasma membrane around the axon.[11]
Neurons, Schwann cells, and fibroblasts work together to create a functional nerve. Schwann cells and neurons exchange molecular signals that regulate survival and differentiation. These signals are disrupted in CMT.[11]
Demyelination of Schwann cells causes abnormal axon structure and function. It may cause axon degeneration, or it may simply cause axons to malfunction.[1]
The myelin sheath allows nerve cells to conduct signals faster. When the myelin sheath is damaged, nerve signals are slower, and this can be measured by a common neurological test, electromyography. When the axon is damaged, on the other hand, this results in a reduced compound muscle action potential (CMAP).[12]
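Slowed conduction from demyelination is what nerve-conduction studies measure in practice: a motor conduction velocity is computed as the distance between two stimulation sites divided by the difference in response latencies. The sketch below uses illustrative numbers, not patient data, and the velocities shown are only typical orders of magnitude.

```python
# Motor nerve conduction velocity from a two-site stimulation study.
# All values are illustrative, not measured patient data.
def conduction_velocity(distance_mm: float, proximal_ms: float, distal_ms: float) -> float:
    """Velocity (m/s) = distance between stimulation sites / latency difference."""
    return distance_mm / (proximal_ms - distal_ms)

# Healthy motor nerves typically conduct at very roughly 50-60 m/s;
# demyelinating forms of CMT commonly slow this markedly.
normal = conduction_velocity(250, 8.0, 3.5)         # ~55.6 m/s
demyelinated = conduction_velocity(250, 16.0, 4.0)  # ~20.8 m/s
print(f"normal: {normal:.1f} m/s, demyelinated: {demyelinated:.1f} m/s")
```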
CMT can be diagnosed through symptoms, through measurement of the speed of nerve impulses (nerve conduction studies), through biopsy of the nerve, and through DNA testing. DNA testing can give a definitive diagnosis, but not all the genetic markers for CMT are known. CMT is first noticed when someone develops lower leg weakness, such as foot drop; or foot deformities, including hammertoes and high arches. But signs alone do not lead to diagnosis. Patients must be referred to a physician specialising in neurology or rehabilitation medicine. To see signs of muscle weakness, the neurologist asks patients to walk on their heels or to move part of their leg against an opposing force. To identify sensory loss, the neurologist tests for deep tendon reflexes, such as the knee jerk, which are reduced or absent in CMT. The doctor also asks about family history, because CMT is hereditary. The lack of family history does not rule out CMT, but helps rule out other causes of neuropathy, such as diabetes or exposure to certain chemicals or drugs.[13]
In 2010, CMT was one of the first diseases where the genetic cause of a particular patient's disease was precisely determined by sequencing the whole genome of an affected individual. This was done by scientists employed by the Charcot Marie Tooth Association (CMTA).[14][15] Two mutations were identified in a gene, SH3TC2, known to cause CMT. Researchers then compared the affected patient's genome to the genomes of the patient's mother, father, and seven siblings with and without the disease. The mother and father each had one normal and one mutant copy of this gene, and had mild or no symptoms. The offspring that inherited two mutant genes presented fully with the disease.
The constant cycle of demyelination and remyelination, which occurs in CMT, can lead to the formation of layers of myelin around some nerves, termed an "onion bulb". These are also seen in chronic inflammatory demyelinating polyneuropathy (CIDP).[16] Muscles show fiber type grouping, a similarly non-specific finding that indicates a cycle of denervation/reinnervation. Normally, type I and type II muscle fibers show a checkerboard-like random distribution. However, when reinnervation occurs, the group of fibers associated with one nerve are of the same type. The standard for indicating fiber type is histoenzymatic adenosine triphosphatase (ATPase at pH 9.4).[17]
CMT is a result of genetic mutations in a number of genes.[18] Based on the affected gene, CMT can be categorized into types and subtypes.[15]
Often the most important goal for patients with CMT is to maintain movement, muscle strength, and flexibility. Therefore, an interprofessional team approach with occupational therapy, physical therapy, an orthotist, a podiatrist, and/or an orthopedic surgeon is recommended.[19] PT typically focuses on muscle strength training and muscle and ligament stretching, while OT can provide education on energy conservation strategies and moderate aerobic exercise in activities of daily living. Physical therapy should be involved in designing an exercise program that fits a person's personal strengths and flexibility. Bracing can also be used to correct problems caused by CMT. An orthotist may address gait abnormalities by prescribing the use of ankle-foot orthoses (AFOs). These orthoses help control foot drop and ankle instability and often provide a better sense of balance for patients. Appropriate footwear is also very important for people with CMT, but they often have difficulty finding well-fitting shoes because of their high arched feet and hammer toes. Due to the lack of good sensory reception in the feet, CMT patients may also need to see a podiatrist for help in trimming nails or removing calluses that develop on the pads of the feet. A final decision a patient can make is to have surgery. Using a podiatrist or an orthopedic surgeon, patients can choose to stabilize their feet or correct progressive problems. These procedures include straightening and pinning the toes, lowering the arch, and sometimes, fusing the ankle joint to provide stability.[7] CMT patients must take extra care to avoid falling because fractures take longer to heal in someone with an underlying disease process. Additionally, the resulting inactivity may cause the CMT to worsen.[7]
The Charcot-Marie-Tooth Association classifies the chemotherapy drug vincristine as a "definite high risk" and states that "vincristine has been proven hazardous and should be avoided by all CMT patients, including those with no symptoms."[20]
There are also several corrective surgical procedures that can be done to improve physical condition.[21]
The severity of symptoms varies widely even for the same type of CMT. There have been cases of monozygotic twins with varying levels of disease severity, showing that identical genotypes are associated with different levels of severity (see penetrance). Some patients are able to live a normal life and are almost or entirely asymptomatic.[22] A 2007 review stated that "Life expectancy is not known to be altered in the majority of cases".[23]
The disease is named after those who classically described it: Jean-Martin Charcot (1825–1893), his pupil Pierre Marie (1853–1940) ("Sur une forme particulière d'atrophie musculaire progressive, souvent familiale, débutant par les pieds et les jambes et atteignant plus tard les mains", Revue médicale, Paris, 6: 97–138, 1886), and Howard Henry Tooth (1856–1925) ("The peroneal type of progressive muscular atrophy", dissertation, London, 1886).
When did recreational pot become legal in california?
November 2016🚨Cannabis in California is permitted, subject to regulations, for both medical and recreational use. In recent decades the state has led the country in efforts to legalize cannabis, holding the first (unsuccessful) vote to decriminalize it in 1972 and, through Proposition 215, becoming the first state to legalize it for medical use in 1996. In the November 2016 election, voters passed an amendment legalizing recreational use of marijuana.[1]
Industrial hemp was first grown in what is now known as California as early as 1801 in what is now San Jose, with the state producing 13,000 pounds in 1807, and 220,000 pounds in 1810.[2]
The Poison Act was passed in 1907, and in 1913 an amendment (Stats. 1913, Ch. 342, p. 697) was made to make possession of "extracts, tinctures, or other narcotic preparations of hemp, or loco-weed, their preparations and compounds" a misdemeanor.[3] There's no evidence that the law was ever used or intended to restrict pharmaceutical cannabis; instead it was a legislative mistake, and in 1915 another amendment (Stats. 1915, Ch. 604, pp. 1067–1068) forbade the sale or possession of "flowering tops and leaves, extracts, tinctures and other narcotic preparations of hemp or loco weed (Cannabis sativa), Indian hemp" except with a prescription.[3] Both bills were drafted and supported by the California State Board of Pharmacy.[3]
In 1925, possession, which had previously been treated the same as distribution, became punishable by up to 6 years in prison, and black market sale, which had initially been a misdemeanor punishable by a $100–$400 fine and/or 50–180 days in jail for first offenders, became punishable by 6 months–6 years.[3] In 1927, the laws designed to target opium usage were finally extended to Indian hemp.[3] In 1929, second offenses for possession became punishable by sentences of 6 months–10 years.[3] In 1937, cannabis cultivation became a separate offense.[3] In 1954, penalties for marijuana possession were hiked to a minimum 1–10 years in prison, and sale was made punishable by 5–15 years with a mandatory 3 years before eligibility for parole; two prior felonies raised the maximum sentences for both offenses to life imprisonment.[3]
Proposition 19, a ballot proposition previously attempting to decriminalize marijuana, was defeated in the November 1972 state election by a 66.5% majority.[4] In 1973, California's neighboring state of Oregon became the first state to decriminalize cannabis.[5]
Decriminalization of marijuana, which treats the possession of small amounts of the drug as a civil, rather than a criminal, offense, was established in July 1975 when the Legislature passed Senate Bill 95, the Moscone Act.[3][5][6][7] SB 95 made possession of one ounce (28.5 grams) of marijuana a misdemeanor punishable by a $100 fine[8] (with the assessments added to fines in California, this will total about $480), with higher punishments for amounts greater than one ounce, for possession on school grounds, or for cultivation.[9]
Proposition 36 (also known as the Substance Abuse and Crime Prevention Act of 2000) was approved by 61% of voters, requiring that "first and second offense drug violators be sent to drug treatment programs instead of facing trial and possible incarceration."[10]
On September 30, 2010, Governor Arnold Schwarzenegger signed into law CA State Senate Bill 1449, which further reduced the charge of possession of one ounce of cannabis or less, from a misdemeanor to an infraction, similar to a traffic violation: a maximum of a $100 fine and no mandatory court appearance or criminal record.[11] The law became effective January 1, 2011.
California's medical cannabis program was established when state voters approved Proposition 215 (also known as the Compassionate Use Act of 1996)[12] on the November 5, 1996 ballot with a 55% majority.[13] The proposition added Section 11362.5 to the California Health and Safety Code, modifying state law to allow people with cancer, anorexia, AIDS, spasticity, glaucoma, arthritis, migraines or other chronic illnesses the "legal right to obtain or grow, and use marijuana for medical purposes when recommended by a doctor". The law also mandated that doctors not be punished for recommending the drug, and required that federal and state governments work together "to implement a plan to provide for the safe and affordable distribution of marijuana to all patients in medical need."[12][13]
Vague wording became a major criticism of Prop. 215, though the law has since been clarified through California Supreme Court rulings and the passage of subsequent laws. The first such clarification came through legislation that established statewide guidelines for Proposition 215, Senate Bill 420, in January 2003. To differentiate patients from non-patients, Governor Gray Davis signed California Senate Bill 420 (colloquially known as the Medical Marijuana Program Act) in 2003, establishing an identification card system for medical marijuana patients. SB 420 also allows for the formation of patient collectives, or non-profit organizations, to provide the drug to patients. In January 2010, the California Supreme Court ruled in People v. Kelly that SB 420 did not limit the quantity a patient can possess, and all possession limits on medical marijuana in California were lifted.
On October 7, 2011, an extensive and coordinated crackdown on California's marijuana dispensaries was announced by the chief prosecutors of the state's four federal districts,[14] which caused widespread panic within the dispensary community; this subsided over the time that followed.
In February 2009, Tom Ammiano introduced the Marijuana Control, Regulation, and Education Act, which would have removed penalties under state law for the cultivation, possession, and use of marijuana for persons aged 21 or older. When the Assembly Public Safety Committee approved the bill on a 4 to 3 vote in January 2010, this marked the first time in United States history that a bill legalizing marijuana passed a legislative committee. While the legislation failed to reach the Assembly floor, Ammiano stated his plans to reintroduce the bill later in the year, depending on the success of Proposition 19, the Regulate, Control and Tax Cannabis Act.[15] According to Time, California tax collectors estimated the bill would have raised about $1.3 billion a year in revenue.
In November 2010, California voters rejected Proposition 19, by a vote of 53.5% to 46.5%, an initiative that would have made possession and cultivation of cannabis for recreational use legal for adults aged 21 or older, and would have regulated it similarly to alcohol.[16]
Critics such as John Lovell, lobbyist for the California Peace Officers' Association, argued that too many people already struggle with alcohol and drug abuse, and legalizing another mind-altering substance would lead to a surge of use, making problems worse.[17] Apart from helping the state's budget by enforcing a tax on the sale of cannabis, proponents of the bill argued that legalization would reduce the amount of criminal activity associated with the drug.
On November 8, 2016, Proposition 64, also known as the Adult Use of Marijuana Act, passed by a vote of 57% to 43%, legalizing the sale and distribution of cannabis in both a dry and concentrated form. California is one of eight states where recreational cannabis usage is legal, including Alaska, Colorado, Maine, Massachusetts, Oregon, Washington, and Nevada. Adults are allowed to possess up to one ounce of cannabis for recreational use and can grow up to six live plants individually or more commercially with a license. Licenses will be issued for cultivation and business establishment beginning in 2018.
California was the first state to establish a medical marijuana program, enacted by Proposition 215 in 1996 and Senate Bill 420 in 2003. Prop. 215, also known as the Compassionate Use Act allows people the right to obtain and use cannabis for any illness if they obtain a recommendation from a doctor. California's Supreme Court has ruled there are no specified limits as to what a patient may possess in their private residence if the cannabis is strictly for the patient's own use.[18] Medical cannabis identification cards are issued through the California Department of Public Health's Medical Marijuana Program (MMP). The program began in three counties in May 2005, and expanded statewide in August of the same year. 37,236 cards have been issued throughout 55 counties as of December 2009. However, cannabis dispensaries within the state accept recommendations, with an embossed license, from a doctor who has given the patient an examination and believes cannabis would be beneficial for their ailment.
Critics of California's medical cannabis program argued that the program essentially gave cannabis quasi-legality, as "anyone can obtain a recommendation for medical marijuana at any time for practically any ailment".[16] Acknowledging that there were instances in which the system was abused and that laws could be improved, Stephen Gutwillig of the Drug Policy Alliance[19] insisted that the passage of Proposition 215 was "nothing short of incredible". Gutwillig argued that because of the law, 200,000 patients in the state had safe and affordable access to medical cannabis to relieve pain and treat medical conditions, without having to risk arrest or buy off the black market.[16] Thirteen other U.S. states have followed California's lead to enact medical marijuana laws of their own: Alaska, Colorado, Hawaii, Maine, Michigan, Montana, Nevada, New Jersey, New Mexico, Oregon, Rhode Island, Vermont, and Washington.[20]
Recreational usage of marijuana is legal under Proposition 64. Immediately upon certification of the November 2016 ballot results, adults aged 21 or older were allowed to:
Users may not:
Legal sales for non-medical use are allowed by law beginning January 1, 2018, following formulation of new regulations on retail market by the state's Bureau of Medical Cannabis Regulation (to be renamed Bureau of Marijuana Control).[22][23]
Proposition 64 is not meant in any way to affect, amend, or restrict the statutes provided for medical cannabis in California under Proposition 215.[24]
What channel is the 100 tv show on?
The CW🚨The 100 (pronounced The Hundred[1]) is an American post-apocalyptic science fiction drama television series that premiered on March 19, 2014, on The CW.[2] The series, developed by Jason Rothenberg, is loosely based on the 2013 book of the same name, the first in a series by Kass Morgan.[3]
The series follows a group of post-apocalyptic survivors, representing many age groups: Clarke Griffin (Eliza Taylor), Bellamy Blake (Bob Morley), Octavia Blake (Marie Avgeropoulos), Jasper Jordan (Devon Bostick), Monty Green (Christopher Larkin), Raven Reyes (Lindsey Morgan), Finn Collins (Thomas McDonell), John Murphy (Richard Harmon), and Wells Jaha (Eli Goree). They are among the first people from a space habitat, "The Ark", to return to Earth after a devastating nuclear apocalypse. The series also focuses on Dr. Abby Griffin (Paige Turco), Clarke's mother; Marcus Kane (Henry Ian Cusick), a council member on the Ark; and Thelonious Jaha (Isaiah Washington), the Chancellor of the Ark and Wells' father.
In March 2016, The 100 was renewed for a fourth season of 13 episodes, which premiered on February 1, 2017.[4][5][6] In March 2017, The CW renewed the series for a fifth season.[7]
The series is set 97 years after a devastating nuclear apocalypse wiped out almost all life on Earth. Over 2,400 survivors live on a single massive station in Earth's orbit called "The Ark". After the Ark's life-support systems are found to be failing, 100 juvenile prisoners are sent to the surface in a last attempt to determine whether Earth is habitable. They discover that not all of humanity was destroyed and that some survived the apocalypse: the Grounders, who live in clans locked in a power struggle; the Reapers, another group of Grounders who have become cannibals; and the Mountain Men, who live in Mount Weather, having locked themselves away before the apocalypse.
In the second season, the remaining 48 of the 100 are captured and taken to Mount Weather by the Mountain Men. It is eventually revealed that the Mountain Men are transfusing blood from imprisoned Grounders as an anti-radiation treatment. Medical tests of the 100 show an even more potent anti-radiation efficacy: their bone marrow will allow the Mountain Men to survive outside containment. Meanwhile, the inhabitants of the Ark have successfully crash-landed various stations on Earth and begun an alliance with the Grounders to save both groups of people, naming the main settlement at Alpha Station "Camp Jaha".
In the third season, Camp Jaha, now renamed "Arkadia", comes under new management when Pike, a former teacher and mentor, is elected as chancellor and begins a war with the Grounders. An artificial intelligence named A.L.I.E. was commanded to make life better for mankind and responded by solving the problem of human overpopulation by launching a nuclear apocalypse that devastated Earth. The AI takes over the minds of nearly everyone in Arkadia and Polis, the capital city of the Grounders. In the season three finale, Clarke manages to destroy A.L.I.E.
In the fourth season, hundreds of nuclear reactors around the world are melting down due to decades of neglect, which will result in 96% of the planet becoming uninhabitable. Clarke and the others investigate ways to survive the coming wave of radiation. When it is discovered that Nightbloods, descendants of the first, original Nightbloods, including Becca, the first Grounder Commander and creator of A.L.I.E., can metabolize radiation, Clarke and the others attempt to recreate the formula, but their attempts fail. An old bunker is discovered that can protect 1,200 people for over 5 years; each of the twelve clans selects a hundred people to stay in the bunker. A small group decides to return to space and survive in the remnants of the original Ark.
Post production, including ADR recording for the series, was done at the recording studio Cherry Beach Sound.[15]
David J. Peterson, who created Dothraki for Game of Thrones, developed the Trigedasleng language for the Grounders. Jason Rothenberg said it was similar to Creole English.[16]
In Canada, Season 1 of The 100 was licensed exclusively to Netflix. The series premiered on March 20, 2014, the day after the mid-season premiere of Season 1 on the CW.[17]
In New Zealand, the series premiered on TVNZ's on-demand video streaming service on March 21, 2014.[18]
In the UK and Ireland, The 100 premiered on E4 on July 7, 2014.[19] The first episode was viewed by an average audience of 1.39 million, making it the channel's biggest ever program launch. Season 2 premiered on January 6, 2015, and averaged 1,118,000 viewers.[20] Season 3 premiered on February 17, 2016.[21][22]
In Australia, The 100 was originally scheduled to premiere on Go![23] but instead premiered on Fox8 on September 4, 2014.[24] Season 2 premiered on January 8, 2015.[25]
On Rotten Tomatoes, the show's first season was certified "fresh", with 72% of professional reviews positive and the consensus: "Although flooded with stereotypes, the suspenseful atmosphere helps make The 100 a rare high-concept guilty pleasure." On Metacritic, the first season scores 63 out of 100 points, indicating "generally favorable reviews".[26]
The second season was met with more favorable reviews, holding a rating of 100% on Rotten Tomatoes.[27] In a review of the season 2 finale, Kyle Fowle of The A.V. Club said, "Very few shows manage to really push the boundaries of moral compromise in a way that feels legitimately difficult. Breaking Bad did it. The Sopranos did it. Game of Thrones has done it. Those shows never back down from the philosophical murkiness of their worlds, refusing to provide a tidy, happy ending if it doesn't feel right. With 'Blood Must Have Blood, Part Two,' The 100 has done the same, presenting a finale that doesn't shy away from the morally complex stakes it's spent a whole season building up".[28] Maureen Ryan of The Huffington Post, in another positive review, wrote: "I can say with some assurance that I've rarely seen a program demonstrate the kind of consistency and thematic dedication that The 100 has shown in its first two seasons. This is a show about moral choices and the consequences of those choices, and it's been laudably committed to those ideas from Day 1."[29]
On Rotten Tomatoes, the third season received an overall rating of 100%.[30] Maureen Ryan of Variety wrote in an early review of the third season: "When looking at the epic feel and varied array of stories on display in season three, which overtly and covertly recalls "The Lord of the Rings" saga in a number of ways, it's almost hard to recall how limited the scope and the ambitions of "The 100" were two years ago, when a rag-tag band of survivors first crash-landed on Earth. In season three (which the cast and showrunner previewed here), the show is more politically complicated than ever, and the world-building that accompanies the depiction of various factions, alliances and conflicts is generally admirable."[31] In a review of the season 3 finale "Perverse Instantiation: Part Two", Mariya Karimjee of Vulture.com wrote: "Every moment of this finale is pitch-perfect: the choreography of the fight scenes, the plotting and pacing, and the stunning way in which the episode finally reaches its apex. "Perverse Instantiation: Part Two" elevates the season's themes and pulls together its disparate story lines, setting us up nicely for season four."[32] In another review of the season 3 finale and the season overall, Kyle Fowle of The A.V. Club wrote: "Before we even get to tonight's action-packed finale of The 100, it needs to be said that this has been a rocky season. The first half of it was defined by shoddy character motivations and oversized villains. The second half of this season has done some work to bring the show back from the brink, focusing on the City Of Light and issues of free will and difficult moral choices, bringing some much needed depth to the third season. That work pays off with "Perverse Instantiation: Part Two," a thrilling, forward-thinking finale that provides some necessary closure to this season." He gave the finale itself an "A-" rating.[33]
Brian Lowry of The Boston Globe said: "Our attraction to Apocalypse TV runs deep, as our culture plays out different futuristic possibilities. That's still no reason to clone material, nor is it a reason to deliver characters who are little more than stereotypes."[34] Allison Keene of The Hollywood Reporter wrote a negative review, stating: "The sci-fi drama presents The CW's ultimate vision for humanity: an Earth populated only by attractive teenagers, whose parents are left out in space."[35] Kelly West of Cinema Blend gave it a more positive review while noting: "CW's Thrilling New Sci-fi Drama Is A Keeper. CW's The 100 seeks to explore that concept and more with a series that's about equal parts young adult drama, sci-fi adventure and thriller. It takes a little while for the series to warm up, but when The 100 begins to hit its stride, a unique and compelling drama begins to emerge."[36] IGN's editor Eric Goldman also gave the show a more positive review, writing: "Overcoming most of its early growing pains pretty quickly, The 100 was a very strong show by the end of its first season. But Season 2 elevated the series into the upper echelon, as the show become one of the coolest and most daring series on TV these days."[37] Maureen Ryan of Variety named the show one of the best of 2015.[38]
In 2016, the year Rolling Stone ranked the show #36 on its list of the "40 Best Science Fiction TV Shows of All Time",[39] the episode "Thirteen" attracted criticism when Lexa, one of the series' LGBT characters, was killed off. Critics and fans considered the death a continuation of a persistent trope in television in which LGBT characters are killed off far more often than others, implicitly portraying them as disposable, as existing only to serve the stories of straight characters, or to attract viewers. A widespread debate among writers and fans about the trope ensued, with Lexa's death cited as a prime example of the trope, and why it should end.[40][41][42] Showrunner Jason Rothenberg eventually wrote in response that "I (...) write and produce television for the real world where negative and hurtful tropes exist. And I am very sorry for not recognizing this as fully as I should have".[43]
An estimated 2.7 million American viewers watched the series premiere, which received an 18–49 rating of 0.9, making it the most-watched show in its time slot on The CW since the series Life Unexpected in 2010.[55]
Founded by the phoenicians around 800 b.c the city of?
Carthage🚨
Phoenicia[3] (from the Ancient Greek: Φοινίκη, Phoiníkē) was a thalassocratic ancient Semitic-speaking Mediterranean civilization that originated in the Levant in the west of the Fertile Crescent. Scholars generally agree that it included the coastal areas of today's Lebanon, northern Israel and southern Syria reaching as far north as Arwad, but there is some dispute as to how far south it went, the furthest suggested area being Ashkelon.[4] Its colonies later reached the Western Mediterranean, such as Cádiz in Spain and most notably Carthage in North Africa, and even the Atlantic Ocean. The civilization spread across the Mediterranean between 1500 BC and 300 BC.
Phoenicia is an ancient Greek term used to refer to the major export of the region, cloth dyed Tyrian purple from the Murex mollusc, and referred to the major Canaanite port towns; not corresponding precisely to Phoenician culture as a whole as it would have been understood natively. Their civilization was organized in city-states, similar to those of ancient Greece,[5] perhaps the most notable of which were Tyre, Sidon, Arwad, Berytus, Byblos and Carthage.[6] Each city-state was a politically independent unit, and it is uncertain to what extent the Phoenicians viewed themselves as a single nationality. In terms of archaeology, language, lifestyle, and religion there was little to set the Phoenicians apart as markedly different from other residents of the Levant, such as their close relatives and neighbors, the Israelites.[7]
Around 1050 BC, a Phoenician alphabet was used for the writing of Phoenician.[8] It became one of the most widely used writing systems, spread by Phoenician merchants across the Mediterranean world, where it evolved and was assimilated by many other cultures, including the Roman alphabet used by Western Civilization today.[9]
The name Phoenicians, like Latin Poeni (adj. poenicus, later punicus), comes from Greek Φοίνικες (Phoínikes). The word φοῖνιξ phoînix meant variably "Phoenician person", "Tyrian purple, crimson" or "date palm" and is attested with all three meanings already in Homer.[10] (The mythical bird phoenix also carries the same name, but this meaning is not attested until centuries later.) The word may be derived from φοινός phoinós "blood-red",[11] itself possibly related to φόνος phónos "murder".
It is difficult to ascertain which meaning came first, but it is understandable how Greeks may have associated the crimson or purple color of dates and dye with the merchants who traded both products. Robert S. P. Beekes has suggested a pre-Greek origin of the ethnonym.[12] The oldest attested form of the word in Greek may be the Mycenaean po-ni-ki-jo, po-ni-ki, possibly borrowed from Ancient Egyptian fnḫw[13] (literally "carpenters", "woodcutters"; likely in reference to the famed Lebanon cedars for which the Phoenicians were well-known), although this derivation is disputed.[14] The folk etymological association of Φοινίκη with φοῖνιξ mirrors that in Akkadian, which tied kinaḫni, kinaḫḫi "Canaan" to kinaḫḫu "red-dyed wool".[15][16]
The land was natively known as knʿn (compare Eblaite ka-na-na-um, ka-na-na) and its people as the knʿny. In the Amarna letters of the 14th century BC, people from the region called themselves Kenaani or Kinaani, in modern English understood as equivalent to Canaanite. Much later, in the sixth century BC, Hecataeus of Miletus writes that Phoenicia was formerly called χνα (khna), a name that Philo of Byblos later adopted into his mythology as his eponym for the Phoenicians: "Khna who was afterwards called Phoinix".[17] The ethnonym survived in North Africa until the fourth century AD (see Punic language).
Herodotus's account (written c. 440 BC) refers to the myths of Io and Europa.
According to the Persians best informed in history, the Phoenicians began the quarrel. These people, who had formerly dwelt on the shores of the Erythraean Sea, having migrated to the Mediterranean and settled in the parts which they now inhabit, began at once, they say, to adventure on long voyages, freighting their vessels with the wares of Egypt and Assyria ...
The Greek historian Strabo believed that the Phoenicians originated from Bahrain.[18] Herodotus also believed that the homeland of the Phoenicians was Bahrain.[19][20] This theory was accepted by the 19th-century German classicist Arnold Heeren who said that: "In the Greek geographers, for instance, we read of two islands, named Tyrus or Tylos, and Aradus, which boasted that they were the mother country of the Phoenicians, and exhibited relics of Phoenician temples."[21] The people of Tyre in South Lebanon in particular have long maintained Persian Gulf origins, and the similarity in the words "Tylos" and "Tyre" has been commented upon.[22] The Dilmun civilization thrived in Bahrain during the period 2200–1600 BC, as shown by excavations of settlements and Dilmun burial mounds. However, some claim there is little evidence of occupation at all in Bahrain during the time when such migration had supposedly taken place.[23]
Canaanite culture apparently developed in situ from the earlier Ghassulian chalcolithic culture. Ghassulian itself developed from the Circum-Arabian Nomadic Pastoral Complex, which in turn developed from a fusion of their ancestral Natufian and Harifian cultures with Pre-Pottery Neolithic B (PPNB) farming cultures, practicing the domestication of animals, during the 6200 BC climatic crisis which led to the Neolithic Revolution in the Levant.[24] Byblos is attested as an archaeological site from the Early Bronze Age. The Late Bronze Age state of Ugarit is considered quintessentially Canaanite archaeologically,[25] even though the Ugaritic language does not belong to the Canaanite languages proper.[26][27]
The Phoenician alphabet consists of 22 letters, all consonants.[9] Starting around 1050 BC,[27] this script was used for the writing of Phoenician, a Northern Semitic language. It is believed to be one of the ancestors of modern alphabets.[28][29] By their maritime trade, the Phoenicians spread the use of the alphabet to Anatolia, North Africa, and Europe, where it was adopted by the Greeks who developed it into an alphabetic script to have distinct letters for vowels as well as consonants.[30][31]
The name "Phoenician" is by convention given to inscriptions beginning around 1050 BC, because Phoenician, Hebrew, and other Canaanite dialects were largely indistinguishable before that time.[27][8] The so-called Ahiram epitaph, engraved on the sarcophagus of king Ahiram from about 1000 BC shows essentially a fully developed Phoenician script.[32][33][34]
The Phoenicians were among the first state-level societies to make extensive use of alphabets: the family of Canaanite languages, spoken by Israelites, Phoenicians, Amorites, Ammonites, Moabites and Edomites, was the first historically attested group of languages to use an alphabet, derived from the Proto-Canaanite script, to record their writings. The Proto-Canaanite script uses around 30 symbols but was not widely used until the rise of new Semitic kingdoms in the 13th and 12th centuries BC.[35] The Proto-Canaanite script is derived from Egyptian hieroglyphs.[36]
Fernand Braudel remarked in The Perspective of the World that Phoenicia was an early example of a "world-economy" surrounded by empires. The high point of Phoenician culture and sea power is usually placed c. 1200–800 BC. Archaeological evidence consistent with this understanding has been difficult to identify. A unique concentration in Phoenicia of silver hoards dated between 1200 and 800 BC, however, contains hacksilver with lead isotope ratios matching ores in Sardinia and Spain.[37] This metallic evidence agrees with the biblical attestation of a western Mediterranean Tarshish said to have supplied King Solomon of Israel with silver via Phoenicia, during the latter's heyday (see 'trade', below).[38]
Many of the most important Phoenician settlements had been established long before this: Byblos, Tyre in South Lebanon, Sidon, Simyra, Arwad, and Berytus, the capital of Lebanon, all appear in the Amarna tablets.
The league of independent city-state ports, with others on the islands and along other coasts of the Mediterranean Sea, was ideally suited for trade between the Levant area, rich in natural resources, and the rest of the ancient world. Around 1200 BC, a series of poorly understood events weakened and destroyed the adjacent Egyptian and Hittite empires. In the resulting power vacuum, a number of Phoenician cities rose as significant maritime powers.
Phoenician societies rested on three power-bases: the king; temples and their priests; and councils of elders. Byblos first became the predominant center from where the Phoenicians dominated the Mediterranean and Erythraean (Red) Sea routes. It was here that the first inscription in the Phoenician alphabet was found, on the sarcophagus of Ahiram (c. 1200 BC).[citation needed]
Later, Tyre in South Lebanon gained in power. One of its kings, the priest Ithobaal (887–856 BC), ruled Phoenicia as far north as Beirut, and part of Cyprus. Carthage was founded in 814 BC under Pygmalion of Tyre (820–774 BC).[citation needed] The collection of city-states constituting Phoenicia came to be characterized by outsiders and the Phoenicians as Sidonia or Tyria. Phoenicians and Canaanites alike were called Sidonians or Tyrians, as one Phoenician city came to prominence after another.
Persian King Cyrus the Great conquered Phoenicia in 539 BC. The Persians then divided Phoenicia into four vassal kingdoms: Sidon, Tyre, Arwad, and Byblos. They prospered, furnishing fleets for Persian kings. Phoenician influence declined after this. In 350 or 345 BC, a rebellion in Sidon led by Tennes was crushed by Artaxerxes III. Its destruction was described by Diodorus Siculus.
Alexander the Great took Tyre in 332 BC after the Siege of Tyre. Alexander was exceptionally harsh to Tyre, executing 2,000 of the leading citizens, but he maintained the king in power. He gained control of the other cities peacefully: the ruler of Aradus submitted; the king of Sidon was overthrown. The rise of Macedon gradually ousted the remnants of Phoenicia's former dominance over the Eastern Mediterranean trade routes. Phoenician culture disappeared entirely in the motherland. Carthage continued to flourish in Northwest Africa. It oversaw the mining of iron and precious metals from Iberia, and used its considerable naval power and mercenary armies to protect commercial interests. Rome finally destroyed it in 146 BC, at the end of the Punic Wars.
Following Alexander, the Phoenician homeland was controlled by a succession of Macedonian rulers: Laomedon (323 BC), Ptolemy I (320), Antigonus II (315), Demetrius (301), and Seleucus (296). Between 286 and 197 BC, Phoenicia (except for Aradus) fell to the Ptolemies of Egypt, who installed the high priests of Astarte as vassal rulers in Sidon (Eshmunazar I, Tabnit, Eshmunazar II).
In 197 BC, Phoenicia along with Syria reverted to the Seleucids. The region became increasingly Hellenized, although Tyre became autonomous in 126 BC, followed by Sidon in 111. Syria, including Phoenicia, was seized and ruled by king Tigranes the Great of Armenia from 82 until 69 BC, when he was defeated by Lucullus. In 65 BC, Pompey finally incorporated the territory as part of the Roman province of Syria. Phoenicia became a separate province c. 200 AD.
A study by Pierre Zalloua and others (2008) claimed that six subclades of haplogroup J2 (J-M172) in particular were "a Phoenician signature" amongst modern male populations tested in "the coastal Lebanese Phoenician Heartland and the broader area of the rest of the Levant (the 'Phoenician Periphery')", followed by "Cyprus and South Turkey; then Crete; then Malta and East Sicily; then South Sardinia, Ibiza, and Southern Spain; and, finally, Coastal Tunisia and cities like Tingris [sic] in Morocco". (Samples from other areas with significant Phoenician settlements, in Libya and southern France, could not be included.) This deliberately sequential sampling represented an attempt to develop a methodology that could link the documented historical expansion of a population with a particular geographic genetic pattern or patterns. The researchers suggested that the proposed genetic signature stemmed from "a common source of related lineages rooted in Lebanon".[40]
None of the geographical communities tested, Zalloua pointed out subsequently (2013), carried significantly higher levels of the proposed "Phoenician signature" than the others. This suggested that genetic variation preceded religious variation and divisions and, by the time it became Phoenicia, "Lebanon already had well-differentiated communities with their own genetic peculiarities, but not significant differences, and religions came as layers of paint on top." [41] Another study found evidence for genetic persistence on the island of Ibiza.[42]
Levantine Semites (Lebanese, Jews, Palestinians, and Syrians) are thought to be the closest surviving relatives of the ancient Phoenicians, with as much as 90% genetic similarity between modern Lebanese and Bronze Age Sidonians.[43][44][45]
In 2016, a sixth-century BC skeleton of a young Carthaginian man, excavated from a Punic tomb in Byrsa Hill, was found to belong to the rare U5b2c1 maternal haplogroup. The lineage of this "Young Man of Byrsa" is believed to represent early gene flow from Iberia to the Maghreb.[46]
The Phoenicians were among the greatest traders of their time and owed much of their prosperity to trade. At first, they traded mainly with the Greeks, trading wood, slaves, glass and powdered Tyrian purple. Tyrian purple was a violet-purple dye used by the Greek elite to color garments. In fact, the word Phoenician derives from the ancient Greek word phonios meaning "purple". As trading and colonizing spread over the Mediterranean, Phoenicians and Greeks seemed to have split that sea in two: the Phoenicians sailed along and eventually dominated the southern shore, while the Greeks were active along the northern shores. The two cultures rarely clashed, mainly in the Sicilian Wars, and eventually settled into two spheres of influence, the Phoenician in the west and the Greek to the east.
In the centuries after 1200?BC, the Phoenicians were the major naval and trading power of the region. Phoenician trade was founded on the Tyrian purple dye, a violet-purple dye derived from the hypobranchial gland of the Murex sea-snail, once profusely available in coastal waters of the eastern Mediterranean Sea but exploited to local extinction. James B. Pritchard's excavations at Sarepta in present-day Lebanon revealed crushed Murex shells and pottery containers stained with the dye that was being produced at the site. The Phoenicians established a second production center for the dye in Mogador, in present-day Morocco. Brilliant textiles were a part of Phoenician wealth, and Phoenician glass was another export ware.
To Egypt, where grapevines would not grow, the 8th-century Phoenicians sold wine: the wine trade with Egypt is vividly documented by the shipwrecks located in 1997 in the open sea 50 kilometres (30 mi) west of Ascalon.[47] Pottery kilns at Tyre in South Lebanon and Sarepta produced the large terracotta jars used for transporting wine. From Egypt, the Phoenicians bought Nubian gold. Additionally, great cedar logs were traded with lumber-poor Egypt for significant sums. Sometime between 1075 and 1060 BC an Egyptian envoy by the name of Wen-Amon visited Phoenicia and secured seven great cedar logs in exchange for a mixed cargo including "4 crocks and 1 kak-men of gold; 5 silver jugs; 10 garments of royal linen; 10 kherd of good linen from Upper Egypt; 500 rolls of finished papyrus; 500 cows' hides; 500 ropes; 20 bags of lentils and 30 baskets of fish." Those logs were then moved by ship from Phoenicia to Egypt.[48]
From elsewhere, they obtained other materials, perhaps the most important being silver from (at least) Sardinia and the Iberian Peninsula. Tin was required which, when smelted with copper from Cyprus, created the durable metal alloy bronze. The archaeologist Glenn Markoe suggests that tin "may have been acquired from Galicia by way of the Atlantic coast or southern Spain; alternatively, it may have come from northern Europe (Cornwall or Brittany) via the Rhone valley and coastal Massalia".[49] Strabo states that there was a highly lucrative Phoenician trade with Britain for tin via the Cassiterides whose location is unknown but may have been off the northwest coast of the Iberian Peninsula.[50] Professor Timothy Champion, discussing Diodorus Siculus's comments on the tin trade, states that "Diodorus never actually says that the Phoenicians sailed to Cornwall. In fact, he says quite the opposite: the production of Cornish tin was in the hands of the natives of Cornwall, and its transport to the Mediterranean was organised by local merchants, by sea and then over land through France, well outside Phoenician control."[51]
Tarshish (Hebrew: תרשיש) occurs in the Hebrew Bible with several uncertain meanings, and one of the most recurring is that Tarshish is a place, probably a city or country, far from the Land of Israel by sea, where trade occurs with Israel and Phoenicia. It was a place where Phoenicians reportedly obtained different metals, particularly silver, during the reign of Solomon. The Septuagint, the Vulgate and the Targum of Jonathan render Tarshish as Carthage, but other biblical commentators read it as Tartessos, perhaps in ancient Hispania (Iberian Peninsula). William F. Albright (1941) and Frank M. Cross (1972)[52][53] suggested Tarshish might be or was Sardinia because of the discovery of the Nora Stone and Nora Fragment, the former of which mentions Tarshish in its Phoenician inscription. Christine M. Thompson (2003)[54] identified a concentration of hacksilver hoards dating between c. 1200 and 586 BC in the Cisjordan Corpus. This silver-dominant Cisjordan Corpus is unparalleled in the contemporary Mediterranean, and within it occurs a unique concentration in Phoenicia of silver hoards dated between 1200 and 800 BC. Hacksilver objects in these Phoenician hoards have lead isotope ratios that match ores in Sardinia and Spain.[38] This metallic evidence agrees with the biblical memory of a western Mediterranean Tarshish that supplied Solomon with silver via Phoenicia. Assyrian records indicate Tarshish was an island, and the poetic construction of Psalm 72 points to its identity as a large island in the west: the island of Sardinia.[38]
The Phoenicians established commercial outposts throughout the Mediterranean, the most strategically important being Carthage in Northwest Africa, southeast of Sardinia on the peninsula of present-day Tunisia. Ancient Gaelic mythologies attribute a Phoenician/Scythian influx to Ireland by a leader called Fenius Farsa. Others also sailed south along the coast of Africa. A Carthaginian expedition led by Hanno the Navigator explored and colonized the Atlantic coast of Africa as far as the Gulf of Guinea; and according to Herodotus, a Phoenician expedition sent down the Red Sea by pharaoh Necho II of Egypt (c. 600 BC) even circumnavigated Africa and returned through the Pillars of Hercules after three years. Using gold obtained by expansion of the African coastal trade following the Hanno expedition, Carthage minted gold staters in 350 BC bearing a pattern, in the reverse exergue of the coins, which Mark McMenamin has controversially argued could be interpreted as a map. According to McMenamin, the Mediterranean is represented as a rectangle in the centre, a triangle to the right represents India in the east, and an irregular shape on the left represents America to the west.[55][56]
In the 2nd millennium BC, the Phoenicians traded with the Somalis. Through the Somali city-states of Mosylon, Opone, Malao, Sarapion, Mundus and Tabae, trade flourished.
The Greeks had two names for Phoenician ships: hippoi and galloi. Galloi means tubs and hippoi means horses. These names are readily explained by depictions of Phoenician ships in the palaces of Assyrian kings from the 7th and 8th centuries, as the ships in these images are tub shaped (galloi) and have horse heads on the ends of them (hippoi). It is possible that these hippoi derive from Phoenician connections with the Greek god Poseidon, equated with the Semitic god Yam.
In 2014, a Phoenician trading ship, dating to 700 BC, was found near Gozo island. The vessel was about 50 feet long and contained 50 amphorae full of wine and oil.[57]
The Tel Balawat gates (850 BC) are found in the palace of Shalmaneser III, an Assyrian king, near Nimrud. They are made of bronze, and they portray ships coming to honor Shalmaneser.[58][59]
The Khorsabad bas-relief (7th century BC) shows the transportation of timber (most likely cedar) from Lebanon. It is found in the palace built specifically for Sargon II, another Assyrian king, at Khorsabad, now northern Iraq.[60]
From the 10th century BC, the Phoenicians' expansive culture led them to establish cities and colonies throughout the Mediterranean. Canaanite deities like Baal and Astarte were being worshipped from Cyprus to Sardinia, Malta, Sicily, Spain, Portugal, and most notably at Carthage (Qart Hadašt) in modern Tunisia.
Modern Algeria
Cyprus
Modern Italy
Modern Libya
The islands of Malta
Modern Mauritania
Modern Portugal
Modern Spain
Modern Tunisia
Modern Turkey
Modern Morocco
Other colonies
The Phoenician alphabet was one of the first (consonantal) alphabets with a strict and consistent form. It is assumed that it adopted its simplified linear characters from an as-yet unattested early pictorial Semitic alphabet developed some centuries earlier in the southern Levant.[70][71] It is likely that the precursor to the Phoenician alphabet was of Egyptian origin, since Middle Bronze Age alphabets from the southern Levant resemble Egyptian hieroglyphs or an early alphabetic writing system found at Wadi-el-Hol in central Egypt.[72][73] In addition to being preceded by proto-Canaanite, the Phoenician alphabet was also preceded by an alphabetic script of Mesopotamian origin called Ugaritic. The development of the Phoenician alphabet from the Proto-Canaanite coincided with the rise of the Iron Age in the 11th century BC.[74]
This alphabet has been termed an abjad, that is, a script that contains no vowels; the term itself derives from the first four letters: aleph, beth, gimel, and daleth.
The oldest known representation of the Phoenician alphabet is inscribed on the sarcophagus of King Ahiram of Byblos, dating to the 11th century BC at the latest. Phoenician inscriptions are found in Lebanon, Syria, Israel, Cyprus and other locations, as late as the early centuries of the Christian Era. The Phoenicians are credited with spreading the Phoenician alphabet throughout the Mediterranean world.[75] Phoenician traders disseminated this writing system along Aegean trade routes, to Crete and Greece. The Greeks adopted the majority of these letters but changed some of them to vowels which were significant in their language, giving rise to the first true alphabet.
The Phoenician language is classified in the Canaanite subgroup of Northwest Semitic. Its later descendant in Northwest Africa is termed Punic. In Phoenician colonies around the western Mediterranean, beginning in the 9th century BC, Phoenician evolved into Punic. Punic Phoenician was still spoken in the 5th century AD: St. Augustine, for example, grew up in Northwest Africa and was familiar with the language.
Phoenician art lacks unique characteristics that might distinguish it from its contemporaries. This is due to its being highly influenced by foreign artistic cultures: primarily Egypt, Greece and Assyria. Phoenicians who were taught on the banks of the Nile and the Euphrates gained a wide artistic experience and finally came to create their own art, which was an amalgam of foreign models and perspectives.[76] In an article from The New York Times published on January 5, 1879, Phoenician art was described by the following:
He entered into other men's labors and made most of his heritage. The Sphinx of Egypt became Asiatic, and its new form was transplanted to Nineveh on the one side and to Greece on the other. The rosettes and other patterns of the Babylonian cylinders were introduced into the handiwork of Phoenicia, and so passed on to the West, while the hero of the ancient Chaldean epic became first the Tyrian Melkarth, and then the Herakles of Hellas.
The religious practices and beliefs of Phoenicia were cognate generally to their neighbours in Canaan, which in turn shared characteristics common throughout the ancient Semitic world.[77][78][79] "Canaanite religion was more of a public institution than of an individual experience." Its rites were primarily for city-state purposes; payment of taxes by citizens was considered in the category of religious sacrifices.[80] Unfortunately, many of the Phoenician sacred writings known to the ancients have been lost.[81][82]
Phoenicians were known for being very religious. While there remain favourable aspects regarding Canaanite religion,[83][84][85] several of its reported practices have been widely criticized, in particular, temple prostitution,[86] and child sacrifice.[87] "Tophets" built "to burn their sons and their daughters in the fire" are condemned by God in Jeremiah 7:30-32, and in 2nd Kings 23:10 (also 17:17). Notwithstanding these and other important differences, cultural religious similarities between the ancient Hebrews and the Phoenicians persisted.[83][88]
Canaanite religious mythology does not appear as elaborated compared with the existent literature of their cousin Semites in Mesopotamia. In Canaan the supreme god was called El ("god").[89][90] The son of El was Baal ("master", "lord"), a powerful dying-and-rising storm god.[91] Other gods were called by royal titles, as in Melqart meaning "king of the city",[92] or Adonis for "lord".[93] (Such epithets may often have been merely local titles for the same deities.) On the other hand, the Phoenicians, notorious for being secretive in business, might use these nondescript words as cover for the secluded name of the god,[94] known only to a select few initiated into the inmost circle, or not even used by them, much as their neighbors and close relatives the ancient Israelites/Judeans sometimes used the honorific Adonai (Heb: "My Lord") in place of the tetragrammaton, a practice which became standard (if not mandatory) in the Second Temple period onward.[95]
The Semitic pantheon was well-populated; which god became primary evidently depended on the exigencies of a particular city-state or tribal locale.[96][97] Due perhaps to the leading role of the city-state of Tyre, its reigning god Melqart was prominent throughout Phoenicia and overseas. Also of great general interest was Astarte, a form of the Babylonian Ishtar, a fertility goddess who also enjoyed regal and matronly aspects. The prominent deity Eshmun of Sidon was a healing god, seemingly cognate with deities such as Adonis (possibly a local variant of the same) and Attis. Associated with the fertility and harvest myth widespread in the region, in this regard Eshmun was linked with Astarte; other like pairings included Ishtar and Tammuz in Babylon, and Isis and Osiris in Egypt.[98]
Religious institutions of great antiquity in Tyre, called marzeh ("place of reunion"), did much to foster social bonding and "kin" loyalty.[99] These institutions held banquets for their membership on festival days. Various marzeh societies developed into elite fraternities, becoming very influential in the commercial trade and governance of Tyre. As now understood, each marzeh originated in the congeniality inspired and then nurtured by a series of ritual meals, shared together as trusted "kin", all held in honor of the deified ancestors.[100] Later, at the Punic city-state of Carthage, the "citizen body was divided into groups which met at times for common feasts." Such festival groups may also have composed the voting cohort for selecting members of the city-state's Assembly.[101][102]
Religion in Carthage was based on inherited Phoenician ways of devotion. In fact, until its fall embassies from Carthage would regularly make the journey to Tyre to worship Melqart, bringing material offerings.[103][104] Transplanted to distant Carthage, these Phoenician ways persisted, but naturally acquired distinctive traits: perhaps influenced by a spiritual and cultural evolution, or synthesizing Berber tribal practices, or transforming under the stress of political and economic forces encountered by the city-state. Over time the original Phoenician exemplar developed distinctly, becoming the Punic religion at Carthage.[105] "The Carthaginians were notorious in antiquity for the intensity of their religious beliefs."[106] "Besides their reputation as merchants, the Carthaginians were known in the ancient world for their superstition and intense religiosity. They imagined themselves living in a world inhabited by supernatural powers which were mostly malevolent. For protection they carried amulets of various origins and had them buried with them when they died."[107]
At Carthage, as at Tyre, religion was integral to the city's life. A committee of ten elders selected by the civil authorities regulated worship and built the temples with public funds. Some priesthoods were hereditary to certain families. Punic inscriptions list a hierarchy of cohen (priest) and rab cohenim (lord priests). Each temple was under the supervision of its chief priest or priestess. To enter the Temple of Eshmun one had to abstain from sexual intercourse for three days, and from eating beans and pork.[108] Private citizens also nurtured their own destiny, as evidenced by the common use of theophoric personal names, e.g., Hasdrubal, "he who has Baal's help" and Hamilcar [Abdelmelqart], "pledged to the service of Melqart".[109]
The city's legendary founder, Elissa or Dido, was the widow of Acharbas the high priest of Tyre in service to its principal deity Melqart.[110] Dido was also attached to the fertility goddess Astarte. With her Dido brought not only ritual implements for the worship of Astarte, but also her priests and sacred prostitutes (taken from Cyprus).[111] The agricultural turned healing god Eshmun was worshipped at Carthage, as were other deities. Melqart became supplanted at the Punic city-state by the emergent god Baal Hammon, which perhaps means "lord of the altars of incense" (thought to be an epithet to cloak the god's real name).[105][112] Later, another newly arisen deity arose eventually to reign supreme at Carthage, a goddess of agriculture and generation who manifested a regal majesty, Tanit.[113]
The name Baal Hammon has attracted scholarly interest, with most scholars viewing it as a probable derivation from the Northwest Semitic ḥammān ("brazier"), suggesting the meaning "Lord of the Brazier". This may be supported by incense burners and braziers found depicting the god. Frank Moore Cross argued for a connection to Hamon, the Ugaritic name for Mt. Amanus, an ancient name for the Nur Mountain range.[114] Modern scholars at first associated Baal Hammon with the Egyptian god Ammon of Thebes, both the Punic and the Egyptian being gods of the sun. Both also had the ram as a symbol. The Egyptian Ammon was known to have spread by trade routes to Libyans in the vicinity of modern Tunisia, well before the arrival of the Phoenicians. Yet Baal Hammon's derivation from Ammon is no longer considered the most likely, as Baal Hammon has since been traced to Syrio-Phoenician origins, confirmed by recent finds at Tyre.[115] Baal Hammon is also presented as a god of agriculture: "Baal Hammon's power over the land and its fertility rendered him of great appeal to the inhabitants of Tunisia, a land of fertile wheat- and fruit-bearing plains."[116][117]
"In Semitic religion El, the father of the gods, had gradually been shorn of his power by his sons and relegated to a remote part of his heavenly home; in Carthage, on the other hand, he became, once more, the head of the pantheon, under the enigmatic title of Ba'al Hammon."
Prayers of individual Carthaginians were often addressed to Baal Hammon. Offerings to Hammon also evidently included child sacrifice.[118][119][120] Diodorus (late 1st century BC) wrote that when Agathocles had attacked Carthage (in 310) several hundred children of leading families were sacrificed to regain the god's favour.[121] In modern times, the French novelist Gustave Flaubert's 1862 work Salammbô graphically featured this god as accepting such sacrifice.[122]
The goddess Tanit during the 5th and 4th centuries became queen goddess, supreme over the city-state of Carthage, thus outshining the former chief god and her associate, Baal-Hammon.[124][125] Tanit was represented by "palm trees weighed down with dates, ripe pomegranates ready to burst, lotus or lilies coming into flower, fish, doves, frogs... ." She gave to mankind a flow of vital energies.[126][127] Tanit may be Berbero-Libyan in origin, or at least assimilated to a local deity.[128][129]
Another view, supported by recent finds, holds that Tanit originated in Phoenicia, being closely linked there to the goddess Astarte.[130][131] Tanit and Astarte: each one was both a funerary and a fertility goddess. Each was a sea goddess. As Tanit was associated with Ba'al Hammon the principal god in Punic Carthage, so Astarte was with El in Phoenicia. Yet Tanit was clearly distinguished from Astarte. Astarte's heavenly emblem was the planet Venus, Tanit's the crescent moon. Tanit was portrayed as chaste; at Carthage religious prostitution was apparently not practiced.[132][133] Yet temple prostitution played an important role in Astarte's cult at Phoenicia. Also, the Greeks and Romans did not compare Tanit to the Greek Aphrodite nor to the Roman Venus as they would Astarte. Rather the comparison of Tanit would be to Hera and to Juno, regal goddesses of marriage, or to the goddess Artemis of child-birth and the hunt.[134] Tertullian (c. 160 – c. 220), the Christian theologian and native of Carthage, wrote comparing Tanit to Ceres, the Roman mother goddess of agriculture.[135]
Tanit has also been identified with three different Canaanite goddesses (all being sisters/wives of El): the above 'Astarte; the virgin war goddess 'Anat; and the mother goddess 'Elat or Asherah.[136][137][138] With her being a goddess, or symbolizing a psychic archetype, accordingly it is difficult to assign a single nature to Tanit, or clearly to represent her to consciousness.[139]
A problematic theory derived from sociology of religion proposes that as Carthage passed from being a Phoenician trading station into a wealthy and sovereign city-state, and from a monarchy anchored to Tyre into a native-born Libyphoenician oligarchy, Carthaginians began to turn away from deities associated with Phoenicia, and slowly to discover or synthesize a Punic deity, the goddess Tanit.[140] A parallel theory posits that when Carthage acquired as a source of wealth substantial agricultural lands in Africa, a local fertility goddess, Tanit, developed or evolved eventually to become supreme.[107] A basis for such theories may well be the religious reform movement that emerged and prevailed at Carthage during the years 397-360. The catalyst for such dramatic change in Punic religious practice was their recent defeat in war when led by their king Himilco (d. 396) against the Greeks of Sicily.[141]
Such transformation of religion would have been instigated by a faction of wealthy land owners at Carthage, including these reforms: overthrow of the monarchy; elevation of Tanit as queen goddess and decline of Baal Hammon; allowance of foreign cults of Greek origin into the city (Demeter and Kore); decline in child sacrifice, with most votive victims changed to small animals, and with the sacrifice not directed for state purposes but, when infrequently done, performed to solicit the deity for private, family favors. This bold historical interpretation understands the reformer's motivation as "the reaction of a wealthy and cultured upper class against the primitive and antiquated aspects of the Canaanite religion, and also a political move intended to break the power of a monarchy which ruled by divine authority." The reform's popularity was precarious at first. Later, when the city was in danger of imminent attack in 310, there would be a marked regression to child sacrifice. Yet eventually the cosmopolitan religious reform and the popular worship of Tanit together contributed to "breaking through the wall of isolation which had surrounded Carthage."[142][143][144]
"When the Romans conquered Africa, Carthaginian religion was deeply entrenched even in Libyan areas, and it retained a great deal of its character under different forms." Tanit became Juno Caelestis, "and Caelestis was supreme at Carthage itself until the triumph of Christianity, just as Tanit had been in pre-Roman times." [128] Regarding Berber (Libyan) religious beliefs, it has also been said:
"[Berber] belief in the powers of the spirits of the ancestors was not eclipsed by the introduction of new gods--Hammon, or Tanit--but existed in parallel with them. It is this same duality, or readiness to adopt new cultural forms while retaining the old on a more intimate level, which characterizes the [Roman era]."[145]
Such Berber ambivalence, the ability to entertain multiple mysteries concurrently, apparently characterized their religion during the Punic era also. After the passing of Punic power, the great Berber king Masinissa (r. 202–148), who long fought and challenged Carthage, was widely venerated by later generations of Berbers as divine.[146]
Phoenician culture had a huge effect upon the cultures of the Mediterranean basin in the early Iron Age, and had been affected by them in turn. For example, in Phoenicia, the tripartite division between Baal, Mot and Yam seems to have influenced the Greek division between Zeus, Hades and Poseidon.[147] The Tartessos region probably embraced the whole southern part of the Iberian Peninsula (Strabo 3.2.11).[148] In various Mediterranean ports during the classical period, Phoenician temples sacred to Melkart were recognized as sacred to Greek Hercules. Stories like the Rape of Europa, and the coming of Cadmus also draw upon Phoenician influence.
The recovery of the Mediterranean economy after the late Bronze Age collapse (c. 1200 BC) seems to have been largely due to the work of Phoenician traders and merchant princes, who re-established long distance trade between Egypt and Mesopotamia in the 10th century BC.
There are many countries and cities around the Mediterranean region that derive their names from the Phoenician Language. Below is a list with the respective meanings:
Towards the end of the Bronze Age (around 1200 BC) there was trade between the Canaanites (early Phoenicians), Egypt, Cyprus, and Greece. In a shipwreck found off the coast of Turkey (the Uluburun wreck), Canaanite storage pottery along with pottery from Cyprus and Greece was found. The Phoenicians were famous metalworkers, and by the end of the 8th century BC, Greek city-states were sending out envoys to the Levant (the eastern Mediterranean) for metal goods.[149]
The height of Phoenician trade was circa the 7th and 8th centuries BC. There is a dispersal of imports (ceramic, stone, and faience) from the Levant that traces a Phoenician commercial channel to the Greek mainland via the central Aegean.[149] Athens shows little evidence of this trade with few eastern imports, but other Greek coastal cities are rich with eastern imports that evidence this trade.[150]
Al Mina is a specific example of the trade that took place between the Greeks and the Phoenicians.[151] It has been theorized that by the 8th century BC, Euboean traders established a commercial enterprise with the Levantine coast and were using Al Mina (in Syria) as a base for this enterprise. There is still some question about the veracity of these claims concerning Al Mina.[150] The Phoenicians even got their name from the Greeks due to their trade. Their most famous trading product was purple dye, the Greek word for which is phoenos.[152]
The Phoenician phonetic alphabet was adopted and modified by the Greeks probably in the 8th century BC (around the time of the hippoi depictions). This most likely did not come from a single instance but from a culmination of commercial exchange.[152] This means that before the 8th century, there was a relationship between the Greeks and the Phoenicians. Though there is no evidence to support the suggestion, it is probable that during this period there was also a passing of religious ideas.[citation needed] The legendary Phoenician hero Cadmus is credited with bringing the alphabet to Greece, but it is more plausible that it was brought by Phoenician emigrants to Crete,[153] whence it gradually diffused northwards.
In both Phoenician and Greek mythologies, Cadmus is a Phoenician prince, the son of Agenor, the king of Tyre in South Lebanon. Herodotus credits Cadmus for bringing the Phoenician alphabet to Greece[154] approximately sixteen hundred years before Herodotus' time, or around 2000 BC,[155] as he attested:
These Phoenicians who came with Cadmus and of whom the Gephyraeans were a part brought with them to Hellas, among many other kinds of learning, the alphabet, which had been unknown before this, I think, to the Greeks. As time went on the sound and the form of the letters were changed.
Due to the number of deities similar to the "Lord of the Sea" in classical mythology, there have been many difficulties attributing one specific name to the sea deity or the "Poseidon-Neptune" figure of Phoenician religion. This figure of "Poseidon-Neptune" is mentioned by authors and in various inscriptions as being very important to merchants and sailors,[156] but a singular name has yet to be found. There are, however, names for sea gods from individual city-states. Yamm is the god of the sea of Ugarit, an ancient city-state north of Phoenicia. Yamm and Baal, the storm god of Ugaritic myth and often associated with Zeus, have an epic battle for power over the universe. While Yamm is the god of the sea, he truly represents vast chaos.[157] Baal, on the other hand, is a representative of order. In Ugaritic myth, Baal overcomes Yamm's power. In some versions of this myth, Baal kills Yamm with a mace fashioned for him, and in others, the goddess Athtart saves Yamm and says that since he is defeated, he should stay in his own province. Yamm is the brother of the god of death, Mot.[158] Some scholars have identified Yamm with Poseidon, although he has also been identified with Pontus.[159]
In his Republic, Greek philosopher Plato contends that the love of money is a tendency of the soul found amongst Phoenicians and Egyptians, which distinguishes them from the Greeks who tend towards the love of knowledge.[160] In his Laws, he asserts that this love of money has led the Phoenicians and Egyptians to develop skills in cunning and trickery rather than wisdom.[161]
In his Histories, Herodotus gives the Persian and Greek accounts of a series of kidnappings that led to the Trojan War. While docked at a trading port in Argos, the Phoenicians kidnapped a group of Greek women, including King Inachus's daughter, Io. The Greeks then retaliated by kidnapping Europa, a Phoenician, and later Medea. The Greeks refused to compensate the Phoenicians for the additional abduction, a fact which Paris used a generation later to justify the abduction of Helen from Argos. The Greeks then retaliated by waging war against Troy. After Troy's fall the Persians considered the Greeks to be their enemy.[162]
Hiram (also spelled Huram), the king of Tyre, is associated with the building of Solomon's temple.
1 Kings 5:1 says: "Hiram king of Tyre sent his servants to Solomon; for he had heard that they had anointed him king in the place of his father: for Hiram was ever a lover of David." 2 Chronicles 2:14 says: "The son of a woman of the daughters of Dan, and his father [was] a man of Tyre, skillful to work in gold, silver, brass, iron, stone, timber, royal purple (from the Murex), blue, and in crimson, and fine linens; also to grave any manner of graving, and to find out every device which shall be put to him ..."
This is the architect of the Temple, Hiram Abiff of Masonic lore.
Later, reforming prophets railed against the practice of drawing royal wives from among foreigners: Elijah execrated Jezebel, the princess from Tyre in South Lebanon who became a consort of King Ahab and introduced the worship of her god Baal.
Long after Phoenician culture had flourished and Phoenicia had ceased to exist as a political entity, Hellenized natives of the region where Canaanites still lived were referred to as "Syro-Phoenicians", as in the Gospel of Mark 7:26: "The woman was a Greek, a Syro-phoenician by birth".
The word Bible itself derives from Greek biblion, which means "book" and either derives from, or is the (perhaps ultimately Egyptian) origin of Byblos, the Greek name of the Phoenician city Gebal.[163]
The legacies of the Phoenicians include:
"Thus the winged figure of Tanith, the Carthaginian goddess of heaven, standing beneath the vault of heaven and the zodiac, holds the sun and moon in her hands, and is [flanked] by pillars, the symbols of the Great Mother Goddess. But on the lower plane of the stele, we find the same goddess stylized with upraised arms, possibly as a tree assimilated to the Egyptian life symbol. Her head is the sun, an illusion to the tree birth of the sun, and she is accompanied by two doves, the typical bird of the Great Goddess." The "Egyptian life symbol" refers to the ankh.
"It seems probable, therefore, that Tanith was a pre-Phoenician goddess of fertility of the Hamites, ...that she was so popular that after the coming of the Phoenicians they too worshipped her to such a degree that she largely displaced their native goddess Astart."Barton (1934), p.?305
"In Ugaritic mythology, Anath is by far the most important female figure, the goddess of love and war, virginal yet wanton, amorous yet given to uncontrollable outbursts of rage and appalling acts of cruelty. She is the daughter of El, the god of heaven, and of his wife the Lady Asherah of the Sea. ... Her foremost lover was her brother Baal. ... She was easily provoked to violence and, once she began to fight, would go berserk, smiting and killing left and right." (60-2), who adds that the Phoenician Philo of Byblos (64ÿ141) compared Anath to the Greek virgin war goddess Athena. Also, Patai at 63-6 identifies Anath with the biblical "Queen of Heaven". At 61 Patai, referring to Anath in her r?le as goddess of love, mentions the Babylonian goddess Ishtar, and remarks that both Astarte and Anath as "typical goddesses of love, both chaste and promiscuous... [were] perennially fruitful without ever losing their virginity."
"Asherah was the chief goddess of the Canaanite pantheon... at Ugarit... . ...Asherah figured prominently as the wife of El the chief god. Her full name was 'Lady Asherah of the Sea'--apparently her domain proper was the sea, just as that of her husband El was heaven. She was, however, also referred to simply as Elath or Goddess. She was the 'Progenitress of the Gods': all other gods... were her children... . Asherah was a motherly goddess... ." Patai (1990), pp.?36-7. In his chapter "The Goddess Asherah" (34-53), Patai discusses widespread Hebrew worship of Asherah until the 6th century B.C.E. Patai (52ÿ3) notes ancient inscriptions (one found near Hebron) evidencing an early Jewish association of Asherah with Yahweh, a view repugnant to later orthodox Judaism.
It is an "illusion that an archetype can be finally explained and disposed of. Even the best attempts at explanation are only more or less successful translations into another metaphorical language. ... The most we can do is dream the myth onwards and give it a modern dress. And whatever [our] explanation or interpretation does to it, we do to our souls as well, with corresponding results for our own well being. ... Hence the "explanation" should always be such that the functional significance of the archetype remains unimpaired, so that an adequate and meaningful connection between the conscious mind and the archetype is assured. ... It represents or personifies certain instinctive data of the dark, primitive psyche, the real but invisible roots of consciousness." ... "The archetype... is a psychic organ present in all of us. ... There is no 'rational' substitute for the archetype any more than there is for the cerebellum or the kidneys."
What type of animal is a bush baby?
primates🚨Otolemur
Euoticus
Galago
Sciurocheirus
Galagoides
Galagos /ɡəˈleɪɡoʊz/, also known as bushbabies, bush babies, or nagapies (meaning "little night monkeys" in Afrikaans), are small nocturnal[2] primates native to continental Africa, and make up the family Galagidae (also sometimes called Galagonidae). They are sometimes included as a subfamily within the Lorisidae or Loridae.
According to some accounts, the name "bushbaby" comes from either the animal's cries or its appearance. The Afrikaans name nagapie is because they are almost exclusively seen at night, while the Ghanaian name aposor is given to them because of their firm grip on branches.[citation needed]
In both variety and abundance, the bushbabies are the most successful strepsirrhine primates in Africa, according to the African Wildlife Foundation.[3]
Galagos have large eyes that give them good night vision in addition to other characteristics, like strong hind limbs, acute hearing, and long tails that help them balance. Their ears are bat-like and allow them to track insects in the dark. They catch insects on the ground or snatch them out of the air. They are fast, agile creatures. As they bound through the thick bushes, they fold their delicate ears back to protect them. They also fold them during rest.[3] They have nails on most of their digits, except for the second toe of the hind foot, which bears a grooming claw. Their diet is a mixture of insects and other small animals, fruit, and tree gums.[4] They have pectinate (comb-like) incisors called toothcombs, and the dental formula 2.1.3.3 / 2.1.3.3 (upper / lower).
After a gestation period of 110–133 days, young galagos are born with half-closed eyes and are initially unable to move about independently. After a few (6–8) days, the mother carries the infant in her mouth, and places it on branches while feeding. Females may have singles, twins, or triplets, and may become very aggressive. Each newborn weighs less than half an ounce. For the first three days, the infant is kept in constant contact with the mother. The young are fed by the mother for six weeks and can feed themselves at two months. The young grow rapidly, often causing the mother to walk awkwardly as she transports them.[3]
Females maintain territories, but share them with their offspring. Males leave their mothers' territories after puberty, but females remain, forming social groups consisting of closely related females and their young. Adult males maintain separate territories, which overlap with those of the female social groups; generally, one adult male mates with all the females in an area. Males that have not established such territories sometimes form small bachelor groups.[4]
While keeping them as pets is not advised (like many other nonhuman primates, they are considered likely sources of diseases that can cross species barriers), it is certainly done. Equally, they are highly likely to attract attention from customs officials on importation into many countries. Reports from veterinary and zoological sources indicate captive lifetimes of 12.0 to 16.5 years, suggesting a natural lifetime over a decade.[5]
Galagos communicate both by calling to each other, and by marking their paths with urine. By following the scent of urine, they can land on exactly the same branch every time.[3] All species of galago produce species-specific 'loud calls' or 'advertisement calls'. These calls have multiple different functions. One function is long-distance identification and differentiation of individual species, and scientists are now able to recognize all known galago species by their 'loud calls'.[6] At the end of the night, group members use a special rallying call and gather to sleep in a nest made of leaves, a group of branches, or a hole in a tree.
Galagos have remarkable jumping abilities. The highest reliably reported jump for a galago is 2.25 m. According to a study published by the Royal Society, given the body mass of each animal and the fact that the leg muscles amount to about 25% of this, a galago's jumping muscles would have to perform six to nine times better than those of a frog.[7] This is thought to be due to elastic energy storage in tendons of the lower leg, allowing far greater jumps than would otherwise be possible for an animal of their size.[7] In mid-flight, they tuck their arms and legs close to the body, bringing them out at the last second to grab the branch. In a series of leaps, a galago can cover ten yards in mere seconds. The tail, which is longer than the length of the head and body combined, assists the powerful leg muscles in powering the jumps. They may also hop like a kangaroo or simply run/walk on four legs.[3] Such strong, complicated, and coordinated movements are due to the rostral half of the posterior parietal cortex, which is linked to the motor, premotor, and visuomotor areas of the frontal cortex.[8]
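A rough back-of-the-envelope check illustrates why elastic storage is invoked. Assuming the 2.25 m figure approximates the rise of the animal's centre of mass, ignoring losses, and using the 25% leg-muscle fraction quoted above, the mechanical work required per kilogram of leg muscle for a single jump is

\[
\frac{W}{m_{\text{muscle}}} = \frac{m\,g\,h}{0.25\,m} = \frac{g\,h}{0.25} \approx \frac{9.81 \times 2.25}{0.25} \approx 88\ \text{J/kg}.
\]

On the study's comparison, muscle working at frog-like levels would fall several times short of this figure, which is why pre-loading and recoil of the lower-leg tendons is proposed as the amplifier.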
Generally, the social structure of the galago has components of both social life and solitary life. This can be seen in their play. They swing off branches or climb high and throw things. Social play includes play fights, play grooming, and following-play. In following-play, two galagos jump sporadically and chase each other through the trees. The older galagos in a group prefer to rest alone, while younger ones are in constant contact with one another.[9] This is observed in the Galago garnetti species. Mothers often leave infants alone for long periods of time and do not attempt to stop infants from leaving them. On the other hand, the offspring tries to stay close to the mother, maintaining close proximity and initiating social interactions with her.[10]
Grooming is a very important part of galago daily life. They often autogroom before, during, and after rest. Social grooming is performed more often by males in the group. Females often reject the attempts made by the males to groom them.[9]
Galagos are currently grouped into three genera, with the two former members of the now defunct genus Galagoides returned to their original genus Galago:[1]
Family Galagidae - galagos, or bushbabies
A low-coverage genomic sequence of the northern greater galago, O. garnettii, is in progress. As it is a 'primitive' primate, the sequence will be particularly useful in bridging the sequences of higher primates (macaque, chimpanzee, human) to close nonprimates, such as rodents. The planned two-fold coverage will not be sufficient to create a full genome assembly, but will provide comparative data across most of the human assembly.[citation needed]
When was land reform programme introduced in india?
after 1947.🚨Land Reform refers to efforts to reform the ownership and regulation of land in India.
Land title formalisation has been part of India's state policy from the very beginning.[1] Independent India's most revolutionary land policy was perhaps the abolition of the Zamindari system (feudal land holding practices). Land-reform policy in India had two specific objectives: "The first is to remove such impediments to increase in agricultural production as arise from the agrarian structure inherited from the past. The second object, which is closely related to the first, is to eliminate all elements of exploitation and social injustice within the agrarian system, to provide security for the tiller of soil and assure equality of status and opportunity to all sections of the rural population." (Government of India 1961 as quoted by Appu 1996[2])
There are four main categories of reforms:
Since its independence in 1947, there have been voluntary and state-initiated/mediated land reforms in several states,[4][5] with the dual objectives of efficient use of land[3] and ensuring social justice.[6][7] The most notable and successful examples of land reforms are in the states of West Bengal and Kerala. Apart from these state-sponsored attempts at reforming land ownership and control, there was another attempt to bring changes to the regime which achieved limited success, famously known as the Bhoodan movement (Government of India, Ministry of Rural Development 2003, Annex XXXIX). Other research has shown that during the movement, in the Vidarbha region, 14 percent of the land records were incomplete, thus preventing transfer to the poor, and 24 percent of the land promised never actually became part of the movement. The Gramdan movement, which arguably took place in 160,000 pockets, did not legalise the process under state laws (Committee on Land Reform 2009, 77, Ministry of Rural Development).
After promising land reforms and being elected to power in West Bengal in 1977, the Communist Party of India (Marxist) (CPI(M)) kept its word and initiated gradual land reforms, such as Operation Barga. The result was a more equitable distribution of land among landless farmers, as well as the enumeration of landless farmers. This ensured an almost lifelong loyalty from the farmers, and the communists remained in power until the 2011 assembly election.[8]
In Kerala, the only other large state where the CPI(M) came to power, state administrations have carried out the most extensive land, tenancy and agrarian labour wage reforms in the non-socialist late-industrialising world.[9] Another successful land reform programme was launched in Jammu and Kashmir after 1947.
All in all, land reforms have been successful only in pockets of the country, as people have often found loopholes in the laws that set limits on the maximum area of land that is allowed to be held by any one person.[6][10][11][12]
The following table shows land ceilings for each state in India.
Where was fifty shades of grey first published?
The Writers' Coffee Shop, a virtual publisher based in Australia🚨
Fifty Shades of Grey is a 2011 erotic romance novel by British author E. L. James.[1] It is the first instalment in the Fifty Shades trilogy that traces the deepening relationship between a college graduate, Anastasia Steele, and a young business magnate, Christian Grey. It is notable for its explicitly erotic scenes featuring elements of sexual practices involving bondage/discipline, dominance/submission, and sadism/masochism (BDSM). Originally self-published as an ebook and a print-on-demand paperback, the novel's publishing rights were acquired by Vintage Books in March 2012.
Fifty Shades of Grey has topped best-seller lists around the world, selling over 125 million copies worldwide by June 2015. It has been translated into 52 languages, and set a record in the United Kingdom as the fastest-selling paperback of all time. Critical reception of the book, however, has tended towards the negative, with the quality of its prose generally seen as poor. Universal Pictures and Focus Features produced a film adaptation, which was released on 13 February 2015[2] and also received generally unfavourable reviews.
The second and third volumes of the trilogy, Fifty Shades Darker and Fifty Shades Freed, were published in 2012. Grey: Fifty Shades of Grey as Told by Christian, a version of Fifty Shades of Grey being told from Christian's point of view, was published in June 2015.
Anastasia "Ana" Steele is a 21-year-old college senior attending Washington State University in Vancouver, Washington. Her best friend is Katherine "Kate" Kavanagh, who writes for the college newspaper. Due to an illness, Kate is unable to interview 27-year-old Christian Grey, a successful and wealthy Seattle entrepreneur, and asks Ana to take her place. Ana finds Christian attractive as well as intimidating. As a result, she stumbles through the interview and leaves Christian's office believing it went poorly. Ana does not expect to meet Christian again, but he appears at the hardware store where she works. While he purchases various items including cable ties, masking tape, and rope, Ana informs Christian that Kate would like some photographs to illustrate her article about him. Christian gives Ana his phone number. Later, Kate urges Ana to call Christian and arrange a photo shoot with their photographer friend, Jos Rodriguez.
The next day José, Kate, and Ana arrive for the photo shoot at the Heathman Hotel, where Christian is staying. Christian asks Ana out for coffee and asks if she is dating anyone, specifically José. Ana replies that she is not dating anyone. During the conversation, Ana learns that Christian is also single, but he says he is not romantic. Ana is intrigued but believes she is not attractive enough for Christian. Later, Ana receives a package from Christian containing first edition copies of Tess of the d'Urbervilles, which stuns her. Later that night, Ana goes out drinking with her friends and ends up drunk dialling Christian, who informs her that he will be coming to pick her up because of her inebriated state. Ana goes outside to get some fresh air, and José attempts to kiss her, but he is stopped by Christian's arrival. Ana leaves with Christian, but not before she discovers that Kate has been flirting with Christian's brother, Elliot. Later, Ana wakes to find herself in Christian's hotel room, where he scolds her for not taking proper care of herself. Christian then reveals that he would like to have sex with her. He initially says that Ana will first have to fill in paperwork, but later goes back on this statement after making out with her in the elevator.
Ana goes on a date with Christian, on which he takes her in his helicopter, Charlie Tango, to his apartment. Once there, Christian insists that she sign a non-disclosure agreement forbidding her from discussing anything they do together, which Ana agrees to sign. He also mentions other paperwork, but first takes her to his playroom full of BDSM toys and gear. There, Christian informs her that the second contract will be one of dominance and submission, and there will be no romantic relationship, only a sexual one. The contract even forbids Ana from touching Christian or making eye contact with him. At this point, Christian realises that Ana is a virgin and takes her virginity without making her sign the contract. The following morning, Ana and Christian again have sex. His mother arrives moments after their sexual encounter and is surprised by the meeting, having previously thought Christian was homosexual, because he was never seen with a woman. Christian later takes Ana out to eat, and he reveals that he lost his virginity at age 15 to one of his mother's friends, Elena Lincoln, and that his previous dominant/submissive relationships failed due to incompatibility. Christian also reveals that in his first dominant/submissive relationship he was the submissive. Christian and Ana plan to meet again, and he takes Ana home, where she discovers several job offers and admits to Kate that she and Christian had sex.
Over the next few days, Ana receives several packages from Christian. These include a laptop to enable her to research the BDSM lifestyle in consideration of the contract; to communicate with him, since she has never previously owned a computer; and to receive a more detailed version of the dominant/submissive contract. She and Christian email each other, with Ana teasing him and refusing to honour parts of the contract, such as only eating foods from a specific list. Ana later meets with Christian to discuss the contract and becomes overwhelmed by the potential BDSM arrangement and the potential of having a sexual relationship with Christian that is not romantic in nature. Because of these feelings, Ana runs away from Christian and does not see him again until her college graduation, where he is a guest speaker. During this time, Ana agrees to sign the dominant/submissive contract. Ana and Christian once again meet to further discuss the contract, and they go over Ana's hard and soft limits. Christian spanks Ana for the first time, and the experience leaves her both enticed and slightly confused. This confusion is exacerbated by Christian's lavish gifts and the fact that he brings her to meet his family. The two continue with the arrangement without Ana's having yet signed the contract. After successfully landing a job with Seattle Independent Publishing (SIP), Ana further bristles under the restrictions of the non-disclosure agreement and her complex relationship with Christian. The tension between Ana and Christian eventually comes to a head after Ana asks Christian to punish her in order to show her how extreme a BDSM relationship with him could be. Christian fulfils Ana's request, beating her with a belt, and Ana realises they are incompatible. Devastated, she breaks up with Christian and returns to the apartment she shares with Kate.
The Fifty Shades trilogy was developed from a Twilight fan fiction series originally titled Master of the Universe and published episodically on fan-fiction websites under the pen name "Snowqueen's Icedragon". The piece featured characters named after Stephenie Meyer's characters in Twilight, Edward Cullen and Bella Swan. After comments concerning the sexual nature of the material, James removed the story from the fan-fiction websites and published it on her own website, FiftyShades.com. Later she rewrote Master of the Universe as an original piece, with the principal characters renamed Christian Grey and Anastasia Steele, and removed it from her website before publication.[3] Meyer commented on the series, saying "that's really not my genre, not my thing... Good on her – she's doing well. That's great!"[4]
This reworked and extended version of Master of the Universe was split into three parts. The first, titled Fifty Shades of Grey, was released as an e-book and a print-on-demand paperback in May 2011 by The Writers' Coffee Shop, a virtual publisher based in Australia.[5][6] The second volume, Fifty Shades Darker, was released in September 2011; and the third, Fifty Shades Freed, followed in January 2012. The Writers' Coffee Shop had a restricted marketing budget and relied largely on book blogs for early publicity, but sales of the novel were boosted by word-of-mouth recommendation. The book's erotic nature and the perception that its fan base was composed largely of married women over thirty led to the book being dubbed "Mommy Porn" by some news agencies.[7][8] The book has also reportedly been popular among teenage girls and college women.[8][9][10] By the release of the final volume in January 2012, news networks in the United States had begun to report on the Fifty Shades trilogy as an example of viral marketing and of the rise in popularity of female erotica, attributing its success to the discreet nature of e-reading devices.[11][12] Due to the heightened interest in the series, the license to the Fifty Shades trilogy was picked up by Vintage Books for re-release in a new and revised edition in April 2012.[13][14] The attention that the series has garnered has also helped to spark a renewed interest in erotic literature. Many other erotic works quickly became best-sellers following Fifty Shades' success, while other popular works, such as Anne Rice's The Sleeping Beauty trilogy, have been reissued (this time without pseudonyms) to meet the higher demand.[15]
On 1 August 2012, Amazon UK announced that it had sold more copies of Fifty Shades of Grey than it had the entire Harry Potter series combined, making E. L. James its best-selling author, replacing J. K. Rowling, though worldwide the Harry Potter series sold more than 450 million copies compared with Fifty Shades of Grey's sales of 60 million copies.[16]
Fifty Shades of Grey has topped best-seller lists around the world, including those of the United Kingdom and the United States.[17][18] The series had sold over 125 million copies worldwide by June 2015 and has been translated into 52 languages,[19][20] and set a record in the United Kingdom as the fastest-selling paperback of all time.[21]
It has received mixed to negative reviews, with most critics noting the poor literary quality of the work. Salman Rushdie said about the book: "I've never read anything so badly written that got published. It made Twilight look like War and Peace."[22] Maureen Dowd described the book in The New York Times as being written "like a Brontë devoid of talent," and said it was "dull and poorly written."[23] Jesse Kornbluth of The Huffington Post said: "As a reading experience, Fifty Shades ... is a sad joke, puny of plot".[24]
Princeton professor April Alliston wrote, "Though no literary masterpiece, Fifty Shades is more than parasitic fan fiction based on the recent Twilight vampire series."[25] Entertainment Weekly writer Lisa Schwarzbaum gave the book a "B+" rating and praised it for being "in a class by itself."[26] British author Jenny Colgan in The Guardian wrote "It is jolly, eminently readable and as sweet and safe as BDSM (bondage, discipline, sadism and masochism) erotica can be without contravening the trade descriptions act" and also praised the book for being "more enjoyable" than other "literary erotic books".[27] The Daily Telegraph called the book "the definition of a page-turner", noting that it was both "troubling and intriguing".[28] A reviewer for the Ledger-Enquirer described the book as guilty fun and escapism, writing that it "also touches on one aspect of female existence [female submission]. And acknowledging that fact – maybe even appreciating it – shouldn't be a cause for guilt."[29] The New Zealand Herald stated that the book "will win no prizes for its prose" and that "there are some exceedingly awful descriptions," although it was also an easy read; "(If you only) can suspend your disbelief and your desire to – if you'll pardon the expression – slap the heroine for having so little self respect, you might enjoy it."[30] The Columbus Dispatch stated that, "Despite the clunky prose, James does cause one to turn the page."[31] Metro News Canada wrote that "suffering through 500 pages of this heroine's inner dialogue was torturous, and not in the intended, sexy kind of way".[32] Jessica Reaves, of the Chicago Tribune, wrote that the "book's source material isn't great literature", noting that the novel is "sprinkled liberally and repeatedly with asinine phrases", and described it as "depressing".[33]
The book garnered some accolades. In December 2012, it won both the "Popular Fiction" and "Book of the Year" categories in the UK National Book Awards.[34][35] In that same month, Publishers Weekly named E. L. James the 'Publishing Person of the Year', a decision that drew criticism in the LA Times and the New York Daily News, as summarised in The Christian Science Monitor.[36] Earlier, in April 2012, when E. L. James was listed as one of Time magazine's "100 Most Influential People in the World",[37] Richard Lawson of The Atlantic Wire criticised her inclusion due to the trilogy's fan fiction beginnings.[38]
Fifty Shades of Grey has attracted criticism due to its depictions of BDSM, with some BDSM participants stating that the book confuses BDSM with abuse and presents it as a pathology to be overcome, as well as showing incorrect and possibly dangerous BDSM techniques.[39][40]
Coinciding with the release of the book and its surprising popularity, injuries related to BDSM and sex toy use spiked dramatically. In 2012, the year after the book was published, injuries requiring Emergency Room visits increased by over 50% from 2010 (the year before the book was published). This is speculated to be due to people unfamiliar with both the proper use of these toys and the safe practice of bondage and other "kinky" sexual fetishes attempting what they had read in the book.[41]
There has also been criticism against the fact that BDSM is part of the book. Archbishop Dennis Schnurr of Cincinnati said in an early February 2015 letter, "The story line is presented as a romance; however, the underlying theme is that bondage, dominance, and sadomasochism are normal and pleasurable."[42] The feminist anti-pornography organisation Stop Porn Culture called for a boycott of the movie based on the book because of its sex scenes involving bondage and violence.[43] By contrast, Timothy Laurie and Jessica Kean argue that "film fleshes out an otherwise legalistic concept like 'consent' into a living, breathing, and at times, uncomfortable interpersonal experience," and "dramatises the dangers of unequal negotiation and the practical complexity of identifying one's limits and having them respected."[44]
Several critics and scientists have expressed concern that the nature of the main couple's relationship is not BDSM at all, but rather is characteristic of an abusive relationship. In 2013, social scientist Professor Amy E. Bonomi published a study wherein the books were read by multiple professionals and assessed for characteristics of intimate partner violence, or IPV, using the CDC's standards for emotional abuse and sexual violence. The study found that nearly every interaction between Ana and Christian was emotionally abusive in nature, including stalking, intimidation, and isolation. The study group also observed pervasive sexual violence within the CDC's definition, including Christian's use of alcohol to circumvent Ana's ability to consent, and that Ana exhibits classic signs of an abused woman, including constant perceived threat, stressful managing, and altered identity.[45][46]
A second study in 2014 was conducted to examine the health of women who had read the series, compared with a control group that had never read any part of the novels. The results showed a correlation between having read at least the first book and exhibiting signs of an eating disorder, having romantic partners that were emotionally abusive and/or engaged in stalking behaviour, engaging in binge drinking in the last month, and having 5 or more sexual partners before age 24. The authors could not conclude whether women already experiencing these "problems" were drawn to the series, or if the series influenced these behaviours to occur after reading by creating underlying context.[47] The study's lead researcher contends that the books romanticise dangerous behaviour and "perpetuate dangerous abuse standards."[48] The study was limited in that only women up to age 24 were studied, and no distinction was made among the reader sample between women who enjoyed the series and those that had a strong negative opinion of it, having only read it out of curiosity due to the media hype or other obligation.[49]
At the beginning of the media hype, Dr. Drew and sexologist Logan Levkoff discussed on The Today Show[50] whether the book perpetuated violence against women; Levkoff said that while that is an important subject, this trilogy had nothing to do with it – this was a book about a consensual relationship. Dr. Drew commented that the book was "horribly written" in addition to being "disturbing" but stated that "if the book enhances women's real-life sex lives and intimacy, so be it."[51]
In March 2012, branches of the public library in Brevard County, Florida, removed copies of Fifty Shades of Grey from their shelves, with an official stating that it did not meet the selection criteria for the library and that reviews for the book had been poor. A representative for the library stated that it was due to the book's sexual content and that other libraries had declined to purchase copies for their branches.[52] Deborah Caldwell-Stone of the American Library Association commented that "If the only reason you don't select a book is that you disapprove of its content, but there is demand for it, there's a question of whether you're being fair. In a public library there is usually very little that would prevent a book from being on the shelf if there is a demand for the information."[52] Brevard County public libraries later made their copies available to their patrons due to public demand.[53]
In Macaé, Brazil, Judge Raphael Queiroz Campos ruled in January 2013 that bookstores throughout the city must either remove the series entirely from their shelves or ensure that the books are wrapped and placed out of the reach of minors.[54] The judge stated that he was prompted to make such an order after seeing children reading them,[55] basing his decision on a law stating that "magazines and publications whose content is improper or inadequate for children and adolescents can only be sold if sealed and with warnings regarding their content".[56]
In February 2015, the Malaysian Home Ministry banned the Fifty Shades of Grey books shortly after banning the film adaptation, having permitted the books in local bookstores for the previous three years, citing morality-related reasons.[57]
A film adaptation of the book was produced by Focus Features,[58] Michael De Luca Productions, and Trigger Street Productions,[59] with Universal Pictures and Focus Features securing the rights to the trilogy in March 2012.[60] Universal is also the film's distributor. Charlie Hunnam was originally cast in the role of Christian Grey alongside Dakota Johnson in the role of Anastasia Steele,[61][62] but Hunnam gave up the part in October 2013,[63] with Jamie Dornan announced for the role on 23 October.[64]
The film was released on 13 February 2015,[2] and although popular at the box office, critical reactions were mixed to negative.[65]
E. L. James announced the film's soundtrack would be released on 10 February 2015.[66][67] Prior to the soundtrack's release, the first single, "Earned It", by The Weeknd, was released on 24 December 2014.[68] On 7 January 2015, the second single, "Love Me like You Do" by Ellie Goulding was released.[69] Australian singer Sia released the soundtrack's third single, "Salted Wound", on 27 January 2015.[70]
An album of songs selected by E. L. James was released on 11 September 2012 by EMI Classics under the title Fifty Shades of Grey: The Classical Album, and reached number four on the US Billboard classical music albums chart in October 2012.[71][72] A Seattle P-I reviewer favourably wrote that the album would appeal both to fans of the series and to "those who have no intention of reading any of the Grey Shades".[73]
The Fifty Shades of Grey trilogy has inspired many parodies in print,[74][75] in film, online, and on stage. In November 2012, Universal Studios attempted to prevent the release of Fifty Shades of Grey: A XXX Adaptation, a pornographic film based on the novel, citing copyright and trademark infringement. Smash Pictures, the porn producer, later responded to the lawsuit with a counterclaim that "much or all" of the Fifty Shades material was placed in the public domain in its original Twilight-based form,[76] but later capitulated and stopped production of their film.[77]
Amazon.com lists over 50 book parodies, e.g.:
Stage productions include:
What is the point of a toll road?
help recoup the cost of road construction and maintenance🚨A toll road, also known as a turnpike or tollway, is a public or private road for which a fee (or toll) is assessed for passage. It is a form of road pricing typically implemented to help recoup the cost of road construction and maintenance.
Toll roads have existed in some form since antiquity, with tolls levied on passing travellers on foot, wagon or horseback; but their prominence increased with the rise of the automobile,[citation needed] and many modern tollways charge fees for motor vehicles exclusively. The amount of the toll usually varies by vehicle type, weight, or number of axles, with freight trucks often charged higher rates than cars.
Tolls are often collected at toll booths, toll houses, plazas, stations, bars, or gates. Some toll collection points are unmanned and the user deposits money in a machine which opens the gate once the correct toll has been paid. To cut costs and minimise time delay many tolls today are collected by some form of automatic or electronic toll collection equipment which communicates electronically with a toll payer's transponder. Some electronic toll roads also maintain a system of toll booths so people without transponders can still pay the toll, but many newer roads now use automatic number plate recognition to charge drivers who use the road without a transponder, and some older toll roads are being upgraded with such systems.
Criticisms of toll roads include the time taken to stop and pay the toll, and the cost of the toll booth operators – up to about one third of revenue in some cases. Automated toll-paying systems help minimise both of these. Others object to paying "twice" for the same road: in fuel taxes and with tolls.
In addition to toll roads, toll bridges and toll tunnels are also used by public authorities to generate funds to repay the cost of building the structures. Some tolls are set aside to pay for future maintenance or enhancement of infrastructure, or are applied as a general fund by local governments, not being earmarked for transport facilities. This is sometimes limited or prohibited by central government legislation. Also road congestion pricing schemes have been implemented in a limited number of urban areas as a transportation demand management tool to try to reduce traffic congestion and air pollution.[1]
Toll roads have existed for at least the last 2,700 years, as tolls had to be paid by travellers using the Susa–Babylon highway under the regime of Ashurbanipal, who reigned in the 7th century BC.[2] Aristotle and Pliny refer to tolls in Arabia and other parts of Asia. In India, before the 4th century BC, the Arthashastra notes the use of tolls. Germanic tribes charged tolls to travellers across mountain passes.
A 14th-century example (though not for a road) is Castle Loevestein in the Netherlands, which was built at a strategic point where two rivers meet. River tolls were charged on boats sailing along the river. The Øresund in Scandinavia was once subject to a toll payable to the Danish monarch, which provided a sizable portion of the king's revenue.
Many modern European roads were originally constructed as toll roads in order to recoup the costs of construction, maintenance and as a source of tax money that is paid primarily by someone other than the local residents. In 14th-century England, some of the most heavily used roads were repaired with money raised from tolls by pavage grants. Widespread toll roads sometimes restricted traffic so much, by their high tolls, that they interfered with trade and cheap transportation needed to alleviate local famines or shortages.[3]
Tolls were used in the Holy Roman Empire in the 14th and 15th centuries.
Industrialisation in Europe needed major improvements to the transport infrastructure which included many new or substantially improved roads, financed from tolls. The A5 road in Britain was built to provide a robust transport link between Britain and Ireland and had a toll house every few miles.
In the 20th century, road tolls were introduced in Europe to finance the construction of motorway networks and specific transport infrastructure such as bridges and tunnels. Italy was the first European country to charge motorway tolls, on a 50 km motorway section near Milan in 1924. It was followed by Greece, which made users pay for the network of motorways around and between its cities in 1927. Later in the 1950s and 1960s, France, Spain and Portugal started to build motorways largely with the aid of concessions, allowing rapid development of this infrastructure without massive State debts. Since then, road tolls have been introduced in the majority of the EU Member States.[4]
In the United States, prior to the introduction of the Interstate Highway System and the large federal grants supplied to states to build it, many states constructed their first controlled-access highways by floating bonds backed by toll revenues. Starting with the Pennsylvania Turnpike in 1940, and followed by similar roads in New Jersey (Garden State Parkway (1946) and New Jersey Turnpike, 1952), New York (New York State Thruway, 1954), Massachusetts (Massachusetts Turnpike, 1957), and others, numerous states throughout the 1950s established major toll roads. With the establishment of the Interstate Highway System in the late 1950s, toll road construction in the U.S. slowed down considerably, as the federal government now provided the bulk of funding to construct new freeways, and regulations required that such Interstate highways be free from tolls. Many older toll roads were added to the Interstate System under a grandfather clause that allowed tolls to continue to be collected on toll roads that predated the system. Some of these, such as the Connecticut Turnpike and the Richmond–Petersburg Turnpike, later removed their tolls when the initial bonds were paid off. Many states, however, have maintained the tolling of these roads as a consistent source of revenue.
As the Interstate Highway System approached completion during the 1980s, states began constructing toll roads again to provide new controlled-access highways which were not part of the original interstate system funding. Houston's outer beltway of interconnected toll roads began in 1983, and many states followed over the last two decades of the 20th century adding new toll roads, including the tollway system around Orlando, Florida, Colorado's E-470, and Georgia State Route 400.
London, in an effort to reduce traffic within the city, instituted the London congestion charge in 2003, effectively making all roads within the central charging zone tolled.
In the United States, as states looked for ways to construct new freeways without federal funding again, to raise revenue for continued road maintenance, and to control congestion, new toll road construction saw significant increases during the first two decades of the 21st century. Spurred on by two innovations – the electronic toll collection system and the advent of high occupancy and express lane tolls – many areas of the U.S. saw large road building projects in major urban areas. Electronic toll collection, first introduced in the 1980s, reduces operating costs by removing toll collectors from roads. Tolled express lanes, by which certain lanes of a freeway are designated "toll only", increase revenue by allowing an otherwise free-to-use highway to collect fees from drivers who pay to bypass traffic jams. The E-ZPass system, compatible with many state systems, is the largest ETC system in the U.S., and is used for both fully tolled highways and tolled express lanes. Maryland Route 200 and the Triangle Expressway in North Carolina were the first toll roads built without toll booths, with drivers charged via ETC or by optical license plate recognition and billed by mail.
Turnpike trusts were established in England and Wales from about 1706 in response to the need for better roads than the few and poorly-maintained tracks then available. Turnpike trusts were set up by individual Acts of Parliament, with powers to collect road tolls to repay loans for building, improving, and maintaining the principal roads in Britain. At their peak, in the 1830s, over 1,000 trusts[5] administered around 30,000 miles (48,000 km) of turnpike road in England and Wales, taking tolls at almost 8,000 toll-gates.[6] The trusts were ultimately responsible for the maintenance and improvement of most of the main roads in England and Wales, which were used to distribute agricultural and industrial goods economically. The tolls were a source of revenue for road building and maintenance, paid for by road users and not from general taxation. The turnpike trusts were gradually abolished from the 1870s. Most trusts improved existing roads, but some new roads, usually only short stretches, were also built. Thomas Telford's Holyhead road followed Watling Street from London but was exceptional in creating a largely new route beyond Shrewsbury, and especially beyond Llangollen. Built in the early 19th century, with many toll booths along its length, most of it is now the A5. In the modern day, one major toll road is the M6 Toll, relieving traffic congestion on the M6 in Birmingham. A few notable bridges and tunnels continue as toll roads including the Severn Bridge, the Dartford Crossing and Mersey Gateway bridge.
Some cities in Canada had toll roads in the 19th century. Roads radiating from Toronto required users to pay at toll gates along the street (Yonge Street, Bloor Street, Davenport Road, Kingston Road)[7] and disappeared after 1895.[8]
19th-century plank roads were usually operated as toll roads. One of the first U.S. motor roads, the Long Island Motor Parkway (which opened on October 10, 1908) was built by William Kissam Vanderbilt II, the great-grandson of Cornelius Vanderbilt. The road was closed in 1938 when it was taken over by the state of New York in lieu of back taxes.[9][10]
Road tolls were levied traditionally for a specific access (e.g. city) or for a specific infrastructure (e.g. roads, bridges). These concepts were widely used until the last century. However, the evolution in technology made it possible to implement road tolling policies based on different concepts. The different charging concepts are designed to suit different requirements regarding the purpose of the charge, the charging policy, the network to be charged, tariff class differentiation, etc.:[11]
Time-Based Charges and Access Fees: In a time-based charging regime, a road user has to pay for a given period of time in which they may use the associated infrastructure. With the practically identical access fees, the user pays for access to a restricted zone for a period of one or several days.
Motorway and other Infrastructure Tolling: The term tolling is used for charging a well-defined special and comparatively costly infrastructure, like a bridge, a tunnel, a mountain pass, a motorway concession or the whole motorway network of a country. Classically a toll is due when a vehicle passes a tolling station, be it a manual barrier-controlled toll plaza or a free-flow multi-lane station.
Distance or Area Charging: In a distance or area charging system concept, vehicles are charged per total distance driven in a defined area.
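As an illustration only – the rates, class multipliers, and function names below are hypothetical and not drawn from any cited scheme – the three concepts above differ mainly in what is metered: elapsed time, a passage event, or distance driven.

```python
# Hypothetical sketch of the three charging concepts described above.
# None of the rates, multipliers, or names are taken from a real tolling scheme.

TARIFF_CLASSES = {"car": 1.0, "van": 1.5, "truck": 2.5}  # per-class multipliers


def time_based_charge(days: int, daily_rate: float = 8.0) -> float:
    """Vignette-style charge: pay for a period of access, regardless of distance."""
    return days * daily_rate


def infrastructure_toll(vehicle_class: str, base_toll: float = 4.0) -> float:
    """Classic toll: a fixed fee each time a vehicle passes a tolling station."""
    return base_toll * TARIFF_CLASSES[vehicle_class]


def distance_charge(km_driven: float, vehicle_class: str = "car",
                    rate_per_km: float = 0.15) -> float:
    """Distance/area charge: pay per kilometre driven inside the charged network."""
    return km_driven * rate_per_km * TARIFF_CLASSES[vehicle_class]


print(time_based_charge(days=10))                  # 80.0
print(infrastructure_toll("truck"))                # 10.0
print(distance_charge(120, vehicle_class="van"))   # 27.0
```

In practice schemes are often mixed; a country may, for example, apply a time-based vignette to cars while charging heavy goods vehicles by distance.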
Some toll roads charge a toll in only one direction. Examples include the Sydney Harbour Bridge, Sydney Harbour Tunnel and Eastern Distributor (these all charge tolls city-bound) in Australia; the Severn Bridges, where the M4 and M48 in Great Britain cross the River Severn; and, in the United States, crossings between Pennsylvania and New Jersey operated by the Delaware River Port Authority and crossings between New Jersey and New York operated by the Port Authority of New York and New Jersey. This technique is practical where the detour to avoid the toll is large or the toll differences are small.
Traditionally tolls were paid by hand at a toll gate. Although payments may still be made in cash, it is more common now to pay by credit card, by pre-paid card,[citation needed] or by an electronic toll collection system. In some places, payment is made using stickers which are affixed to the windscreen.
Three systems of toll roads exist: open (with mainline barrier toll plazas); closed (with entry/exit tolls) and open road (no toll booths, only electronic toll collection gantries at entrances and exits, or at strategic locations on the mainline of the road). Modern toll roads often use a combination of the three, with various entry and exit tolls supplemented by occasional mainline tolls: for example the Pennsylvania Turnpike and the New York State Thruway implement both systems in different sections.
On an open toll system, all vehicles stop at various locations along the highway to pay a toll. (Not to be confused with "open road tolling", where no vehicles stop to pay toll.) While this may save money from the lack of need to construct toll booths at every exit, it can cause traffic congestion while traffic queues at the mainline toll plazas (toll barriers). It is also possible for motorists to enter an 'open toll road' after one toll barrier and exit before the next one, thus travelling on the toll road toll-free. Most open toll roads have ramp tolls or partial access junctions to prevent this practice, known in the U.S. as "shunpiking".
With a closed system, vehicles collect a ticket when entering the highway. In some cases, the ticket displays the toll to be paid on exit. Upon exit, the driver must pay the amount listed for the given exit. Should the ticket be lost, a driver must typically pay the maximum amount possible for travel on that highway. Short toll roads with no intermediate entries or exits may have only one toll plaza at one end, with motorists traveling in either direction paying a flat fee either when they enter or when they exit the toll road. In a variant of the closed toll system, mainline barriers are present at the two endpoints of the toll road, and each interchange has a ramp toll that is paid upon exit or entry. In this case, a motorist pays a flat fee at the ramp toll and another flat fee at the end of the toll road; no ticket is necessary. In addition, with most systems, motorists may pay tolls only with cash and/or change; debit and credit cards are not accepted. However, some toll roads may have travel plazas with ATMs so motorists can stop and withdraw cash for the tolls.
The toll is calculated by the distance travelled on the toll road or the specific exit chosen. In the United States, for instance, the Kansas Turnpike, Ohio Turnpike, Pennsylvania Turnpike, New Jersey Turnpike, most of the Indiana Toll Road, New York State Thruway, and Florida's Turnpike currently implement closed systems.
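To make the closed-system mechanics concrete, the sketch below treats the ticket as a key into an entry/exit fare table, with the lost-ticket rule falling back to the maximum fare; the plaza names and fares are invented for illustration and do not come from any real turnpike.

```python
from typing import Optional

# Hypothetical entry/exit fare table for a closed (ticket) toll system.
# Plaza names and amounts are invented for illustration only.
FARES = {
    ("Plaza 1", "Plaza 2"): 1.50,
    ("Plaza 1", "Plaza 3"): 3.25,
    ("Plaza 2", "Plaza 3"): 1.75,
}
MAX_FARE = max(FARES.values())  # charged when no ticket can be produced


def toll_due(entry: Optional[str], exit_plaza: str) -> float:
    """Toll owed at the exit plaza for a ticket issued at `entry`.

    A lost ticket (entry is None) is charged the maximum possible fare,
    mirroring the rule described in the text above.
    """
    if entry is None:
        return MAX_FARE
    # The table is symmetric: the same fare applies in either direction.
    key = (entry, exit_plaza)
    if key not in FARES:
        key = (exit_plaza, entry)
    return FARES[key]


print(toll_due("Plaza 1", "Plaza 3"))  # 3.25
print(toll_due(None, "Plaza 2"))       # 3.25 (lost ticket -> maximum fare)
```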
The Union Toll Plaza on the Garden State Parkway was the first ever to use an automated toll collection machine. A plaque commemorating the event includes the first quarter collected at its toll booths.[12]
The first major deployment of an RFID electronic toll collection system in the United States was on the Dallas North Tollway in 1989 by Amtech (see TollTag). The Amtech RFID technology used on the Dallas North Tollway was originally developed at Sandia Labs for use in tagging and tracking livestock. In the same year, the Telepass active transponder RFID system was introduced across Italy.
Highway 407 in the province of Ontario, Canada, has no toll booths, and instead reads a transponder mounted on the windshields of each vehicle using the road (the rear licence plates of vehicles lacking a transponder are photographed when they enter and exit the highway). This made the highway the first all-automated toll highway in the world. A bill is mailed monthly for usage of the 407. Lower charges are levied on frequent 407 users who carry electronic transponders in their vehicles. The approach has not been without controversy: In 2003 the 407 ETR settled[13] a class action with a refund to users.
Throughout most of the East Coast of the United States, E-ZPass (operated under the brand I-Pass in Illinois) is accepted on almost all toll roads. Similar systems include SunPass in Florida, FasTrak in California, Good to Go in Washington State, and ExpressToll in Colorado. The systems use a small radio transponder mounted in or on a customer's vehicle to deduct toll fares from a pre-paid account as the vehicle passes through the toll barrier. This reduces manpower at toll booths and increases traffic flow and fuel efficiency by reducing the need for complete stops to pay tolls at these locations.
By designing a tollgate specifically for electronic collection, it is possible to carry out open-road tolling, where the customer does not need to slow at all when passing through the tollgate. The U.S. state of Texas is testing a system on a stretch of Texas 121 that has no toll booths. Drivers without a TollTag have their license plate photographed automatically and the registered owner will receive a monthly bill, at a higher rate than those vehicles with TollTags.[14]
The first all-electric toll road in the eastern United States, the InterCounty Connector (Maryland Route 200) was partially opened to traffic in February 2011,[15] and the final segment was completed in November 2014.[16] The first section of another all-electronic toll road, the Triangle Expressway, opened at the beginning of 2012 in North Carolina.[17]
Some toll roads are managed under such systems as the Build-Operate-Transfer (BOT) system. Private companies build the roads and are given a limited franchise. Ownership is transferred to the government when the franchise expires. This type of arrangement is prevalent in Australia, Canada, Hong Kong, India, South Korea, Japan and the Philippines. The BOT system is a fairly new concept that is gaining ground in the United States, with California, Delaware, Florida, Illinois, Indiana, Mississippi,[18] Texas, and Virginia already building and operating toll roads under this scheme. Pennsylvania, Massachusetts, New Jersey, and Tennessee are also considering the BOT methodology for future highway projects.
The more traditional means of managing toll roads in the United States is through semi-autonomous public authorities. Kansas, Maryland, Massachusetts, New Hampshire, New Jersey, New York, North Carolina, Ohio, Oklahoma, Pennsylvania, and West Virginia manage their toll roads in this manner. While most of the toll roads in California, Delaware, Florida, Texas, and Virginia are operating under the BOT arrangement, a few of the older toll roads in these states are still operated by public authorities.
In France, all toll roads are operated by private companies, and the government takes a part of their profit.[citation needed]
Toll roads have been criticized as being inefficient in various ways:[19]
A number of additional criticisms are also directed at toll roads in general:
When were subs first used in english football?
as early as the 1850s🚨In association football, a substitute is a player who is brought on to the pitch during a match in exchange for an existing player. Substitutions are generally made to replace a player who has become tired or injured, or who is performing poorly, or for tactical reasons (such as bringing a striker on in place of a defender). Unlike some sports (such as American football or ice hockey), a player who has been substituted during a match may take no further part in it.
Most competitions only allow each team to make a maximum of three substitutions during a game, although more substitutions are often permitted in non-competitive fixtures such as friendlies. Allowing a fourth substitution in extra time is currently being trialed at several tournaments, including the 2016 Summer Olympic Games, the 2017 FIFA Confederations Cup and the 2017 CONCACAF Gold Cup final.[1][2][3][4][5] Each team nominates a number of players (typically between five and seven, depending on the competition) who may be used as substitutes; these players typically sit in the technical area with the coaches, and are said to be "on the bench". When the substitute enters the field of play it is said they have come on or have been brought on, while the player they are substituting is coming off or being brought off.
A player who is noted for frequently making appearances, or scoring important goals, as a substitute is often informally known as a "super sub".
The origin of football substitutes goes back to at least the early 1860s as part of English public school football games. The original use of the term "substitute" in football was to describe the replacement of players who failed to turn up for matches. For example, in 1863, a match report states: "The Charterhouse eleven played a match in cloisters against some old Carthusians but in consequence of the non-appearance of some of those who were expected it was necessary to provide three substitutes."[6] The substitution of absent players happened as early as the 1850s, for example at Eton College, where the term "emergencies" was used.[7] Numerous references to players acting as a "substitute" occur in matches in the mid-1860s,[8] where it is not indicated whether these were replacements of absent players or of players injured during the match.
The first use of a substitute in international football was on 15 April 1889, in the match between Wales and Scotland at Wrexham. Wales's original goalkeeper, Jim Trainer, failed to arrive; local amateur player Alf Pugh started the match and played for some 20 minutes until the arrival of Sam Gillam, who took over from him.[9]
Substitution during games was first permitted in 1958.[10] (Although as early as the qualifying phase for the 1954 World Cup, Horst Eckel of Germany is recorded as having been replaced by Richard Gottinger in their match with the Saarland on 11 October 1953.)[11] The use of substitutes in World Cup Finals matches was not allowed until the 1970 tournament.[12]
The number of substitutes usable in a competitive match has increased from zero – meaning teams were reduced in numbers if injuries prevented players from playing on – to one (plus another for an injured goalkeeper) in 1958; to two out of a possible five in 1988;[13] to two plus one (injured goalkeeper) in 1994;[14] to three in 1995;[15][16] and most recently to a fourth substitute in certain competitions in extra time.[17]
Substitutions during matches in the English Football League were first permitted in the 1965–66 season. During the first two seasons after the law was introduced, each side was permitted only one substitution during a game. Moreover, the substitute could only replace an injured player. From the 1967–68 season, this rule was relaxed to allow substitutions for tactical reasons.[18]
On 21 August 1965, Keith Peacock of Charlton Athletic became the first substitute used in the Football League when he replaced injured goalkeeper Mike Rose eleven minutes into their away match against Bolton Wanderers.[19] On the same day, Bobby Knox became the first ever substitute to score a goal when he scored for Barrow against Wrexham.[20]
Archie Gemmill of St Mirren was the first substitute to come on in a Scottish first-class match, on 13 August 1966 in a League Cup tie against Clyde when he replaced Jim Clunie after 23 minutes.[18]
The first official substitute in a Scottish League match was Paul Conn for Queen's Park vs Albion Rovers in a Division 2 match on 24 August 1966. Previously, on 20 January 1917, a player called Morgan came on for the injured Morrison of Partick Thistle after 5 minutes against Rangers at Firhill, but this was an isolated case and the Scottish League did not authorise substitutes until 1966.[18]
In later years, the number of substitutes permitted in Football League matches has gradually increased; at present each team is permitted to name either five or seven substitutes depending on the country and competition, of which a maximum of three may be used. In England, the Premier League increased the number of players on the bench to five in 1996, and it was announced that the number available on the bench would be seven for the 2008–09 season.[21]
According to the Laws of the Game (2014/15):[22]
A player may only be substituted during a stoppage in play and with the permission of the referee. The player to be substituted (outgoing player) must have left the field of play before the substitute (incoming player) may enter the field of play; at that point the substitute becomes a player and the person substituted ceases to be a player. The incoming player may only enter the field at the half-way line. Failure to comply with these provisions may be punished by a caution (yellow card).
A player who has been substituted may take no further part in the match, except in competitions where return substitutions are permitted.
Unused substitutes still on the bench, as well as players who have already been substituted, remain under the authority of the referee. They are liable to be cautioned or sent off for misconduct, though they cannot be said to have committed a foul. For example, in the 2002 FIFA World Cup, Claudio Caniggia was shown a red card for cursing at the referee from the bench.
Under the Laws, the referee has no specific power to force a player to be substituted, even if the team manager or captain has ordered the player off. If a player refuses to be substituted, play may simply resume with that player on the field. However, in some situations the player may still be cautioned (yellow card) for time wasting or unsporting behaviour.
A player who has been sent off (red card) may not be substituted; the team must make do with its remaining players. When a goalkeeper is sent off, the coach will usually (but is not required to) substitute an outfield player so that a backup goalkeeper can enter the game; this happened in the 2006 UEFA Champions League Final, when Arsenal midfielder Robert Pirès was withdrawn so that second-choice goalkeeper Manuel Almunia could replace Jens Lehmann, who had received a red card less than 20 minutes into the match. If all substitutions have been used, or if no goalkeeper is available, an outfield player must take up the role of the goalkeeper. A famous example is when Chelsea goalkeepers Petr Čech and Carlo Cudicini were both injured in the same game, leaving defender John Terry to spend the remainder of the match in goal wearing third-choice goalkeeper Hilário's shirt.[23]
According to the Laws of the Game, "up to a maximum of three substitutes may be used in any match played in an official competition organised under the auspices of FIFA, the confederations or the member associations."
The term "super-sub" refers to a substitution made by the manager that subsequently saves the game, generally by scoring a late equalising or winning goal. Players regarded as "super-subs" include Azar Karadas for Brann, Santiago Solari for Real Madrid, Nwankwo Kanu for Arsenal, David Fairclough for Liverpool,[24] Adam Le Fondre for Reading,[25] Ole Gunnar Solskj?r and Javier Hernndez for Manchester United,[26][27] Mikael Forssell for Chelsea,[28] Leon Clarke for Wigan Athletic,[29] Brendon Santalab for Western Sydney Wanderers[30] Henrique for Brisbane Roar,[31] Archie Thompson, Joshua Kennedy and Tim Cahill for Australia.[32][33][34][35][36][37][38] and Stevie Kirk for Motherwell,[39]