What is PANDAS?
a hypothesis that there exists a subset of children with rapid onset of obsessive-compulsive disorder (OCD) or tic disorders and these symptoms are caused by group A beta-hemolytic streptococcal (GABHS) infections🚨
Pediatric autoimmune neuropsychiatric disorders associated with streptococcal infections (PANDAS) is a hypothesis that there exists a subset of children with rapid onset of obsessive-compulsive disorder (OCD) or tic disorders and these symptoms are caused by group A beta-hemolytic streptococcal (GABHS) infections.[1] The proposed link between infection and these disorders is that an initial autoimmune reaction to a GABHS infection produces antibodies that interfere with basal ganglia function, causing symptom exacerbations. It has been proposed that this autoimmune response can result in a broad range of neuropsychiatric symptoms.[2][3]
The PANDAS hypothesis was based on observations in clinical case studies at the US National Institutes of Health and in subsequent clinical trials where children appeared to have dramatic and sudden OCD exacerbations and tic disorders following infections.[4] There is supportive evidence for the link between streptococcus infection and onset in some cases of OCD and tics, but proof of causality has remained elusive.[5][6][7] The PANDAS hypothesis is controversial; whether it is a distinct entity differing from other cases of Tourette syndrome (TS)/OCD is debated.[3][8][9][10][11]
In addition to an OCD or tic disorder diagnosis, children may have other symptoms associated with exacerbations such as emotional lability, enuresis, anxiety, and deterioration in handwriting.[1] In the PANDAS model, this abrupt onset is thought to be preceded by a strep throat infection. As the clinical spectrum of PANDAS appears to resemble that of Tourette's syndrome, some researchers hypothesized that PANDAS and Tourette's may be associated; this idea is controversial and a focus for current research.[3][8][9][10][11]
The PANDAS diagnosis and the hypothesis that symptoms in this subgroup of patients are caused by infection are controversial.[1][3][9][12][13][14]
Whether the group of patients diagnosed with PANDAS have developed tics and OCD through a different mechanism (pathophysiology) than seen in other people diagnosed with Tourette syndrome is unclear.[10][11][12][15] Researchers are pursuing the hypothesis that the mechanism is similar to that of rheumatic fever, an autoimmune disorder triggered by streptococcal infections, where antibodies attack the brain and cause neuropsychiatric conditions.[1]
The molecular mimicry hypothesis is a proposed mechanism for PANDAS:[16] this hypothesis is that antigens on the cell wall of the streptococcal bacteria are similar in some way to the proteins of the heart valve, joints, or brain. Because the antibodies set off an immune reaction which damages those tissues, the child with rheumatic fever can get heart disease (especially mitral valve regurgitation), arthritis, and/or abnormal movements known as Sydenham's chorea or "St. Vitus' Dance".[17] In a typical bacterial infection, the body produces antibodies against the invading bacteria, and the antibodies help eliminate the bacteria from the body. In some rheumatic fever patients, autoantibodies may attack heart tissue, leading to carditis, or cross-react with joints, leading to arthritis.[16] In PANDAS, it is believed that tics and OCD are produced in a similar manner. One part of the brain that may be affected in PANDAS is the basal ganglia, which is believed to be responsible for movement and behavior. It is thought that similar to Sydenham's chorea, the antibodies cross-react with neuronal brain tissue in the basal ganglia to cause the tics and OCD that characterize PANDAS.[1][18] Studies neither disprove nor support this hypothesis: the strongest supportive evidence comes from a controlled study of 144 children (Mell et al, 2005), but prospective longitudinal studies have not produced conclusive results.[12]
According to Lombroso and Scahill, 2008, "(f)ive diagnostic criteria were proposed for PANDAS: (1) the presence of a tic disorder and/or OCD consistent with DSM-IV; (2) prepubertal onset of neuropsychiatric symptoms; (3) a history of a sudden onset of symptoms and/or an episodic course with abrupt symptom exacerbation interspersed with periods of partial or complete remission; (4) evidence of a temporal association between onset or exacerbation of symptoms and a prior streptococcal infection; and (5) adventitious movements (e.g., motoric hyperactivity and choreiform movements) during symptom exacerbation".[16] The children, originally described by Swedo et al in 1998, usually have dramatic, "overnight" onset of symptoms, including motor or vocal tics, obsessions, and/or compulsions.[19] Some studies have supported acute exacerbations associated with streptococcal infections among clinically defined PANDAS subjects (Murphy and Pichichero, 2002; Giulino et al, 2002); others have not (Luo et al, 2004; Perrin et al, 2004).[1]
Concerns have been raised that PANDAS may be overdiagnosed, as a significant number of patients diagnosed with PANDAS by community physicians did not meet the criteria when examined by specialists, suggesting the PANDAS diagnosis is conferred by community physicians without conclusive evidence.[12][20]
PANDAS is hypothesized to be an autoimmune disorder that results in a variable combination of tics, obsessions, compulsions, and other symptoms that may be severe enough to qualify for diagnoses such as chronic tic disorder, OCD, and Tourette syndrome (TS or TD). The cause is thought to be akin to that of Sydenham's chorea, which is known to result from childhood Group A streptococcal (GAS) infection leading to the autoimmune disorder acute rheumatic fever of which Sydenham's is one manifestation. Like Sydenham's, PANDAS is thought to involve autoimmunity to the brain's basal ganglia. Unlike Sydenham's, PANDAS is not associated with other manifestations of acute rheumatic fever, such as inflammation of the heart.[4]
Pichichero notes that PANDAS has not been validated as a disease classification, for several reasons. Its proposed age of onset and clinical features reflect a particular group of patients chosen for research studies, with no systematic studies of the possible relationship of GAS to other neurologic symptoms. There is controversy over whether its symptom of choreiform movements is distinct from the similar movements of Sydenham's. It is not known whether the pattern of abrupt onset is specific to PANDAS. Finally, there is controversy over whether there is a temporal relationship between GAS infections and PANDAS symptoms.[4]
To establish that a disorder is an autoimmune disorder, the Witebsky criteria require direct evidence of disease transfer by pathogenic antibody or pathogenic T cells, indirect evidence based on reproduction of the autoimmune disease in experimental animals, and circumstantial evidence from clinical clues.
In addition, to show that a microorganism causes the disorder, the Koch postulates would require one to show that the organism is present in all cases of the disorder, that the organism can be extracted from those with the disorder and be cultured, that transferring the organism into healthy subjects causes the disorder, and that the organism can be reisolated from the infected party.[21] Giovannoni notes that the Koch postulates cannot be used in the case of postinfection disorders (such as PANDAS and SC) because the organism may no longer be present when symptoms emerge, multiple organisms may cause the symptoms, and the symptoms may be a rare reaction to a common pathogen.[21]
Treatment for children suspected of PANDAS is generally the same as the standard treatments for TS and OCD.[2][5][15] These include cognitive behavioral therapy and medications to treat OCD such as selective serotonin reuptake inhibitors (SSRIs);[5][15] and "conventional therapy for tics".[2]
A controlled study (Garvey, Perlmutter, et al, 1999) of prophylactic antibiotic treatment of 37 children found that penicillin V did not prevent GABHS infections or exacerbation of other symptoms; however, compliance was an issue in this study. A later study (Snider, Lougee, et al, 2005) found that penicillin and azithromycin decreased infections and symptom exacerbation. The sample size, controls, and methodology of that study were criticized.[1] Murphy, Kurlan and Leckman (2010) say, "The use of prophylactic antibiotics to treat PANDAS has become widespread in the community, although the evidence supporting their use is equivocal. The safety and efficacy of antibiotic therapy for patients meeting the PANDAS criteria needs to be determined in carefully designed trials";[5] de Oliveira and Pelajo (2009) say that because most studies to date have "methodologic issues, including small sample size, retrospective reports of the baseline year, and lack of an adequate placebo arm ... it is recommended to treat these patients only with conventional therapy".[2]
Evidence is insufficient to determine if tonsillectomy is effective.[2]
Prophylactic antibiotic treatments for tics and OCD are experimental[6] and controversial;[12] overdiagnosis of PANDAS may have led to overuse of antibiotics to treat tics or OCD in the absence of active infection.[12]
A single study of PANDAS patients showed efficacy of immunomodulatory therapy (intravenous immunoglobulin (IVIG) or plasma exchange) on symptoms,[1] but these results had not been replicated by independent studies as of 2010.[16][12] Kalra and Swedo wrote in 2009, "Because IVIG and plasma exchange both carry a substantial risk of adverse effects, use of these modalities should be reserved for children with particularly severe symptoms and a clear-cut PANDAS presentation".[15] The US National Institutes of Health and American Academy of Neurology 2011 guidelines say there is "inadequate data to determine the efficacy of plasmapheresis in the treatment of acute OCD and tic symptoms in the setting of PANDAS" and "insufficient evidence to support or refute the use of plasmapheresis in the treatment of acute OCD and tic symptoms in the setting of PANDAS", adding that the investigators in the only study of plasmapheresis were not blinded to the results.[8] The Medical Advisory Board of the Tourette Syndrome Association said in 2006 that experimental treatments based on the autoimmune theory such as IVIG or plasma exchange should not be undertaken outside of formal clinical trials.[22] The American Heart Association's 2009 guidelines state that, as PANDAS is an unproven hypothesis and well-controlled studies are not yet available, they do "not recommend routine laboratory testing for GAS to diagnose, long-term antistreptococcal prophylaxis to prevent, or immunoregulatory therapy (e.g., intravenous immunoglobulin, plasma exchange) to treat exacerbations of this disorder".[23]
The debate surrounding the PANDAS hypothesis has societal implications; the media and the Internet have played a role in the PANDAS controversy.[5][24] Swerdlow (2005) summarized the societal implications of the hypothesis, and the role of the Internet in the controversy surrounding the PANDAS hypothesis:
... perhaps the most controversial putative TS trigger is exposure to streptococcal infections. The ubiquity of strep throats, the tremendous societal implications of over-treatment (e.g., antibiotic resistance or immunosuppressant side effects) versus medical implications of under-treatment (e.g., potentially irreversible autoimmune neurologic injury) are serious matters. With the level of desperation among Internet-armed parents, this controversy has sparked contentious disagreements, too often lacking both objectivity and civility.[24]
Murphy, Kurlan and Leckman (2010) also discussed the influence of the media and the Internet in a paper that proposed a "way forward":
The potential link between common childhood infections and lifelong neuropsychiatric disorders is among the most tantalizing and clinically relevant concepts in modern neuroscience ... The link may be most relevant in this group of disorders collectively described as PANDAS. Of concern, public awareness has outpaced our scientific knowledge base, with multiple magazine and newspaper articles and Internet chat rooms calling this issue to the public's attention. Compared with ~200 reports listed on Medline, many involving a single patient, and others reporting the same patients in different papers, with most of these reporting on subjects who do not meet the current PANDAS criteria, there are over 100,000 sites on the Internet where the possible Streptococcus-OCD-TD relationship is discussed. This gap between public interest in PANDAS and conclusive evidence supporting this link calls for increased scientific attention to the relationship between GAS and OCD/tics, particularly examining basic underlying cellular and immune mechanisms.[5]
Susan Swedo first described the entity in 1998.[19] In 2008, Lombroso and Scahill described five diagnostic criteria for PANDAS.[16] Revised criteria and guidelines for PANDAS were established by the NIMH in 2012 and updated in 2017.[25][26]
In a 2010 paper calling for "a way forward", Murphy, Kurlan and Leckman said: "It is time for the National Institutes of Health, in combination with advocacy and professional organizations, to convene a panel of experts not to debate the current data, but to chart a way forward. For now we have only to offer our standard therapies in treating OCD and tics, but one day we may have evidence that also allows us to add antibiotics or other immune-specific treatments to our armamentarium."[5] A 2011 paper by Singer proposed a new, "broader concept of childhood acute neuropsychiatric symptoms (CANS)", removing some of the PANDAS criteria in favor of requiring only acute onset. Singer said there were "numerous causes for CANS", which was proposed because of the "inconclusive and conflicting scientific support" for PANDAS, including "strong evidence suggesting the absence of an important role for GABHS, a failure to apply published [PANDAS] criteria, and a lack of scientific support for proposed therapies".[27]
What is stored in RAM in a computer?
data and machine code currently being used🚨Random-access memory (RAM /ræm/) is a form of computer data storage which stores data and machine code currently being used. A random-access memory device allows data items to be read or written in almost the same amount of time irrespective of the physical location of data inside the memory. In contrast, with other direct-access data storage media such as hard disks, CD-RWs, DVD-RWs and the older magnetic tapes and drum memory, the time required to read and write data items varies significantly depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement.
RAM contains multiplexing and demultiplexing circuitry, to connect the data lines to the addressed storage for reading or writing the entry. Usually more than one bit of storage is accessed by the same address, and RAM devices often have multiple data lines and are said to be '8-bit' or '16-bit' etc. devices.
In today's technology, random-access memory takes the form of integrated circuits. RAM is normally associated with volatile types of memory (such as DRAM modules), where stored information is lost if power is removed, although non-volatile RAM has also been developed.[1] Other types of non-volatile memories exist that allow random access for read operations, but either do not allow write operations or have other kinds of limitations on them. These include most types of ROM and a type of flash memory called NOR-Flash.
Integrated-circuit RAM chips came into the market in the early 1970s, with the first commercially available DRAM chip, the Intel 1103, introduced in October 1970.[2]
Early computers used relays, mechanical counters[3] or delay lines for main memory functions. Ultrasonic delay lines could only reproduce data in the order it was written. Drum memory could be expanded at relatively low cost but efficient retrieval of memory items required knowledge of the physical layout of the drum to optimize speed. Latches built out of vacuum tube triodes, and later, out of discrete transistors, were used for smaller and faster memories such as registers. Such registers were relatively large and too costly to use for large amounts of data; generally only a few dozen or few hundred bits of such memory could be provided.
The first practical form of random-access memory was the Williams tube starting in 1947. It stored data as electrically charged spots on the face of a cathode ray tube. Since the electron beam of the CRT could read and write the spots on the tube in any order, memory was random access. The capacity of the Williams tube was a few hundred to around a thousand bits, but it was much smaller, faster, and more power-efficient than using individual vacuum tube latches. Developed at the University of Manchester in England, the Williams tube provided the medium on which the first electronically stored-memory program was implemented in the Manchester Small-Scale Experimental Machine (SSEM) computer, which first successfully ran a program on 21 June 1948.[4] In fact, rather than the Williams tube memory being designed for the SSEM, the SSEM was a testbed to demonstrate the reliability of the memory.[5][6]
Magnetic-core memory was invented in 1947 and developed up until the mid-1970s. It became a widespread form of random-access memory, relying on an array of magnetized rings. By changing the sense of each ring's magnetization, data could be stored with one bit stored per ring. Since every ring had a combination of address wires to select and read or write it, access to any memory location in any sequence was possible.
Magnetic core memory was the standard form of memory system until displaced by solid-state memory in integrated circuits, starting in the early 1970s. Dynamic random-access memory (DRAM) allowed replacement of a 4 or 6-transistor latch circuit by a single transistor for each memory bit, greatly increasing memory density at the cost of volatility. Data was stored in the tiny capacitance of each transistor, and had to be periodically refreshed every few milliseconds before the charge could leak away. The Toshiba Toscal BC-1411 electronic calculator, which was introduced in 1965,[7][8] used a form of DRAM built from discrete components.[8] DRAM was then developed by Robert H. Dennard in 1968.
Prior to the development of integrated read-only memory (ROM) circuits, permanent (or read-only) random-access memory was often constructed using diode matrices driven by address decoders, or specially wound core rope memory planes.[citation needed]
The two widely used forms of modern RAM are static RAM (SRAM) and dynamic RAM (DRAM). In SRAM, a bit of data is stored using the state of a six transistor memory cell. This form of RAM is more expensive to produce, but is generally faster and requires less dynamic power than DRAM. In modern computers, SRAM is often used as cache memory for the CPU. DRAM stores a bit of data using a transistor and capacitor pair, which together comprise a DRAM cell. The capacitor holds a high or low charge (1 or 0, respectively), and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. As this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory used in modern computers.
Both static and dynamic RAM are considered volatile, as their state is lost or reset when power is removed from the system. By contrast, read-only memory (ROM) stores data by permanently enabling or disabling selected transistors, such that the memory cannot be altered. Writeable variants of ROM (such as EEPROM and flash memory) share properties of both ROM and RAM, enabling data to persist without power and to be updated without requiring special equipment. These persistent forms of semiconductor ROM include USB flash drives, memory cards for cameras and portable devices, and solid-state drives. ECC memory (which can be either SRAM or DRAM) includes special circuitry to detect and/or correct random faults (memory errors) in the stored data, using parity bits or error correction codes.
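The parity-bit scheme mentioned above can be sketched in a few lines. This is a toy illustration, not a real ECC implementation: a single even-parity bit can detect any single-bit fault but cannot correct it or locate it; real ECC memory uses Hamming-style codes for correction.

```python
def parity_bit(bits):
    """Even-parity bit: 1 if the data bits contain an odd number of 1s."""
    return sum(bits) % 2

def check(word):
    """True if a word (data bits + parity bit) passes the even-parity check."""
    return sum(word) % 2 == 0

# Store a byte together with its parity bit.
data = [1, 0, 1, 1, 0, 0, 1, 0]
stored = data + [parity_bit(data)]

# A random fault flips one bit in storage.
corrupted = stored.copy()
corrupted[3] ^= 1

print(check(stored))     # True: word is consistent
print(check(corrupted))  # False: single-bit error detected
```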
In general, the term RAM refers solely to solid-state memory devices (either DRAM or SRAM), and more specifically the main memory in most computers. In optical storage, the term DVD-RAM is somewhat of a misnomer since, unlike CD-RW or DVD-RW, it does not need to be erased before reuse. Nevertheless, a DVD-RAM behaves much like a hard disk drive, if somewhat slower.
The memory cell is the fundamental building block of computer memory. The memory cell is an electronic circuit that stores one bit of binary information and it must be set to store a logic 1 (high voltage level) and reset to store a logic 0 (low voltage level). Its value is maintained/stored until it is changed by the set/reset process. The value in the memory cell can be accessed by reading it.
In SRAM, the memory cell is a type of flip-flop circuit, usually implemented using FETs. This means that SRAM requires very low power when not being accessed, but it is expensive and has low storage density.
A second type, DRAM, is based around a capacitor. Charging and discharging this capacitor can store a '1' or a '0' in the cell. However, this capacitor will slowly leak away, and must be refreshed periodically. Because of this refresh process, DRAM uses more power, but it can achieve greater storage densities and lower unit costs compared to SRAM.
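The leak-and-refresh cycle described above can be modeled with a toy simulation. The leak rate and read threshold below are invented for illustration and are not real device parameters; real DRAM refreshes every few milliseconds, not on an abstract "tick".

```python
LEAK = 0.7        # fraction of charge remaining after each tick (assumed)
THRESHOLD = 0.5   # below this, a stored '1' is no longer readable (assumed)

class DramCell:
    """Toy model of one DRAM cell: a capacitor whose charge decays."""
    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0

    def tick(self):
        self.charge *= LEAK            # charge leaks away over time

    def read(self):
        return 1 if self.charge > THRESHOLD else 0

    def refresh(self):
        self.write(self.read())        # rewrite the bit at full charge

cell = DramCell()
cell.write(1)
cell.tick()                # charge 0.7, still reads as 1
cell.refresh()             # restored to full charge
cell.tick(); cell.tick()   # charge ~0.49: without refresh, the bit is lost
print(cell.read())         # 0
```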
To be useful, memory cells must be readable and writeable. Within the RAM device, multiplexing and demultiplexing circuitry is used to select memory cells. Typically, a RAM device has a set of address lines A0... An, and for each combination of bits that may be applied to these lines, a set of memory cells are activated. Due to this addressing, RAM devices virtually always have a memory capacity that is a power of two.
Usually several memory cells share the same address. For example, a 4-bit 'wide' RAM chip has 4 memory cells for each address. Often the width of the memory and that of the microprocessor are different; for a 32-bit microprocessor, eight 4-bit RAM chips would be needed.
Often more addresses are needed than can be provided by a single device. In that case, multiplexors external to the device are used to activate the correct device being accessed.
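The addressing arithmetic in the paragraphs above, where n address lines select one of 2^n locations and several narrow chips side by side provide the data width the CPU needs, can be sketched directly (the 10-address-line chip is an illustrative figure; the 32-bit/4-bit combination is the one from the text):

```python
def capacity(address_lines, width_bits):
    """Addressable words and total bits for a RAM device."""
    words = 2 ** address_lines       # every bit pattern on the address lines
    return words, words * width_bits

# A 4-bit-wide chip with 10 address lines:
words, bits = capacity(10, 4)
print(words)    # 1024 addresses (a power of two, as the text notes)
print(bits)     # 4096 bits of storage in total

# Building a 32-bit-wide memory from 4-bit-wide chips, as in the text:
chips_needed = 32 // 4
print(chips_needed)   # 8 chips sharing the same address lines
```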
One can read and over-write data in RAM. Many computer systems have a memory hierarchy consisting of processor registers, on-die SRAM caches, external caches, DRAM, paging systems and virtual memory or swap space on a hard drive. This entire pool of memory may be referred to as "RAM" by many developers, even though the various subsystems can have very different access times, violating the original concept behind the random access term in RAM. Even within a hierarchy level such as DRAM, the specific row, column, bank, rank, channel, or interleave organization of the components make the access time variable, although not to the extent that access time to rotating storage media or a tape is variable. The overall goal of using a memory hierarchy is to obtain the highest possible average access performance while minimizing the total cost of the entire memory system (generally, the memory hierarchy follows the access time with the fast CPU registers at the top and the slow hard drive at the bottom).
In many modern personal computers, the RAM comes in an easily upgraded form of modules called memory modules or DRAM modules about the size of a few sticks of chewing gum. These can quickly be replaced should they become damaged or when changing needs demand more storage capacity. As suggested above, smaller amounts of RAM (mostly SRAM) are also integrated in the CPU and other ICs on the motherboard, as well as in hard-drives, CD-ROMs, and several other parts of the computer system.
In addition to serving as temporary storage and working space for the operating system and applications, RAM is used in numerous other ways.
Most modern operating systems employ a method of extending RAM capacity, known as "virtual memory". A portion of the computer's hard drive is set aside for a paging file or a scratch partition, and the combination of physical RAM and the paging file form the system's total memory. (For example, if a computer has 2 GB of RAM and a 1 GB page file, the operating system has 3 GB total memory available to it.) When the system runs low on physical memory, it can "swap" portions of RAM to the paging file to make room for new data, as well as to read previously swapped information back into RAM. Excessive use of this mechanism results in thrashing and generally hampers overall system performance, mainly because hard drives are far slower than RAM.
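The arithmetic above, plus a tiny least-recently-used page-replacement model, can be sketched as follows. The 3-frame "physical RAM" and the LRU policy are simplifying assumptions for illustration; real operating systems use far more elaborate replacement policies. The point is only that a working set larger than physical memory forces repeated slow page-ins, which is the mechanism behind thrashing.

```python
from collections import OrderedDict

# Total memory = physical RAM + paging file, as in the example above.
RAM_GB, PAGEFILE_GB = 2, 1
total_gb = RAM_GB + PAGEFILE_GB      # 3 GB available to the OS

PHYSICAL_FRAMES = 3                  # pretend RAM holds only 3 pages (assumed)
resident = OrderedDict()             # resident pages, ordered by recency
swap_ins = 0

def touch(page):
    """Access a page, counting a slow page-in on every miss."""
    global swap_ins
    if page in resident:
        resident.move_to_end(page)        # hit: served from RAM
    else:
        swap_ins += 1                     # miss: fetch from the paging file
        if len(resident) >= PHYSICAL_FRAMES:
            resident.popitem(last=False)  # evict the least-recently-used page
        resident[page] = True

for p in [1, 2, 3, 1, 4, 2]:   # working set of 4 pages, only 3 frames
    touch(p)
print(total_gb, swap_ins)       # prints: 3 5
```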
Software can "partition" a portion of a computer's RAM, allowing it to act as a much faster hard drive that is called a RAM disk. A RAM disk loses the stored data when the computer is shut down, unless memory is arranged to have a standby battery source.
Sometimes, the contents of a relatively slow ROM chip are copied to read/write memory to allow for shorter access times. The ROM chip is then disabled while the initialized memory locations are switched in on the same block of addresses (often write-protected). This process, sometimes called shadowing, is fairly common in both computers and embedded systems.
As a common example, the BIOS in typical personal computers often has an option called "use shadow BIOS" or similar. When enabled, functions relying on data from the BIOS's ROM will instead use DRAM locations (most can also toggle shadowing of video card ROM or other ROM sections). Depending on the system, this may not result in increased performance, and may cause incompatibilities. For example, some hardware may be inaccessible to the operating system if shadow RAM is used. On some systems the benefit may be hypothetical because the BIOS is not used after booting in favor of direct hardware access. Free memory is reduced by the size of the shadowed ROMs.[9]
Several new types of non-volatile RAM, which will preserve data while powered down, are under development. The technologies used include carbon nanotubes and approaches utilizing tunnel magnetoresistance. Amongst the first-generation MRAM, a 128 KiB (128 × 2^10 bytes) chip was manufactured with 0.18 µm technology in the summer of 2003.[citation needed] In June 2004, Infineon Technologies unveiled a 16 MiB (16 × 2^20 bytes) prototype, again based on 0.18 µm technology. There are two second-generation techniques currently in development: thermal-assisted switching (TAS),[10] which is being developed by Crocus Technology, and spin-transfer torque (STT), on which Crocus, Hynix, IBM, and several other companies are working.[11] Nantero built a functioning carbon nanotube memory prototype 10 GiB (10 × 2^30 bytes) array in 2004. Whether some of these technologies will be able to eventually take a significant market share from either DRAM, SRAM, or flash-memory technology, however, remains to be seen.
Since 2006, "solid-state drives" (based on flash memory) with capacities exceeding 256 gigabytes and performance far exceeding traditional disks have become available. This development has started to blur the definition between traditional random-access memory and "disks", dramatically reducing the difference in performance.
Some kinds of random-access memory, such as "EcoRAM", are specifically designed for server farms, where low power consumption is more important than speed.[12]
The "memory wall" is the growing disparity of speed between CPU and memory outside the CPU chip. An important reason for this disparity is the limited communication bandwidth beyond chip boundaries, which is also referred to as bandwidth wall. From 1986 to 2000, CPU speed improved at an annual rate of 55% while memory speed only improved at 10%. Given these trends, it was expected that memory latency would become an overwhelming bottleneck in computer performance.[13]
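The compounding effect of the 55% versus 10% annual improvement rates quoted above is easy to reproduce; over the 1986 to 2000 period the relative gap grows by roughly two orders of magnitude:

```python
# Compound the annual improvement rates from the text over 1986-2000.
cpu, mem = 1.0, 1.0
for year in range(1986, 2000):   # 14 years of growth
    cpu *= 1.55                  # CPU speed: +55% per year
    mem *= 1.10                  # memory speed: +10% per year

# The ratio shows how far memory falls behind the CPU over the period.
print(round(cpu / mem))          # roughly a 120-fold gap
```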
CPU speed improvements slowed significantly partly due to major physical barriers and partly because current CPU designs have already hit the memory wall in some sense. Intel summarized these causes in a 2005 document.[14]
First of all, as chip geometries shrink and clock frequencies rise, the transistor leakage current increases, leading to excess power consumption and heat... Secondly, the advantages of higher clock speeds are in part negated by memory latency, since memory access times have not been able to keep pace with increasing clock frequencies. Third, for certain applications, traditional serial architectures are becoming less efficient as processors get faster (due to the so-called Von Neumann bottleneck), further undercutting any gains that frequency increases might otherwise buy. In addition, partly due to limitations in the means of producing inductance within solid state devices, resistance-capacitance (RC) delays in signal transmission are growing as feature sizes shrink, imposing an additional bottleneck that frequency increases don't address.
The RC delays in signal transmission were also noted in Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures which projects a maximum of 12.5% average annual CPU performance improvement between 2000 and 2014.
A different concept is the processor-memory performance gap, which can be addressed by 3D integrated circuits that reduce the distance between the logic and memory aspects that are further apart in a 2D chip.[15] Memory subsystem design requires a focus on the gap, which is widening over time.[16] The main method of bridging the gap is the use of caches: small amounts of high-speed memory near the processor that hold recently used data and instructions, speeding up their execution when they are accessed frequently. Multiple levels of caching have been developed to deal with the widening gap, and the performance of high-speed modern computers is reliant on evolving caching techniques.[17] These can prevent the loss of processor performance, since a cached access completes in far less time than a main-memory access.[18] There can be up to a 53% difference between the growth in processor speeds and the lagging speed of main memory access.[19]
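A minimal direct-mapped cache model (the 4-line size and the access pattern are invented for illustration) shows the mechanism: after a loop's first pass fills the cache, subsequent accesses to the same addresses are served from the fast memory rather than main memory.

```python
CACHE_LINES = 4                      # tiny direct-mapped cache (assumed size)
cache = {}                           # line index -> tag of the cached address
hits = misses = 0

def access(addr):
    """Look an address up in the cache, filling it on a miss."""
    global hits, misses
    line = addr % CACHE_LINES        # direct mapping: address mod line count
    tag = addr // CACHE_LINES        # rest of the address identifies the block
    if cache.get(line) == tag:
        hits += 1                    # fast path: served from cache
    else:
        misses += 1                  # slow path: main memory, then cache fill
        cache[line] = tag

# A loop that reuses three addresses, as frequently executed code does:
for addr in [0, 1, 2, 0, 1, 2, 0, 1, 2]:
    access(addr)
print(hits, misses)   # prints: 6 3 -- only the first pass misses
```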
In contrast, RAM can be as fast as 5766 MB/s vs 477 MB/s for an SSD.[20]
Who was the first Saturday Night Live host?
George Carlin🚨The first season of Saturday Night Live, an American sketch comedy series, originally aired in the United States on NBC from October 11, 1975 to July 31, 1976.
In 1974, NBC Tonight Show host Johnny Carson requested that the weekend broadcasts of "Best of Carson" (officially known as The Weekend Tonight Show Starring Johnny Carson) come to an end (back then, The Tonight Show was a 90-minute program), so that Carson could take two weeknights off and NBC would thus air those repeats on those nights rather than feed them to affiliates for broadcast on either Saturdays or Sundays. Given Carson's undisputed status as the king of late-night television, NBC heard his request as an ultimatum, fearing he might use the issue as grounds to defect to either ABC or CBS. To fill the gap, the network drew up some ideas and brought in Dick Ebersol, a protégé of legendary ABC Sports president Roone Arledge, to develop a 90-minute late-night variety show. Ebersol's first order of business was hiring a young Canadian producer named Lorne Michaels to be the show-runner.[1]
Television production in New York was already in decline in the mid-1970s (The Tonight Show had departed for Los Angeles two years prior), so NBC decided to base the show at their studios in Rockefeller Center to offset the overhead of maintaining those facilities. Michaels was given Studio 8H, a converted radio studio that prior to that point was most famous for having hosted Arturo Toscanini and the NBC Symphony Orchestra from 1937 to 1951, but was being used largely for network election coverage by the mid-1970s.[citation needed]
When the first show aired on October 11, 1975 with George Carlin as its host, it was called NBC's Saturday Night, because ABC featured a program at the same time titled Saturday Night Live with Howard Cosell. After ABC cancelled the Cosell program in 1976, the NBC program changed its name to Saturday Night Live on March 26, 1977 (and subsequently picked up Bill Murray from Cosell's show in 1977, as well). Each episode, Don Pardo introduced the cast, a job he would hold for 39 years until his death in 2014.
The original concept was for a comedy-variety show featuring young comedians, live musical performances, short films by Albert Brooks, and segments by Jim Henson featuring atypically adult and abstract characters from the Muppets world. Rather than have one permanent host, Michaels elected to have a different guest host each week. The first episode featured two musical guests (Billy Preston and Janis Ian), and the second episode, hosted by Paul Simon on October 18, was almost entirely a musical variety show with various acts. The Not Ready For Prime Time Players did not appear in this episode at all, other than as the bees with Simon telling them they were cancelled, and Chevy Chase in the opening and in "Weekend Update". Over the course of Season 1, sketch comedy would begin to dominate the show and SNL would more closely resemble its current format.
Andy Kaufman made several appearances that were popular with the audience over the season,[citation needed] while The Muppets' Land of Gorch bits were regarded as a poor fit with the rest of the show.[citation needed] The "Land Of Gorch" sketches were essentially cancelled after episode 10, although the associated Muppet characters still made sporadic appearances after that. After one final appearance at the start of season two, the Muppet characters were permanently dropped from SNL.
During the season, Michaels appeared on-camera twice, on April 24 and May 22, to make an offer to The Beatles to reunite on the show. In the first appearance, he offered a certified check for $3,000. In the second appearance, he increased his offer to $3,200 and free hotel accommodations. John Lennon and Paul McCartney later both admitted that they were watching SNL from Lennon's apartment on May 8, the episode after Michaels' first offer, and briefly toyed with actually going down to the studio, but decided to stay in the apartment because they were too tired.[2][3]
The first cast member hired was Gilda Radner.[4] The rest of the cast included fellow Second City alumni Dan Aykroyd and John Belushi, as well as National Lampoon "Lemmings" alumnus Chevy Chase (whose trademark became his usual falls and opening spiel that cued the show's opening), Jane Curtin, Laraine Newman, and Garrett Morris. The original head writer was Michael O'Donoghue, a writer at National Lampoon who had worked alongside several cast members while directing The National Lampoon Radio Hour. The original theme music was written by future Academy Award-winning composer Howard Shore, who, along with his band (occasionally billed as the "All Nurse Band" or "Band of Angels"[citation needed]), was the original band leader on the show. Paul Shaffer, who would go on to lead David Letterman's band on Late Night and then The Late Show, was also a band leader in the early years. George Coe was hired because NBC wanted to have an older person in the cast.[citation needed]
Much of the talent pool involved in the inaugural season was recruited from the National Lampoon Radio Hour, a nationally syndicated comedy series that often satirized current events.
This would be the only season for Coe and O'Donoghue as official cast members. While Coe was only billed in the premiere, he was seen in various small roles through the season before leaving the show altogether. O'Donoghue was credited through the Candice Bergen episode and would continue to work for the show as a writer, as well as an occasional featured performer (particularly as "Mr. Mike"), through season five.
The original writing staff included Anne Beatts, Chevy Chase, Tom Davis, Al Franken, Lorne Michaels, Marilyn Suzanne Miller, Michael O'Donoghue, Herb Sargent, Tom Schiller, Rosie Shuster and Alan Zweibel. The head writers were Lorne Michaels and Michael O'Donoghue.
Who was the first person to suggest a geocentric universe?
Anaximander🚨In astronomy, the geocentric model (also known as geocentrism, or the Ptolemaic system) is a superseded description of the universe with Earth at the center. Under the geocentric model, the Sun, Moon, stars, and planets all orbited Earth.[1] The geocentric model served as the predominant description of the cosmos in many ancient civilizations, such as those of Aristotle in Classical Greece and Ptolemy in Roman Egypt.
Two observations supported the idea that Earth was the center of the Universe. First, from the view on Earth, the Sun appears to revolve around Earth once per day. While the Moon and the planets have their own motions, they also appear to revolve around Earth about once per day. The stars appeared to be on a celestial sphere, rotating once each day along an axis through the north and south geographic poles of Earth.[2] Second, Earth does not seem to move from the perspective of an Earth-bound observer; it appears to be solid, stable, and unmoving.
Ancient Greek, ancient Roman, and medieval philosophers usually combined the geocentric model with a spherical Earth. It is not the same as the older flat-Earth model implied in some mythology.[n 1][n 2][5] The ancient Jewish Babylonian uranography pictured a flat Earth with a dome-shaped, rigid canopy called the firmament (rāqîaʿ) placed over it.[n 3][n 4][n 5][n 6][n 7][n 8] However, the ancient Greeks believed that the motions of the planets were circular and not elliptical, a view that was not challenged in Western culture until the 17th century, through the synthesis of the theories of Copernicus and Kepler.
The astronomical predictions of Ptolemy's geocentric model were used to prepare astrological and astronomical charts for over 1500 years. The geocentric model held sway into the early modern age, but from the late 16th century onward, it was gradually superseded by the heliocentric model of Copernicus, Galileo and Kepler. There was much resistance to the transition between these two theories. Christian theologians were reluctant to reject a theory that agreed with Bible passages (e.g. "Sun, stand you still upon Gibeon", Joshua 10:12). Others felt a new, unknown theory could not subvert an accepted consensus for geocentrism.
The geocentric model entered Greek astronomy and philosophy at an early point; it can be found in pre-Socratic philosophy. In the 6th century BC, Anaximander proposed a cosmology with Earth shaped like a section of a pillar (a cylinder), held aloft at the center of everything. The Sun, Moon, and planets were holes in invisible wheels surrounding Earth; through the holes, humans could see concealed fire. About the same time, Pythagoras thought that the Earth was a sphere (in accordance with observations of eclipses), but not at the center; he believed that it was in motion around an unseen fire. Later these views were combined, so most educated Greeks from the 4th century BC on thought that the Earth was a sphere at the center of the universe.[12]
In the 4th century BC, two influential Greek philosophers, Plato and his student Aristotle, wrote works based on the geocentric model. According to Plato, the Earth was a sphere, stationary at the center of the universe. The stars and planets were carried around the Earth on spheres or circles, arranged in the order (outwards from the center): Moon, Sun, Venus, Mercury, Mars, Jupiter, Saturn, fixed stars, with the fixed stars located on the celestial sphere. In his "Myth of Er", a section of the Republic, Plato describes the cosmos as the Spindle of Necessity, attended by the Sirens and turned by the three Fates. Eudoxus of Cnidus, who worked with Plato, developed a less mythical, more mathematical explanation of the planets' motion based on Plato's dictum stating that all phenomena in the heavens can be explained with uniform circular motion. Aristotle elaborated on Eudoxus' system.
In the fully developed Aristotelian system, the spherical Earth is at the center of the universe, and all other heavenly bodies are attached to 47 to 55 transparent, rotating spheres surrounding the Earth, all concentric with it. (The number is so high because several spheres are needed for each planet.) These spheres, known as crystalline spheres, all moved at different uniform speeds to create the revolution of bodies around the Earth. They were composed of an incorruptible substance called aether. Aristotle believed that the Moon was in the innermost sphere and therefore touches the realm of Earth, causing the dark spots (maculae) and the ability to go through lunar phases. He further described his system by explaining the natural tendencies of the terrestrial elements (earth, water, fire, and air), as well as of celestial aether. His system held that earth was the heaviest element, with the strongest movement towards the center, so water formed a layer surrounding the sphere of earth. The tendency of air and fire, on the other hand, was to move upwards, away from the center, with fire being lighter than air. Beyond the layer of fire were the solid spheres of aether in which the celestial bodies were embedded; the spheres themselves were also entirely composed of aether.
Adherence to the geocentric model stemmed largely from several important observations. First, if the Earth moved, then one ought to be able to observe the shifting of the fixed stars due to stellar parallax: the shapes of the constellations should change considerably over the course of a year. Since they did not appear to move, either the stars were much farther away than the Sun and the planets than previously conceived, making their motion undetectable, or they were not moving at all. Because the stars were actually much further away than Greek astronomers postulated (making their movement extremely subtle), stellar parallax was not detected until the 19th century. The Greeks therefore chose the simpler of the two explanations. Another observation used in favor of the geocentric model at the time was the apparent consistency of Venus' luminosity, which implies that it is usually about the same distance from Earth, a fact more consistent with geocentrism than heliocentrism. (In reality, the loss of light caused by Venus' phases compensates for the increase in apparent size caused by its varying distance from Earth.) Objectors to heliocentrism noted that terrestrial bodies naturally tend to come to rest as near as possible to the center of the Earth. Further, barring the opportunity to fall closer to the center, terrestrial bodies tend not to move unless forced by an outside object, or transformed to a different element by heat or moisture.
Atmospheric explanations for many phenomena were preferred because the Eudoxan-Aristotelian model based on perfectly concentric spheres was not intended to explain changes in the brightness of the planets due to a change in distance.[13] Eventually, perfectly concentric spheres were abandoned, as it was impossible to develop a sufficiently accurate model under that ideal. However, while providing for similar explanations, the later deferent and epicycle model was flexible enough to accommodate observations for many centuries.
Although the basic tenets of Greek geocentrism were established by the time of Aristotle, the details of his system did not become standard. The Ptolemaic system, developed by the Hellenistic astronomer Claudius Ptolemaeus in the 2nd century AD finally standardised geocentrism. His main astronomical work, the Almagest, was the culmination of centuries of work by Hellenic, Hellenistic and Babylonian astronomers. For over a millennium European and Islamic astronomers assumed it was the correct cosmological model. Because of its influence, people sometimes wrongly think the Ptolemaic system is identical with the geocentric model.
Ptolemy argued that the Earth was a sphere in the center of the universe, from the simple observation that half the stars were above the horizon and half were below the horizon at any time (stars on rotating stellar sphere), and the assumption that the stars were all at some modest distance from the center of the universe. If the Earth was substantially displaced from the center, this division into visible and invisible stars would not be equal.[n 9]
In the Ptolemaic system, each planet is moved by a system of two spheres: one called its deferent; the other, its epicycle. The deferent is a circle whose center point, called the eccentric and marked in the diagram with an X, is removed from the Earth. The original purpose of the eccentric was to account for the difference in length of the seasons (northern autumn was about five days shorter than spring during this time period) by placing the Earth away from the center of rotation of the rest of the universe. Another sphere, the epicycle, is embedded inside the deferent sphere and is represented by the smaller dotted line to the right. A given planet then moves around the epicycle at the same time the epicycle moves along the path marked by the deferent. These combined movements cause the given planet to move closer to and further away from the Earth at different points in its orbit, and explained the observation that planets slowed down, stopped, and moved backward in retrograde motion, and then again reversed to resume normal, or prograde, motion.
The deferent-and-epicycle model had been used by Greek astronomers for centuries, along with the idea of the eccentric (a deferent which is slightly off-center from the Earth), which was even older. In the illustration, the center of the deferent is not the Earth but the spot marked X, making it eccentric (from the Greek ἐκ ek- meaning "from" and κέντρον kentron meaning "center"), from which the spot takes its name. Unfortunately, the system that was available in Ptolemy's time did not quite match observations, even though it was considerably improved over Hipparchus' system. Most noticeably, the size of a planet's retrograde loop (especially that of Mars) would sometimes be smaller, and sometimes larger, than expected, resulting in positional errors of as much as 30 degrees. To alleviate the problem, Ptolemy developed the equant: a point near the center of a planet's orbit from which the center of the planet's epicycle would always appear to move at uniform speed, while from every other vantage point, including the Earth, the motion would appear non-uniform. By using an equant, Ptolemy claimed to keep motion which was uniform and circular, although it departed from the Platonic ideal of uniform circular motion. The resultant system, which eventually came to be widely accepted in the West, seems unwieldy to modern astronomers; each planet required an epicycle revolving on a deferent, offset by an equant which was different for each planet. It predicted various celestial motions, including the beginning and end of retrograde motion, to within a maximum error of 10 degrees, considerably better than without the equant.
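The deferent-and-epicycle construction can be sketched numerically. The sketch below uses made-up radii and angular speeds (not historical Ptolemaic parameters, and without the eccentric or equant) simply to show how two superimposed circular motions produce intervals where the planet's apparent longitude, seen from a central Earth, decreases: retrograde motion.

```python
import math

def ptolemaic_position(t, R_def, w_def, r_epi, w_epi):
    """Position of a planet carried on an epicycle whose center rides
    along a deferent circle centered on the Earth at the origin.
    All parameters are illustrative, not historical values."""
    # Center of the epicycle moves uniformly along the deferent.
    cx = R_def * math.cos(w_def * t)
    cy = R_def * math.sin(w_def * t)
    # The planet moves uniformly around the epicycle at its own rate.
    x = cx + r_epi * math.cos(w_epi * t)
    y = cy + r_epi * math.sin(w_epi * t)
    return x, y

def longitude(t):
    """Apparent longitude of the planet as seen from the Earth."""
    x, y = ptolemaic_position(t, R_def=10.0, w_def=1.0, r_epi=4.0, w_epi=8.0)
    return math.atan2(y, x)

# Near the inner point of the epicycle, the epicyclic velocity opposes
# the deferent velocity strongly enough that the apparent longitude
# decreases for a while: retrograde motion.
lons = [longitude(t / 100) for t in range(200)]
retrograde = any(b < a for a, b in zip(lons, lons[1:]))
print(retrograde)  # True: the model reproduces retrograde loops
```

Stacking further epicycles can approximate essentially any periodic apparent motion (much like a Fourier series), which is one reason the model could be tuned to match observations for so many centuries.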
The model with epicycles is in fact a very good model of an elliptical orbit with low eccentricity. The well-known ellipse shape does not appear to a noticeable extent when the eccentricity is less than 5%, but the offset distance of the "center" (in fact the focus occupied by the Sun) is very noticeable even at the low eccentricities possessed by the planets.
To summarize, Ptolemy devised a system that was compatible with Aristotelian philosophy and managed to track actual observations and predict future movement mostly to within the limits of the next 1000 years of observations. His mechanisms for explaining the observed motions included the deferent, the epicycle, the eccentric, and the equant.
The geocentric model was eventually replaced by the heliocentric model. The earliest heliocentric model, Copernican heliocentrism, could remove Ptolemy's epicycles because the retrograde motion could be seen to be the result of the combination of Earth and planet movement and speeds. Copernicus felt strongly that equants were a violation of Aristotelian purity, and proved that replacement of the equant with a pair of new epicycles was entirely equivalent. Astronomers often continued using the equants instead of the epicycles because the former was easier to calculate, and gave the same result.
It has been determined, in fact, that the Copernican, Ptolemaic and even the Tychonic models provide identical results to identical inputs: they are computationally equivalent. A new model was not required until Kepler demonstrated, on the basis of physical observation, that the physical Sun is directly involved in determining an orbit.
The Ptolemaic order of spheres from Earth outward is: Moon, Mercury, Venus, Sun, Mars, Jupiter, Saturn, and the fixed stars.[15]
Ptolemy did not invent or work out this order, which aligns with the ancient Seven Heavens religious cosmology common to the major Eurasian religious traditions. It also follows the decreasing orbital periods of the moon, sun, planets and stars.
Muslim astronomers generally accepted the Ptolemaic system and the geocentric model,[16] but by the 10th century texts appeared regularly whose subject matter was doubts concerning Ptolemy (shukūk).[17] Several Muslim scholars questioned the Earth's apparent immobility[18][19] and centrality within the universe.[20] Some Muslim astronomers believed that the Earth rotates around its axis, such as Abu Sa'id al-Sijzi (d. circa 1020).[21][22] According to al-Biruni, Sijzi invented an astrolabe called al-zūraqī based on a belief held by some of his contemporaries "that the motion we see is due to the Earth's movement and not to that of the sky."[22][23] The prevalence of this view is further confirmed by a reference from the 13th century which states:
According to the geometers [or engineers] (muhandisīn), the Earth is in constant circular motion, and what appears to be the motion of the heavens is actually due to the motion of the Earth and not the stars.[22]
Early in the 11th century Alhazen wrote a scathing critique of Ptolemy's model in his Doubts on Ptolemy (c. 1028), which some have interpreted to imply he was criticizing Ptolemy's geocentrism,[24] but most agree that he was actually criticizing the details of Ptolemy's model rather than his geocentrism.[25]
In the 12th century, Arzachel departed from the ancient Greek idea of uniform circular motions by hypothesizing that the planet Mercury moves in an elliptic orbit,[26][27] while Alpetragius proposed a planetary model that abandoned the equant, epicycle and eccentric mechanisms,[28] though this resulted in a system that was mathematically less accurate.[29] Alpetragius also declared the Ptolemaic system as an imaginary model that was successful at predicting planetary positions but not real or physical. His alternative system spread through most of Europe during the 13th century.[30]
Fakhr al-Din al-Razi (1149–1209), in dealing with his conception of physics and the physical world in his Matalib, rejects the Aristotelian and Avicennian notion of the Earth's centrality within the universe, arguing instead that there are "a thousand thousand worlds (alfa alfi 'awalim) beyond this world such that each one of those worlds be bigger and more massive than this world as well as having the like of what this world has." To support his theological argument, he cites the Qur'anic verse, "All praise belongs to God, Lord of the Worlds," emphasizing the term "Worlds."[20]
The "Maragha Revolution" refers to the Maragha school's revolution against Ptolemaic astronomy. The "Maragha school" was an astronomical tradition beginning in the Maragha observatory and continuing with astronomers from the Damascus mosque and Samarkand observatory. Like their Andalusian predecessors, the Maragha astronomers attempted to solve the equant problem (the circle around whose circumference a planet or the center of an epicycle was conceived to move uniformly) and produce alternative configurations to the Ptolemaic model without abandoning geocentrism. They were more successful than their Andalusian predecessors in producing non-Ptolemaic configurations which eliminated the equant and eccentrics, were more accurate than the Ptolemaic model in numerically predicting planetary positions, and were in better agreement with empirical observations.[31] The most important of the Maragha astronomers included Mo'ayyeduddin Urdi (d. 1266), Nasir al-Din al-Tusi (1201–1274), Qutb al-Din al-Shirazi (1236–1311), Ibn al-Shatir (1304–1375), Ali Qushji (c. 1474), Al-Birjandi (d. 1525), and Shams al-Din al-Khafri (d. 1550).[32] Ibn al-Shatir, the Damascene astronomer (1304–1375 AD) working at the Umayyad Mosque, wrote a major book entitled Kitab Nihayat al-Sul fi Tashih al-Usul (A Final Inquiry Concerning the Rectification of Planetary Theory) on a theory which departs largely from the Ptolemaic system known at that time. In "Ibn al-Shatir, an Arab Astronomer of the Fourteenth Century", E. S. Kennedy wrote that "what is of most interest, however, is that Ibn al-Shatir's lunar theory, except for trivial differences in parameters, is identical with that of Copernicus (1473–1543 AD)." The discovery that the models of Ibn al-Shatir are mathematically identical to those of Copernicus suggests the possible transmission of these models to Europe.[33] At the Maragha and Samarkand observatories, the Earth's rotation was discussed by al-Tusi and Ali Qushji (b. 1403); the arguments and evidence they used resemble those used by Copernicus to support the Earth's motion.[18][19]
However, the Maragha school never made the paradigm shift to heliocentrism.[34] The influence of the Maragha school on Copernicus remains speculative, since there is no documentary evidence to prove it. The possibility that Copernicus independently developed the Tusi couple remains open, since no researcher has yet demonstrated that he knew about Tusi's work or that of the Maragha school.[34][35]
Not all Greeks agreed with the geocentric model. The Pythagorean system has already been mentioned; some Pythagoreans believed the Earth to be one of several planets going around a central fire.[36] Hicetas and Ecphantus, two Pythagoreans of the 5th century BC, and Heraclides Ponticus in the 4th century BC, believed that the Earth rotated on its axis but remained at the center of the universe.[37] Such a system still qualifies as geocentric. It was revived in the Middle Ages by Jean Buridan. Heraclides Ponticus was once thought to have proposed that both Venus and Mercury went around the Sun rather than the Earth, but this is no longer accepted.[38] Martianus Capella definitely put Mercury and Venus in orbit around the Sun.[39] Aristarchus of Samos was the most radical. He wrote a work, which has not survived, on heliocentrism, saying that the Sun was at the center of the universe, while the Earth and other planets revolved around it.[40] His theory was not popular, and he had one named follower, Seleucus of Seleucia.[41]
In 1543, the geocentric system met its first serious challenge with the publication of Copernicus' De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres), which posited that the Earth and the other planets instead revolved around the Sun. The geocentric system was still held for many years afterwards, as at the time the Copernican system did not offer better predictions than the geocentric system, and it posed problems for both natural philosophy and scripture. The Copernican system was no more accurate than Ptolemy's system, because it still used circular orbits. This was not altered until Johannes Kepler postulated that they were elliptical (Kepler's first law of planetary motion).
With the invention of the telescope in 1609, observations made by Galileo Galilei (such as that Jupiter has moons) called into question some of the tenets of geocentrism but did not seriously threaten it. Because he observed craters, dark "spots" on the Moon, he remarked that the Moon was not a perfect celestial body as had previously been conceived. This was the first time someone could see imperfections on a celestial body that was supposed to be composed of perfect aether. As such, because the Moon's imperfections could now be related to those seen on Earth, one could argue that neither was unique: rather, they were both just celestial bodies made from Earth-like material. Galileo could also see the moons of Jupiter, which he dedicated to Cosimo II de' Medici, and stated that they orbited around Jupiter, not Earth.[42] This was a significant claim, as it would mean not only that not everything revolved around Earth as stated in the Ptolemaic model, but also that a secondary celestial body could orbit a moving celestial body, strengthening the heliocentric argument that a moving Earth could retain the Moon.[43] Galileo's observations were verified by other astronomers of the time period, who quickly adopted use of the telescope, including Christoph Scheiner, Johannes Kepler, and Giovan Paulo Lembo.[44]
In December 1610, Galileo Galilei used his telescope to observe that Venus showed all phases, just like the Moon. He thought that while this observation was incompatible with the Ptolemaic system, it was a natural consequence of the heliocentric system.
However, Ptolemy placed Venus' deferent and epicycle entirely inside the sphere of the Sun (between the Sun and Mercury), but this was arbitrary; he could just as easily have swapped Venus and Mercury and put them on the other side of the Sun, or made any other arrangement of Venus and Mercury, as long as they were always near a line running from the Earth through the Sun, such as placing the center of the Venus epicycle near the Sun. In this case, if the Sun is the source of all the light, under the Ptolemaic system:
If Venus is between Earth and the Sun, the phase of Venus must always be crescent or all dark.
If Venus is beyond the Sun, the phase of Venus must always be gibbous or full.
But Galileo saw Venus at first small and full, and later large and crescent.
This showed that with a Ptolemaic cosmology, the Venus epicycle can be neither completely inside nor completely outside of the orbit of the Sun. As a result, Ptolemaics abandoned the idea that the epicycle of Venus was completely inside the Sun, and later 17th century competition between astronomical cosmologies focused on variations of Tycho Brahe's Tychonic system (in which the Earth was still at the center of the universe, and around it revolved the Sun, but all other planets revolved around the Sun in one massive set of epicycles), or variations on the Copernican system.
Johannes Kepler analysed Tycho Brahe's famously accurate observations and afterwards constructed his three laws in 1609 and 1619, based on a heliocentric view where the planets move in elliptical paths. Using these laws, he was the first astronomer to successfully predict a transit of Venus (for the year 1631). The change from circular orbits to elliptical planetary paths dramatically improved the accuracy of celestial observations and predictions. Because the heliocentric model by Copernicus was no more accurate than Ptolemy's system, new observations were needed to persuade those who still held on to the geocentric model. However, Kepler's laws based on Brahe's data became a problem which geocentrists could not easily overcome.
In 1687, Isaac Newton stated the law of universal gravitation, described earlier as a hypothesis by Robert Hooke and others. His main achievement was to mathematically derive Kepler's laws of planetary motion from the law of gravitation, thus helping to prove the latter. This introduced gravitation as the force that both kept the Earth and planets moving through the heavens and also kept the air from flying away. The theory of gravity allowed scientists to construct a plausible heliocentric model for the solar system quickly. In his Principia, Newton explained his system of how gravity, previously thought of as an occult force (that is, an unexplained force), directed the movements of celestial bodies, and kept our solar system in its working order. His descriptions of centripetal force[45] were a breakthrough in scientific thought which used the newly developed differential calculus, and finally replaced the previous schools of scientific thought, i.e. those of Aristotle and Ptolemy. However, the process was gradual.
Several empirical tests of Newton's theory, explaining the longer period of oscillation of a pendulum at the equator and the differing size of a degree of latitude, gradually became available over the period 1673–1738. In addition, stellar aberration was observed by Robert Hooke in 1674 and tested in a series of observations by Jean Picard over ten years finishing in 1680. However, it was not explained until 1729, when James Bradley provided an approximate explanation in terms of the Earth's revolution about the Sun.
In 1838, astronomer Friedrich Wilhelm Bessel successfully measured the parallax of the star 61 Cygni, disproving Ptolemy's claim that parallax motion did not exist. This finally confirmed the assumptions made by Copernicus, provided accurate, dependable scientific observations, and demonstrated just how far away stars are from Earth.
A geocentric frame is useful for many everyday activities and most laboratory experiments, but is a less appropriate choice for solar-system mechanics and space travel. While a heliocentric frame is most useful in those cases, galactic and extra-galactic astronomy is easier if the sun is treated as neither stationary nor the center of the universe, but rotating around the center of our galaxy, and in turn our galaxy is also not at rest in the cosmic background.
Albert Einstein and Leopold Infeld wrote in The Evolution of Physics (1938): "Can we formulate physical laws so that they are valid for all CS (=coordinate systems), not only those moving uniformly, but also those moving quite arbitrarily, relative to each other? If this can be done, our difficulties will be over. We shall then be able to apply the laws of nature to any CS. The struggle, so violent in the early days of science, between the views of Ptolemy and Copernicus would then be quite meaningless. Either CS could be used with equal justification. The two sentences, 'the sun is at rest and the Earth moves', or 'the sun moves and the Earth is at rest', would simply mean two different conventions concerning two different CS. Could we build a real relativistic physics valid in all CS; a physics in which there would be no place for absolute, but only for relative, motion? This is indeed possible!"[46]
Despite giving more respectability to the geocentric view than Newtonian physics does,[47] relativity is not geocentric. Rather, relativity states that the Sun, the Earth, the Moon, Jupiter, or any other point for that matter could be chosen as a center of the solar system with equal validity.[48] For this reason Robert Sungenis, a modern geocentrist, spent much of Volume I of his book Galileo Was Wrong: The Church Was Right critiquing and trying to unravel the Special and General theories of Relativity.[49]
Relativity agrees with Newtonian predictions that regardless of whether the Sun or the Earth are chosen arbitrarily as the center of the coordinate system describing the solar system, the paths of the planets form (roughly) ellipses with respect to the Sun, not the Earth. With respect to the average reference frame of the fixed stars, the planets do indeed move around the Sun, which due to its much larger mass, moves far less than its own diameter and the gravity of which is dominant in determining the orbits of the planets. (In other words, the center of mass of the solar system is near the center of the Sun.) The Earth and Moon are much closer to being a binary planet; the center of mass around which they both rotate is still inside the Earth, but is about 4,624 km (2,873 mi) or 72.6% of the Earth's radius away from the centre of the Earth (thus closer to the surface than the center).[citation needed]
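The Earth-Moon barycenter figure can be checked with a back-of-the-envelope calculation. The mass and distance values below are approximate modern values assumed for illustration (they are not taken from the article), so the result differs slightly from the cited 4,624 km depending on the choice of mean distance.

```python
# Approximate values (assumptions, not from the article):
M_EARTH = 5.972e24      # kg, mass of the Earth
M_MOON = 7.342e22       # kg, mass of the Moon
DISTANCE = 384_400.0    # km, mean Earth-Moon distance
R_EARTH = 6_371.0       # km, mean radius of the Earth

# For two bodies, the barycenter lies on the line joining them at a
# distance from Earth's center of d = D * m_moon / (m_earth + m_moon).
d = DISTANCE * M_MOON / (M_EARTH + M_MOON)

print(round(d))                  # roughly 4,670 km with these inputs
print(d < R_EARTH)               # True: the barycenter lies inside the Earth
print(round(100 * d / R_EARTH))  # roughly 73% of Earth's radius
```

Because the mass ratio is about 81:1, the barycenter sits well inside the Earth, which is why the pair falls short of being a true binary planet.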
What the principle of relativity points out is that correct mathematical calculations can be made regardless of the reference frame chosen, and these will all agree with each other as to the predictions of actual motions of bodies with respect to each other. It is not necessary to choose the object in the solar system with the largest gravitational field as the center of the coordinate system in order to predict the motions of planetary bodies, though doing so may make calculations easier to perform or interpret. A geocentric coordinate system can be more convenient when dealing only with bodies mostly influenced by the gravity of the Earth (such as artificial satellites and the Moon), or when calculating what the sky will look like when viewed from Earth (as opposed to an imaginary observer looking down on the entire solar system, where a different coordinate system might be more convenient).
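The claim that all coordinate choices agree on relative motions can be illustrated with a toy sketch. Assume idealized circular, coplanar orbits with illustrative radii and periods (none of these numbers come from the article): switching from a Sun-centered to an Earth-centered frame just subtracts Earth's position from every body, so the Earth-to-Mars vector is unchanged.

```python
import math

def planet_pos_heliocentric(radius_au, period_yr, t_yr):
    """Position (x, y) in a Sun-centered frame, idealized circular orbit."""
    theta = 2 * math.pi * t_yr / period_yr
    return (radius_au * math.cos(theta), radius_au * math.sin(theta))

t = 0.8  # years (arbitrary instant)
earth = planet_pos_heliocentric(1.0, 1.0, t)
mars = planet_pos_heliocentric(1.52, 1.88, t)

# Earth -> Mars vector in the Sun-centered frame:
rel_helio = (mars[0] - earth[0], mars[1] - earth[1])

# Change to a geocentric frame: subtract Earth's position from every body.
earth_geo = (0.0, 0.0)
sun_geo = (-earth[0], -earth[1])           # the Sun now "moves" instead
mars_geo = (mars[0] - earth[0], mars[1] - earth[1])

# Earth -> Mars vector in the Earth-centered frame is identical:
rel_geo = (mars_geo[0] - earth_geo[0], mars_geo[1] - earth_geo[1])
assert rel_helio == rel_geo
```

The geocentric frame is not "wrong" here; it simply relocates the origin, which is exactly the point the paragraph makes about choosing whichever frame makes a calculation convenient.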
The Ptolemaic model of the solar system held sway into the early modern age; from the late 16th century onward it was gradually replaced as the consensus description by the heliocentric model. Geocentrism as a separate religious belief, however, never completely died out. In the United States between 1870 and 1920, for example, various members of the Lutheran Church–Missouri Synod published articles disparaging Copernican astronomy, and geocentrism was widely taught within the synod during that period.[50] However, in the 1902 Theological Quarterly, A. L. Graebner claimed that the synod had no doctrinal position on geocentrism, heliocentrism, or any scientific model, unless it were to contradict Scripture. He stated that any possible declarations of geocentrists within the synod did not set the position of the church body as a whole.[51]
Articles arguing that geocentrism was the biblical perspective appeared in some early creation science newsletters, pointing to passages in the Bible which, when taken literally, indicate that the daily apparent motions of the Sun and the Moon are due to their actual motions around the Earth rather than to the rotation of the Earth about its axis. For example, in Joshua 10:12, the Sun and Moon are said to stop in the sky, and in Psalms the world is described as immobile.[52] (Psalms 93:1 says in part "the world is established, firm and secure".) Contemporary advocates for such religious beliefs include Robert Sungenis (president of Bellarmine Theological Forum and author of the 2006 book Galileo Was Wrong).[53] These people subscribe to the view that a plain reading of the Bible contains an accurate account of the manner in which the universe was created and requires a geocentric worldview. Most contemporary creationist organizations reject such perspectives.[n 10]
After all, Copernicanism was the first major victory of science over religion, so it's inevitable that some folks would think that everything that's wrong with the world began there.
Morris Berman quotes a 2006 survey showing that some 20% of the U.S. population believe that the sun goes around the Earth (geocentrism) rather than the Earth going around the sun (heliocentrism), while a further 9% claimed not to know.[56] Polls conducted by Gallup in the 1990s found that 16% of Germans, 18% of Americans and 19% of Britons hold that the Sun revolves around the Earth.[57] A study conducted in 2005 by Jon D. Miller of Northwestern University, an expert in the public understanding of science and technology,[58] found that about 20%, or one in five, of American adults believe that the Sun orbits the Earth.[59] According to a 2011 VTsIOM poll, 32% of Russians believe that the Sun orbits the Earth.[60]
The famous Galileo affair pitted the geocentric model against the claims of Galileo. In regard to the theological basis for such an argument, two Popes addressed the question of whether the use of phenomenological language would compel one to admit an error in Scripture. Both taught that it would not. Pope Leo XIII (1878–1903) wrote:
we have to contend against those who, making an evil use of physical science, minutely scrutinize the Sacred Book in order to detect the writers in a mistake, and to take occasion to vilify its contents. ... There can never, indeed, be any real discrepancy between the theologian and the physicist, as long as each confines himself within his own lines, and both are careful, as St. Augustine warns us, "not to make rash assertions, or to assert what is not known as known". If dissension should arise between them, here is the rule also laid down by St. Augustine, for the theologian: "Whatever they can really demonstrate to be true of physical nature, we must show to be capable of reconciliation with our Scriptures; and whatever they assert in their treatises which is contrary to these Scriptures of ours, that is to Catholic faith, we must either prove it as well as we can to be entirely false, or at all events we must, without the smallest hesitation, believe it to be so." To understand how just is the rule here formulated we must remember, first, that the sacred writers, or to speak more accurately, the Holy Ghost "Who spoke by them, did not intend to teach men these things (that is to say, the essential nature of the things of the visible universe), things in no way profitable unto salvation." Hence they did not seek to penetrate the secrets of nature, but rather described and dealt with things in more or less figurative language, or in terms which were commonly used at the time, and which in many instances are in daily use at this day, even by the most eminent men of science. Ordinary speech primarily and properly describes what comes under the senses; and somewhat in the same way the sacred writers – as the Angelic Doctor also reminds us – "went by what sensibly appeared", or put down what God, speaking to men, signified, in the way men could understand and were accustomed to.
Maurice Finocchiaro, author of a book on the Galileo affair, notes that this is "a view of the relationship between biblical interpretation and scientific investigation that corresponds to the one advanced by Galileo in the 'Letter to the Grand Duchess Christina'".[61] Pope Pius XII (1939–1958) repeated his predecessor's teaching:
The first and greatest care of Leo XIII was to set forth the teaching on the truth of the Sacred Books and to defend it from attack. Hence with grave words did he proclaim that there is no error whatsoever if the sacred writer, speaking of things of the physical order "went by what sensibly appeared" as the Angelic Doctor says, speaking either "in figurative language, or in terms which were commonly used at the time, and which in many instances are in daily use at this day, even among the most eminent men of science". For "the sacred writers, or to speak more accurately – the words are St. Augustine's – the Holy Spirit, Who spoke by them, did not intend to teach men these things – that is the essential nature of the things of the universe – things in no way profitable to salvation"; which principle "will apply to cognate sciences, and especially to history", that is, by refuting, "in a somewhat similar way the fallacies of the adversaries and defending the historical truth of Sacred Scripture from their attacks".
In 1664, Pope Alexander VII republished the Index Librorum Prohibitorum (List of Prohibited Books) and attached the various decrees connected with those books, including those concerned with heliocentrism. He stated in a Papal Bull that his purpose in doing so was that "the succession of things done from the beginning might be made known [quo rei ab initio gestae series innotescat]".[62]
The position of the curia evolved slowly over the centuries towards permitting the heliocentric view. In 1757, during the papacy of Benedict XIV, the Congregation of the Index withdrew the decree which prohibited all books teaching the Earth's motion, although the Dialogue and a few other books continued to be explicitly included. In 1820, the Congregation of the Holy Office, with the pope's approval, decreed that Catholic astronomer Giuseppe Settele was allowed to treat the Earth's motion as an established fact and removed any obstacle for Catholics to hold to the motion of the Earth:
The Assessor of the Holy Office has referred the request of Giuseppe Settele, Professor of Optics and Astronomy at La Sapienza University, regarding permission to publish his work Elements of Astronomy in which he espouses the common opinion of the astronomers of our time regarding the Earth's daily and yearly motions, to His Holiness through Divine Providence, Pope Pius VII. Previously, His Holiness had referred this request to the Supreme Sacred Congregation and concurrently to the consideration of the Most Eminent and Most Reverend General Cardinal Inquisitor. His Holiness has decreed that no obstacles exist for those who sustain Copernicus' affirmation regarding the Earth's movement in the manner in which it is affirmed today, even by Catholic authors. He has, moreover, suggested the insertion of several notations into this work, aimed at demonstrating that the above-mentioned affirmation [of Copernicus], as it has come to be understood, does not present any difficulties; difficulties that existed in times past, prior to the subsequent astronomical observations that have now occurred. [Pope Pius VII] has also recommended that the implementation [of these decisions] be given to the Cardinal Secretary of the Supreme Sacred Congregation and Master of the Sacred Apostolic Palace. He is now appointed the task of bringing to an end any concerns and criticisms regarding the printing of this book, and, at the same time, ensuring that in the future, regarding the publication of such works, permission is sought from the Cardinal Vicar whose signature will not be given without the authorization of the Superior of his Order.[63]
In 1822, the Congregation of the Holy Office removed the prohibition on the publication of books treating of the Earth's motion in accordance with modern astronomy and Pope Pius VII ratified the decision:
The most excellent [cardinals] have decreed that there must be no denial, by the present or by future Masters of the Sacred Apostolic Palace, of permission to print and to publish works which treat of the mobility of the Earth and of the immobility of the sun, according to the common opinion of modern astronomers, as long as there are no other contrary indications, on the basis of the decrees of the Sacred Congregation of the Index of 1757 and of this Supreme [Holy Office] of 1820; and that those who would show themselves to be reluctant or would disobey, should be forced under punishments at the choice of [this] Sacred Congregation, with derogation of [their] claimed privileges, where necessary.[64]
The 1835 edition of the Catholic Index of Prohibited Books for the first time omits the Dialogue from the list.[61] In his 1921 papal encyclical, In praeclara summorum, Pope Benedict XV stated that, "though this Earth on which we live may not be the center of the universe as at one time was thought, it was the scene of the original happiness of our first ancestors, witness of their unhappy fall, as too of the Redemption of mankind through the Passion and Death of Jesus Christ".[65] In 1965 the Second Vatican Council stated that, "Consequently, we cannot but deplore certain habits of mind, which are sometimes found too among Christians, which do not sufficiently attend to the rightful independence of science and which, from the arguments and controversies they spark, lead many minds to conclude that faith and science are mutually opposed."[66] The footnote on this statement is to Msgr. Pio Paschini's, Vita e opere di Galileo Galilei, 2 volumes, Vatican Press (1964). Pope John Paul II regretted the treatment which Galileo received, in a speech to the Pontifical Academy of Sciences in 1992. The Pope declared the incident to be based on a "tragic mutual miscomprehension". He further stated:
Cardinal Poupard has also reminded us that the sentence of 1633 was not irreformable, and that the debate which had not ceased to evolve thereafter, was closed in 1820 with the imprimatur given to the work of Canon Settele. ... The error of the theologians of the time, when they maintained the centrality of the Earth, was to think that our understanding of the physical world's structure was, in some way, imposed by the literal sense of Sacred Scripture. Let us recall the celebrated saying attributed to Baronius "Spiritui Sancto mentem fuisse nos docere quomodo ad coelum eatur, non quomodo coelum gradiatur". In fact, the Bible does not concern itself with the details of the physical world, the understanding of which is the competence of human experience and reasoning. There exist two realms of knowledge, one which has its source in Revelation and one which reason can discover by its own power. To the latter belong especially the experimental sciences and philosophy. The distinction between the two realms of knowledge ought not to be understood as opposition.[67]
A few Orthodox Jewish leaders maintain a geocentric model of the universe based on the aforementioned Biblical verses and an interpretation of Maimonides to the effect that he ruled that the Earth is orbited by the sun.[68][69] The Lubavitcher Rebbe also explained that geocentrism is defensible based on the theory of Relativity, which establishes that "when two bodies in space are in motion relative to one another, ... science declares with absolute certainty that from the scientific point of view both possibilities are equally valid, namely that the Earth revolves around the sun, or the sun revolves around the Earth", although he also went on to refer to people who believed in geocentrism as "remaining in the world of Copernicus".[70]
The Zohar states: "The entire world and those upon it, spin round in a circle like a ball, both those at the bottom of the ball and those at the top. All God's creatures, wherever they live on the different parts of the ball, look different (in color, in their features) because the air is different in each place, but they stand erect as all other human beings, therefore, there are places in the world where, when some have light, others have darkness; when some have day, others have night."[71]
While geocentrism is important in Maimonides' calendar calculations,[72] the great majority of Jewish religious scholars, who accept the divinity of the Bible and accept many of his rulings as legally binding, do not believe that the Bible or Maimonides command a belief in geocentrism.[69][73]
Prominent cases of modern geocentrism in Islam are very isolated; only a few individuals have promoted a geocentric view of the universe. One of them was Ahmed Raza Khan Barelvi, a Sunni scholar of the Indian subcontinent. He rejected the heliocentric model and wrote a book[74] that explains the movement of the sun, moon and other planets around the Earth. Ibn Baz, the Grand Mufti of Saudi Arabia from 1993 to 1999, also promoted the geocentric view between 1966 and 1985.
The geocentric (Ptolemaic) model of the solar system is still of interest to planetarium makers, as, for technical reasons, a Ptolemaic-type motion for the planet light apparatus has some advantages over a Copernican-type motion.[75] The celestial sphere, still used for teaching purposes and sometimes for navigation, is also based on a geocentric system[76] which in effect ignores parallax. However, this effect is negligible at the scale of accuracy that applies to a planetarium.
All Islamic astronomers from Thabit ibn Qurra in the ninth century to Ibn al-Shatir in the fourteenth, and all natural philosophers from al-Kindi to Averroes and later, are known to have accepted ... the Greek picture of the world as consisting of two spheres of which one, the celestial sphere ... concentrically envelops the other.
Who introduced the Swedish Nightingale Jenny Lind to America?
P. T. Barnum. Johanna Maria "Jenny" Lind (6 October 1820 – 2 November 1887) was a Swedish opera singer, often known as the "Swedish Nightingale". One of the most highly regarded singers of the 19th century, she performed in soprano roles in opera in Sweden and across Europe, and undertook an extraordinarily popular concert tour of America beginning in 1850. She was a member of the Royal Swedish Academy of Music from 1840.
Lind became famous after her performance in Der Freischütz in Sweden in 1838. Within a few years, she had suffered vocal damage, but the singing teacher Manuel García saved her voice. She was in great demand in opera roles throughout Sweden and northern Europe during the 1840s, and was closely associated with Felix Mendelssohn. After two acclaimed seasons in London, she announced her retirement from opera at the age of 29.
In 1850, Lind went to America at the invitation of the showman P. T. Barnum. She gave 93 large-scale concerts for him and then continued to tour under her own management. She earned more than $350,000 from these concerts, donating the proceeds to charities, principally the endowment of free schools in Sweden. With her new husband, Otto Goldschmidt, she returned to Europe in 1852 where she had three children and gave occasional concerts over the next two decades, settling in England in 1855. From 1882, for some years, she was a professor of singing at the Royal College of Music in London.
Born in Klara, in central Stockholm, Lind was the illegitimate daughter of Niclas Jonas Lind (1798–1858), a bookkeeper, and Anne-Marie Fellborg (1793–1856), a schoolteacher.[1] Lind's mother had divorced her first husband for adultery but, for religious reasons, refused to remarry until after his death in 1834. Lind's parents married when she was 14.[1]
Lind's mother ran a day school for girls out of her home. When Lind was about 9, her singing was overheard by the maid of Mademoiselle Lundberg, the principal dancer at the Royal Swedish Opera.[1] The maid, astounded by Lind's extraordinary voice, returned the next day with Lundberg, who arranged an audition and helped her gain admission to the acting school of the Royal Dramatic Theatre, where she studied with Karl Magnus Craelius, the singing master at the theatre.[2]
Lind began to sing onstage when she was 10. She had a vocal crisis at the age of 12 and had to stop singing for a time, but she recovered.[2] Her first great role was Agathe in Weber's Der Freischütz in 1838 at the Royal Swedish Opera.[1] At 20, she was a member of the Royal Swedish Academy of Music and court singer to the King of Sweden and Norway. Her voice became seriously damaged by overuse and untrained singing technique, but her career was saved by the singing teacher Manuel García with whom she studied in Paris from 1841 to 1843. He insisted that she should not sing at all for three months, to allow her vocal cords to recover, before he started to teach her a healthy and secure vocal technique.[1][2]
After Lind had been with García for a year, the composer Giacomo Meyerbeer, an early and faithful admirer of her talent, arranged an audition for her at the Opéra in Paris, but she was rejected. The biographer Francis Rogers concludes that Lind strongly resented the rebuff: when she became an international star, she always refused invitations to sing at the Paris Opéra.[3] Lind returned to the Royal Swedish Opera, greatly improved as a singer by García's training. She toured Denmark where, in 1843, Hans Christian Andersen met and fell in love with her. Although the two became good friends, she did not reciprocate his romantic feelings. She is believed to have inspired three of his fairy tales: "Beneath the Pillar", "The Angel" and "The Nightingale".[4] He wrote, "No book or personality whatever has exerted a more ennobling influence on me, as a poet, than Jenny Lind. For me she opened the sanctuary of art."[4] The biographer Carol Rosen believes that after Lind rejected Andersen as a suitor, he portrayed her as The Snow Queen with a heart of ice.[1]
In December 1844, through Meyerbeer's influence, Lind was engaged to sing the title role in Bellini's opera Norma in Berlin.[3] That led to more engagements in opera houses throughout Germany and Austria, but such was her success in Berlin that she continued there for four months before she left for other cities.[2] Among her admirers were Robert Schumann, Hector Berlioz and, most importantly for her, Felix Mendelssohn. Ignaz Moscheles wrote: "Jenny Lind has fairly enchanted me... her song with two concertante flutes is perhaps the most incredible feat in the way of bravura singing that can possibly be heard".[5] That number, from Meyerbeer's Ein Feldlager in Schlesien (The Camp of Silesia, 1844, a role written for Lind but not premiered by her) became one of the songs most associated with Lind, and she was called on to sing it wherever she performed in concert.[1] Her operatic repertoire comprised the title roles in Lucia di Lammermoor, Maria di Rohan, Norma, La sonnambula and La vestale as well as Susanna in The Marriage of Figaro, Adina in L'elisir d'amore and Alice in Robert le diable. About that time, she became known as "the Swedish Nightingale". In December 1845, the day after her debut at the Leipzig Gewandhaus under the baton of Mendelssohn, she sang without fee for a charity concert in aid of the Orchestra Widows' Fund. Her devotion and generosity to charitable causes remained a key aspect of her career and greatly enhanced her international popularity even among the unmusical.[1]
At the Royal Swedish Opera, Lind had been friends with the tenor Julius Günther. They sang together both in opera and on the concert stage and became romantically linked by 1844. Their schedules separated them, however, as Günther remained in Stockholm and then became a student of García's in Paris in 1846–1847. After reuniting in Sweden, according to Lind's 1891 Memoir, they became engaged to marry in the spring of 1848, just before Lind returned to England. However, the two broke off the engagement in October of the same year.[6]
After a successful season in Vienna, where she was mobbed by admirers and feted by the Imperial Family,[2] Lind traveled to London and gave her first performance there on 4 May 1847, when she appeared in an Italian version of Meyerbeer's Robert le Diable. It was attended by Queen Victoria; the next day, The Times wrote:
We have had frequent experience of the excitement appertaining to "first nights", but we may safely say, and our opinion will be backed by several hundreds of Her Majesty's subjects, that we never witnessed such a scene of enthusiasm as that displayed last night on the occasion of Mademoiselle Jenny Lind's début as Alice in an Italian version of Robert le Diable.[7]
In July 1847, she starred in the world première of Verdi's opera I masnadieri at Her Majesty's Theatre, under the baton of the composer.[8] During her two years on the operatic stage in London, Lind appeared in most of the standard opera repertory.[3] In early 1849, still in her twenties, Lind announced her permanent retirement from opera. Her last opera performance was on 10 May 1849 in Robert le diable; Queen Victoria and other members of the Royal Family were present.[9] Lind's biographer Francis Rogers wrote, "The reasons for her early retirement have been much discussed for nearly a century, but remain today a matter of mystery. Many possible explanations have been advanced, but not one of them has been verified".[3]
In London, Lind's close friendship with Mendelssohn continued. There has been strong speculation that their relationship was more than friendship. Papers confirming this were alleged to exist, but their contents had not been made public.[10] However, in 2013, George Biddlecombe confirmed in the Journal of the Royal Musical Association that "The Committee of the Mendelssohn Scholarship Foundation possesses material indicating that Mendelssohn wrote passionate love letters to Jenny Lind entreating her to join him in an adulterous relationship and threatening suicide as a means of exerting pressure upon her, and that these letters were destroyed on being discovered after her death".[11]
Mendelssohn was present at Lind's London debut in Robert le Diable, and his friend, the critic Henry Fothergill Chorley, who was with him, wrote "I see as I write the smile with which Mendelssohn, whose enjoyment of Mdlle. Lind's talent was unlimited, turned round and looked at me, as if a load of anxiety had been taken off his mind. His attachment to Mademoiselle Lind's genius as a singer was unbounded, as was his desire for her success".[12] Mendelssohn worked with Lind on many occasions and wrote the beginnings of an opera, Lorelei, for her, based on the legend of the Lorelei Rhine maidens; the opera was unfinished at his death. He included a high F sharp in his oratorio Elijah ("Hear Ye Israel") with Lind's voice in mind.[13]
Four months after her London debut, she was devastated by the premature death of Mendelssohn in November 1847. She did not at first feel able to sing the soprano part in Elijah, which he had written for her. She finally did so at a performance in London's Exeter Hall in late 1848, which raised £1,000 to fund a musical scholarship as a memorial to him; it was her first appearance in oratorio.[14] The original intention had been to found a school of music in Mendelssohn's name in Leipzig, but there was not enough support in Leipzig, and with the help of Sir George Smart, Julius Benedict and others, Lind eventually raised enough money to fund a scholarship "to receive pupils of all nations and promote their musical training".[14] The first recipient of the Mendelssohn Scholarship was the 14-year-old Arthur Sullivan, whom Lind encouraged in his career.[1]
In 1849, Lind was approached by the American showman P. T. Barnum with a proposal to tour throughout the United States for more than a year. Realising that it would yield large sums for her favoured charities, particularly the endowment of free schools in her native Sweden, Lind agreed. Her financial demands were stringent, but Barnum met them, and in 1850, they reached agreement.[3]
Together with a supporting baritone, Giovanni Belletti, and her London colleague, Julius Benedict, as pianist, arranger and conductor, Lind sailed to America in September 1850. Barnum's advance publicity made her a celebrity even before she arrived in the US, and she received a wild reception on arriving in New York. Tickets for some of her concerts were in such demand that Barnum sold them by auction. The enthusiasm of the public was so strong that the American press coined the term "Lind mania".[15]
After New York, Lind's party toured the east coast of America, with continued success, and later took in Cuba, the Southern US and Canada. By early 1851, Lind had become uncomfortable with Barnum's relentless marketing of the tour, and she invoked a contractual right to sever her ties with him; they parted amicably. She continued the tour for nearly a year, under her own management, until May 1852. Benedict left the party in 1851 to return to England, and Lind invited Otto Goldschmidt to replace him as pianist and conductor.[3] Lind and Goldschmidt were married on 5 February 1852, near the end of the tour, in Boston. She took the name "Jenny Lind-Goldschmidt", both privately and professionally.
Details of the later concerts under her own management are scarce,[3] but it is known that under Barnum's management Lind gave 93 concerts in America for which she earned about $350,000, and he netted at least $500,000[16] ($9.97 million and $14.2 million, as of 2015, respectively).[17] She donated her profits to her chosen charities, including some US charities.[3][18] The tour is a plot point in the 1980 musical Barnum and the 2017 film The Greatest Showman, both of which include a fictionalized relationship between Lind and Barnum with "romantic undertones".[19]
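The two inflation-adjusted figures imply the same 1850-to-2015 dollar multiplier, which is a quick way to check they were computed consistently (the multiplier here is derived from the article's own numbers, not from an independent CPI source):

```python
# Implied 1850 -> 2015 dollar multiplier from the figures in the text.
lind_1850, lind_2015 = 350_000, 9.97e6
barnum_1850, barnum_2015 = 500_000, 14.2e6

mult_lind = lind_2015 / lind_1850        # roughly 28.5
mult_barnum = barnum_2015 / barnum_1850  # roughly 28.4

# The two conversions agree to well within 1%, i.e. they use the same
# underlying inflation adjustment (only the rounding differs).
assert abs(mult_lind - mult_barnum) / mult_barnum < 0.01
```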
Lind and Goldschmidt returned to Europe together in May 1852. They lived first in Dresden, Germany, and, from 1855, in England for the rest of their lives.[3] They had three children: Otto, born September 1853 in Germany, Jenny, born March 1857 in England, and Ernest, born January 1861 in England.[1]
Although she refused all requests to appear in opera after her return to Europe, Lind continued to perform in the concert hall. In 1856, at the invitation of the Philharmonic Society conducted by William Sterndale Bennett, she sang the chief soprano part in the first English performance of the cantata Paradise and the Peri by Robert Schumann.[20] In 1866, she gave a concert with Arthur Sullivan at St James's Hall. The Times reported, "there is magic still in that voice ... the most perfect singing – perfect alike in expression and in vocalization. ... Nothing more engaging, nothing more earnest, nothing more dramatic can be imagined."[21] At Düsseldorf in January 1870, she sang in "Ruth", an oratorio composed by her husband.[1] When Goldschmidt formed the Bach Choir in 1875, Lind trained the soprano choristers for the first English performance of Bach's B minor Mass, in April 1876, and performed in the mass.[22] Her concerts decreased in frequency until she retired from singing in 1883.[3]
From 1879 to 1887, Lind worked with Frederick Niecks on his biography of Frédéric Chopin.[23] In 1882, she was appointed professor of singing at the newly founded Royal College of Music. She believed in an all-round musical training for her pupils, insisting that, in addition to their vocal studies, they were instructed in solfège, piano, harmony, diction, deportment and at least one foreign language.[24]
She lived her final years at Wynd's Point, Herefordshire, on the Malvern Hills near the British Camp. Her last public appearance was at a charity concert at Royal Malvern Spa in 1883.[1] She died, at 67, at Wynd's Point on 2 November 1887 and was buried in the Great Malvern Cemetery to the music of Chopin's Funeral March. She bequeathed a considerable part of her wealth to help poor Protestant students in Sweden receive an education.[1]
There are no recordings of Lind's voice. She is believed to have made an early phonograph recording for Thomas Edison, but in the words of the critic Philip L. Miller, "Even had the fabled Edison cylinder survived, it would have been too primitive, and she too long retired, to tell us much".[25] The biographer Francis Rogers concludes that although Lind was much admired by Meyerbeer, Mendelssohn, the Schumanns, Berlioz and others, "In voice and in dramatic talent she was undoubtedly inferior to her predecessors, Malibran and Pasta, and to her contemporaries, Sontag and Grisi."[3] He notes that because of her expert promoters, including Barnum, "almost all that was written about her was undoubtedly biased by an almost overwhelming propaganda in her favor, bought and paid for".[3] Rogers says of Mendelssohn and Lind's other admirers that their tastes were "essentially Teutonic" and, except for Meyerbeer, they were not expert in Italian opera, Lind's early specialty. He quotes a critic of the New York Herald, who noted "little deficiencies in execution, in ascending the scale, which even enthusiasm cannot deprive of their sharpness".[3] The American press agreed that Lind's presentation was more typical of Germanic "cold, untouching, icy purity of tone and style", rather than the passionate expression necessary for Italian opera, and the Herald wrote that her style was "suited to please the people of our cold climate. She will have triumphs here that would never attend her progress through France or Italy".[3]
The critic H. F. Chorley, who admired Lind, described her voice as having "two octaves in compass – from D to D – having a higher possible note or two, available on rare occasions;[n 1] and that the lower half of the register and the upper one were of two distinct qualities. The former was not strong – veiled, if not husky; and apt to be out of tune. The latter was rich, brilliant and powerful – finest in its highest portions."[26] Chorley praised her breath management, her use of pianissimo, her taste in ornament and her intelligent use of technique to conceal the differences between her upper and lower registers. He thought her "execution was great" and that she was a "skilled and careful musician" but felt that "many of her effects on the stage appeared overcalculated" and that singing in foreign languages impeded her ability to give expression to the text. He felt, however, that her concert singing was more admirable than her operatic performances, but he praised some of her roles.[3][n 2] Chorley judged her finest work to be in the German repertoire, citing Mozart, Haydn and Mendelssohn's Elijah as best suited to her.[26] Miller concluded that although connoisseurs of the voice preferred other singers, her wider appeal to the public at large was not merely a legend created by Barnum but was a mixture of "a uniquely pure (some called it celestial) quality in her voice, consistent with her well-known generosity and charity".[25]
Lind is commemorated in Poets' Corner, Westminster Abbey, London under the name "Jenny Lind-Goldschmidt". Among those present at the memorial's unveiling ceremony on 20 April 1894 were Goldschmidt, members of the Royal Family, Sullivan, Sir George Grove and representatives of some of the charities supported by Lind.[27] There is also a plaque commemorating Lind in The Boltons, Kensington, London[28] and a blue plaque at 189 Old Brompton Road, London, SW7, which was erected in 1909.[29]
Lind has been commemorated in music, on screen and even on banknotes. Both the 1996 and 2006 issues of the Swedish 50-krona banknote bear a portrait of Lind on the front. Many artistic works have honoured or featured her. Anton Wallerstein composed the "Jenny Lind Polka" around 1850.[30] In the 1930 Hollywood film A Lady's Morals, Grace Moore starred as Lind, with Wallace Beery as Barnum.[31] In 1941 Ilse Werner starred as Lind in the German-language musical biography film The Swedish Nightingale. In 2001, a semibiographical film, Hans Christian Andersen: My Life as a Fairytale, featured Flora Montgomery as Lind. In 2005, Elvis Costello announced that he was writing an opera about Lind, called The Secret Arias, with some lyrics by Andersen.[32] A 2010 BBC television documentary, "Chopin – The Women Behind the Music", includes discussion of Chopin's last years, during which Lind "so affected" the composer.[33]
Many places and objects have been named for Lind, including Jenny Lind Island in Canada, the Jenny Lind locomotive and a clipper ship, the USS Nightingale. An Australian schooner was named Jenny Lind in her honour. In 1857, it was wrecked in a creek on the Queensland coast; the creek was accordingly named Jenny Lind Creek.[34]
In Britain, Goldschmidt's endowment of an infirmary for children in her memory in Norwich is perpetuated in its present form as the Jenny Lind Children's Hospital of the Norfolk and Norwich University Hospital.[35] There is a Jenny Lind Park in the same city.[36] A chapel is named for Lind at the University of Worcester City Campus[37] and in Andover, Illinois.[38] A hotel and pub is named after her in the Old Town of Hastings, East Sussex.[39] Hereford County Hospital has a psychiatric ward named for Jenny Lind.[40] A district in Glasgow is named after her.[41]
In the US, Lind is commemorated by street names in Fort Smith, Arkansas; New Bedford, Massachusetts; Taunton, Massachusetts; McKeesport, Pennsylvania; North Easton, Massachusetts; North Highlands, California and Stanhope, New Jersey; and in the name of the gold-rush town of Jenny Lind, California. An elementary school in Minneapolis, Minnesota is named after her.[42] She has been honoured since 1948 by the Barnum Festival, which takes place each June and July in Bridgeport, Connecticut. Through a national competition, the festival selects a soprano as the Jenny Lind winner. Her Swedish counterpart, chosen by the Royal Swedish Academy of Music and the People's Parks and Community Centre in Stockholm, visits during the festival, and the two perform several concerts together. In July, the American Jenny Lind winner traditionally travels to Sweden for a similar joint concert tour.[citation needed]
A bronze statue of a seated Jenny Lind by Erik Rafael-Rådberg, dedicated in 1924, sits in the Framnäs section of Djurgården island in Stockholm (at 59°19′45″N 18°06′08″E).[43][44]
What is the average batting average in MLB?
between .260 and .275🚨Batting average is a statistic in cricket, baseball, and softball that measures the performance of batsmen in cricket and batters in baseball. The development of the baseball statistic was influenced by the cricket statistic.[1]
In cricket, a player's batting average is the total number of runs they have scored divided by the number of times they have been out. Since the number of runs a player scores and how often they get out are primarily measures of their own playing ability, and largely independent of their teammates, batting average is a good metric for an individual player's skill as a batsman. The number is also simple to interpret intuitively. If all the batsman's innings were completed (i.e. they were out every innings), this is the average number of runs they score per innings. If they did not complete all their innings (i.e. some innings they finished not out), this number is an estimate of the unknown average number of runs they score per innings. Batting average has been used to gauge cricket players' relative skills since the 18th century.
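The arithmetic above can be sketched in a few lines; the Bradman figures in the example are his well-known Test career totals (6,996 runs, dismissed 70 times):

```python
def cricket_batting_average(total_runs, times_out):
    """Runs scored divided by the number of times dismissed."""
    if times_out == 0:
        raise ValueError("undefined until the batsman has been dismissed at least once")
    return total_runs / times_out

# Don Bradman's Test career: 6,996 runs, dismissed 70 times
# (80 innings, 10 not outs).
print(round(cricket_batting_average(6996, 70), 2))  # 99.94
```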
Most players have career batting averages in the range of 20 to 40. This is also the desirable range for wicket-keepers, though some fall short and make up for it with keeping skill. Until a substantial increase in scores in the 21st century due to improved bats and smaller grounds among other factors, players who sustained an average above 50 through a career were considered exceptional, and before the development of the heavy roller in the 1870s (which allowed for a flatter, safer cricket pitch) an average of 25 was considered very good.[2]
Career records for batting average are usually subject to a minimum qualification of 20 innings played or completed, in order to exclude batsmen who have not played enough games for their skill to be reliably assessed. Under this qualification, the highest Test batting average belongs to Australia's Sir Donald Bradman, with 99.94. Given that a career batting average over 50 is exceptional, and that only four other players have averages over 60, this is an outstanding statistic. The fact that Bradman's average is so far above that of any other cricketer has led several statisticians to argue that, statistically at least, he was the greatest sportsman in any sport.[3] As of 21 October 2016, Adam Voges of Australia had recorded an average of 72.75 from 27 innings played, but only 20 innings completed.
Batting averages in One Day International (ODI) cricket tend to be lower than in Test cricket,[4] because of the need to score runs more quickly and take riskier strokes and the lesser emphasis on building a large innings. It should also be remembered that there were no ODI competitions when Bradman played.
If a batsman has been dismissed in every single innings, then their total number of runs scored divided by the number of times they have been out gives exactly the average number of runs they score per innings. However, for a batsman with innings which have finished not out, this statistic is only an estimate of the average number of runs they score per innings – the true average number of runs they score per innings is unknown, as it is not known how many runs they would have scored if they could have completed all their not out innings. If their scores have a geometric distribution, then total number of runs scored divided by the number of times out is the maximum likelihood estimate of their true unknown average.[5]
Batting averages can be strongly affected by the number of not outs. For example, Phil Tufnell, who was noted for his poor batting,[6] has an apparently respectable ODI average of 15 (from 20 games), despite a highest score of only 5 not out, as he scored an overall total of 15 runs from 10 innings, but was out only once.[7]
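The Tufnell example can be checked directly; this small sketch contrasts the average per dismissal with the runs he actually scored per innings:

```python
def cricket_batting_average(total_runs, times_out):
    return total_runs / times_out

# Phil Tufnell's ODI figures as given above: 15 runs across 10 innings,
# dismissed only once -- nine not outs keep the denominator at 1.
runs, innings, dismissals = 15, 10, 1
print(cricket_batting_average(runs, dismissals))  # 15.0
print(runs / innings)                             # 1.5 runs actually scored per innings
```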
A different, and more recently developed, statistic which is also used to gauge the effectiveness of batsmen is the strike rate. It measures a different concept, however – how quickly the batsman scores (the number of runs scored per 100 balls) – so it does not supplant the role of batting average. It is used particularly in limited overs matches, where the speed at which a batsman scores is more important than it is in first-class cricket.
(Source: Cricinfo Statsguru 23 December 2016)
Table shows players with at least 20 innings completed.
* denotes not out.
In baseball, the batting average (BA) is defined as the number of hits divided by at bats. It is usually reported to three decimal places and read without the decimal: a player with a batting average of .300 is "batting three-hundred". When necessary to break ties, batting averages can be taken beyond the third decimal place. In this context, .001 is considered a "point", such that a .235 batter is 5 points higher than a .230 batter.
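A minimal sketch of the definition and the conventional display format (the helper names here are illustrative, not standard):

```python
def batting_average(hits, at_bats):
    """Hits divided by at bats."""
    return hits / at_bats

def display(avg):
    """Conventional display: three decimal places, leading zero dropped."""
    return f"{avg:.3f}".lstrip("0")

print(display(batting_average(3, 10)))  # .300 -- "batting three-hundred"
# A .235 batter is 5 "points" higher than a .230 batter:
print(round((0.235 - 0.230) * 1000))    # 5
```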
Henry Chadwick, an English statistician raised on cricket, was an influential figure in the early history of baseball. In the late 19th century he adapted the concept behind the cricket batting average to devise a similar statistic for baseball. Rather than simply copy cricket's formulation of runs scored divided by outs, he realized that hits divided by at bats would provide a better measure of individual batting ability. This is because while in cricket, scoring runs is almost entirely dependent on one's own batting skill, in baseball it is largely dependent on having other good hitters on one's team. Chadwick noted that hits are independent of teammates' skills, and so used this as the basis for the baseball batting average. His reason for using at bats rather than outs is less obvious, but it leads to the intuitive idea of the batting average as a rate reflecting how often a batter hits safely, whereas hits divided by outs is not as simple to interpret in real terms.
In modern times, a season batting average higher than .300 is considered excellent, and an average higher than .400 a nearly unachievable goal. The last player to do so, with enough plate appearances to qualify for the batting championship, was Ted Williams of the Boston Red Sox, who hit .406 in 1941; the best modern players approach it only occasionally, and then only for brief stretches of a season. There have been numerous attempts to explain the disappearance of the .400 hitter, with one of the more rigorous discussions of this question appearing in Stephen Jay Gould's 1996 book Full House.
Ty Cobb holds the record for highest career batting average with .366, 8 points higher than Rogers Hornsby, who has the second-highest average in history at .358. The record for lowest career batting average for a player with more than 2,500 at-bats belongs to Bill Bergen, a catcher who played from 1901 to 1911 and recorded a .170 average in 3,028 career at-bats. The modern-era record for highest batting average for a season is held by Napoleon Lajoie, who hit .426 in 1901, the first year of play for the American League. The modern-era record for lowest batting average for a player that qualified for the batting title is held by Rob Deer, who hit .179 in 1991. While finishing six plate appearances short of qualifying for the batting title, Adam Dunn of the Chicago White Sox hit .159 for the 2011 season, twenty points (and 11.2%) lower than the record. The highest batting average for a rookie was .408, by Shoeless Joe Jackson in 1911.
For non-pitchers, a batting average below .230 is often considered poor, and one below .200 is usually unacceptable. This latter level is sometimes referred to as "The Mendoza Line", named for Mario Mendoza (a lifetime .215 hitter), a stellar defensive shortstop whose defensive capabilities just barely made up for his offensive shortcomings. The league batting average in Major League Baseball for 2016 was .255,[9] and the all-time league average is between .260 and .275.
In rare instances, MLB players have concluded their careers with a perfect batting average of 1.000. John Paciorek had three hits in all three of his turns at bat. Esteban Yan went two-for-two, including a home run. Hal Deviney's two hits in his only plate appearances included a triple, while Steve Biras, Mike Hopkins, Chet Kehn, Jason Roach and Fred Schemanske also went two-for-two. A few dozen others have hit safely in their one and only career at-bat.
Sabermetrics, the study of baseball statistics, considers batting average a weak measure of performance because it does not correlate as well as other measures with runs scored, and so has little predictive value. Batting average does not take into account walks or power, whereas other statistics such as on-base percentage and slugging percentage have been specifically designed to measure such concepts. Adding these two statistics together forms a player's on-base plus slugging, or "OPS". This is commonly seen as a much better, though not perfect, indicator of a player's overall batting ability, as it measures hitting for average, hitting for power, and drawing bases on balls.
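A sketch of how OPS combines the two components; the formulas are the standard simplified ones, and the season stat line below is invented for illustration:

```python
def obp(hits, walks, hbp, at_bats, sac_flies):
    """On-base percentage: times reaching base over plate appearances counted."""
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

def slg(singles, doubles, triples, home_runs, at_bats):
    """Slugging percentage: total bases per at bat."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return total_bases / at_bats

# Invented season line: 150 hits (100 1B, 30 2B, 5 3B, 15 HR),
# 60 walks, 5 hit-by-pitch, 5 sacrifice flies, 500 at bats.
ops = obp(150, 60, 5, 500, 5) + slg(100, 30, 5, 15, 500)
print(round(ops, 3))  # 0.847
```

Note that a walk raises OBP but leaves batting average untouched, which is exactly the information batting average discards.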
In 1887, Major League Baseball counted bases on balls as hits. The result was inflated batting averages, including some near .500, and the experiment was abandoned the following season.
The Major League Baseball batting average championship (often referred to as "the batting title") is awarded annually to the player in each league who has the highest batting average. Ty Cobb holds the MLB (and American League) record for most batting titles, officially winning 11 in his career.[10] The National League record of 8 batting titles is shared by Honus Wagner and Tony Gwynn. Most of Cobb's career and all of Wagner's career took place in what is known as the Dead-Ball Era, which was characterized by higher batting averages and much less power, whereas Gwynn's career took place in the Live-Ball Era.
To determine which players are eligible to win the batting title, the following conditions have been used over the sport's history:[11]
From 1967 to the present, if the player with the highest average in a league fails to meet the minimum plate-appearance requirement, the remaining at-bats until qualification (e.g., 5 ABs, if the player finished the season with 497 plate appearances) are hypothetically considered hitless at-bats; if his recalculated batting average still tops the league, he is awarded the title. This is officially called Rule 10.22(a), but it is also known as the Tony Gwynn rule because the Padres' legend won the batting crown in 1996 with a .353 average on just 498 plate appearances (i.e., was four shy). Gwynn was awarded the title since he would have led the league even if he'd gone 0-for-4 in those missing plate appearances. His average would have dropped to .349, five points better than second-place Ellis Burks' .344.[12] In 2012, a one-time amendment to the rule was made to disqualify Melky Cabrera from the title. Cabrera requested that he be disqualified after serving a suspension that season for a positive testosterone test. He had batted .346 with 501 plate appearances, and the original rule would have awarded him the title over San Francisco Giants teammate Buster Posey, who won batting .336.[13][14]
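The Rule 10.22(a) adjustment can be sketched as follows; Gwynn's 1996 hit and at-bat totals used below (159 for 451) are assumed for illustration, since the text gives only the .353 average and 498 plate appearances:

```python
def adjusted_title_average(hits, at_bats, plate_appearances, required_pa=502):
    """Treat the plate appearances short of the qualification threshold as
    hitless at-bats and recompute the average (the "Tony Gwynn rule").
    502 PA is the usual threshold for a 162-game season (3.1 per game)."""
    shortfall = max(0, required_pa - plate_appearances)
    return hits / (at_bats + shortfall)

print(round(159 / 451, 3))                              # 0.353 -- raw average
print(round(adjusted_title_average(159, 451, 498), 3))  # 0.349 -- still beats .344
```

Since the adjusted .349 still tops Ellis Burks' .344, Gwynn kept the title.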
Note: Batting averages are normally rounded to 3 decimal places. Extra detail here is used for tie-breaker.
Following from its usage in cricket and baseball, "batting average" has come to be used for other statistical measures of performance and, in general usage, for how well a person performs in a wide variety of activities.
An example is the Internet Archive, which uses the term in ranking downloads. Its "batting average" indicates the correlation between views of a description page of a downloadable item, and the number of actual downloads of the item. This avoids the effect of popular downloads by volume swamping potentially more focused and useful downloads, producing an arguably more useful ranking.
What was the capital of the Maurya Empire?
Pataliputra (modern Patna)🚨The Maurya Empire was a geographically extensive Iron Age historical power founded by Chandragupta Maurya which dominated ancient India between 322 BCE and 187 BCE. Extending into the kingdom of Magadha in the Indo-Gangetic Plain on the eastern side of the Indian subcontinent, the empire had its capital city at Pataliputra (modern Patna).[2][3] The empire was the largest to have ever existed in the Indian subcontinent, spanning over 5 million square kilometres (1.9 million square miles) at its zenith under Ashoka.
Chandragupta Maurya raised an army and, with the assistance of Chanakya (also known as Kauṭilya),[4] overthrew the Nanda Empire in c. 322 BCE and rapidly expanded his power westwards across central and western India. By 317 BCE the empire had fully occupied northwestern India, defeating and conquering the satraps left by Alexander the Great.[5] Chandragupta then defeated the invasion led by Seleucus I, a Macedonian general from Alexander's army, gaining additional territory west of the Indus River.[6]
The Maurya Empire was one of the largest empires of the world in its time. At its greatest extent, the empire stretched to the north along the natural boundaries of the Himalayas, to the east into Assam, to the west into Balochistan (southwest Pakistan and southeast Iran) and the Hindu Kush mountains of what is now Afghanistan.[7] The Empire was expanded into India's central and southern regions[8][9] by the emperors Chandragupta and Bindusara, but it excluded Kalinga (modern Odisha), until it was conquered by Ashoka.[10] It declined for about 50 years after Ashoka's rule ended, and it dissolved in 185 BCE with the foundation of the Shunga dynasty in Magadha.
Under Chandragupta Maurya and his successors, internal and external trade, agriculture, and economic activities all thrived and expanded across India thanks to the creation of a single and efficient system of finance, administration, and security. After the Kalinga War, the Empire experienced nearly half a century of peace and security under Ashoka. Mauryan India also enjoyed an era of social harmony, religious transformation, and expansion of the sciences and of knowledge. Chandragupta Maurya's embrace of Jainism increased social and religious renewal and reform across his society, while Ashoka's embrace of Buddhism has been said to have been the foundation of the reign of social and political peace and non-violence across all of India. Ashoka sponsored the spreading of Buddhist missionaries into Sri Lanka, Southeast Asia, West Asia, North Africa, and Mediterranean Europe.[11]
The population of the empire has been estimated to be about 50–60 million, making the Mauryan Empire one of the most populous empires of Antiquity.[12][13] Archaeologically, the period of Mauryan rule in South Asia falls into the era of Northern Black Polished Ware (NBPW). The Arthashastra[14] and the Edicts of Ashoka are the primary sources of written records of Mauryan times. The Lion Capital of Ashoka at Sarnath has been made the national emblem of India.
The Maurya Empire was founded by Chandragupta Maurya, with help from Chanakya, at Takshashila. According to several legends, Chanakya travelled to Magadha, a kingdom that was large and militarily powerful and feared by its neighbours, but was insulted by its king Dhana Nanda, of the Nanda dynasty. Chanakya swore revenge and vowed to destroy the Nanda Empire.[15] Meanwhile, the conquering armies of Alexander the Great refused to cross the Beas River and advance further eastward, deterred by the prospect of battling Magadha. Alexander returned to Babylon and re-deployed most of his troops west of the Indus River. Soon after Alexander died in Babylon in 323 BCE, his empire fragmented into independent kingdoms led by his generals.[16]
The Greek generals Eudemus and Peithon ruled in the Indus Valley until around 317 BCE, when Chandragupta Maurya (with the help of Chanakya, who was now his advisor) orchestrated a rebellion to drive out the Greek governors, and subsequently brought the Indus Valley under the control of his new seat of power in Magadha.[5]
Chandragupta Maurya's rise to power is shrouded in mystery and controversy. On one hand, a number of ancient Indian accounts, such as the drama Mudrarakshasa (Signet ring of Rakshasa – Rakshasa was the prime minister of Magadha) by Vishakhadatta, describe his royal ancestry and even link him with the Nanda family. A kshatriya clan known as the Mauryas is referred to in the earliest Buddhist texts, the Mahaparinibbana Sutta. However, any conclusions are hard to make without further historical evidence. Chandragupta first emerges in Greek accounts as "Sandrokottos". As a young man he is said to have met Alexander.[17] He is also said to have met the Nanda king, angered him, and made a narrow escape.[18] Chanakya's original intention was to train a guerrilla army under Chandragupta's command.
Chanakya encouraged Chandragupta Maurya and his army to take over the throne of Magadha. Using his intelligence network, Chandragupta gathered many young men from across Magadha and other provinces, men upset over the corrupt and oppressive rule of king Dhana Nanda, plus the resources necessary for his army to fight a long series of battles. These men included the former general of Taxila, accomplished students of Chanakya, the representative of King Parvataka, his son Malayaketu, and the rulers of small states. The Macedonians (described as Yona or Yavana in Indian sources) may then have participated, together with other groups, in the armed uprising of Chandragupta Maurya against the Nanda dynasty. The Mudrarakshasa of Visakhadutta as well as the Jaina work Parisishtaparvan talk of Chandragupta's alliance with the Himalayan king Parvataka, often identified with Porus,[19][20] although this identification is not accepted by all historians.[21] This Himalayan alliance gave Chandragupta a composite and powerful army made up of Yavanas (Greeks), Kambojas, Shakas (Scythians), Kiratas (Himalayans), Parasikas (Persians) and Bahlikas (Bactrians) who took Pataliputra (also called Kusumapura, "The City of Flowers"):[22][23]
Preparing to invade Pataliputra, Maurya came up with a strategy. A battle was announced, and the Magadhan army was drawn from the city to a distant battlefield to engage Maurya's forces. Meanwhile, Maurya's general and spies bribed the corrupt general of Nanda and managed to create an atmosphere of civil war in the kingdom, which culminated in the death of the heir to the throne. Chanakya managed to win over popular sentiment. Ultimately Nanda resigned, handing power to Chandragupta, and went into exile, never to be heard of again. Chanakya contacted the prime minister, Rakshasa, and made him understand that his loyalty was to Magadha, not to the Nanda dynasty, insisting that he continue in office. Chanakya also reiterated that choosing to resist would start a war that would severely affect Magadha and destroy the city. Rakshasa accepted Chanakya's reasoning, and Chandragupta Maurya was legitimately installed as the new King of Magadha. Rakshasa became Chandragupta's chief advisor, and Chanakya assumed the position of an elder statesman.
The approximate extent of the Magadha state in the 5th century BCE.
The Maurya Empire when it was first founded by Chandragupta Maurya c. 320 BCE, after conquering the Nanda Empire when he was only about 20 years old.
Chandragupta extended the borders of the Maurya Empire towards Seleucid Persia after defeating Seleucus c. 305 BCE.[25]
Bindusara extended the borders of the empire southward into the Deccan Plateau c. 300 BCE.[26]
Ashoka extended into Kalinga during the Kalinga War c. 265 BCE, and established superiority over the southern kingdoms.
Hermann Kulke and Dietmar Rothermund believe that Ashoka's empire did not include large parts of India, which were controlled by autonomous tribes.[27]
In 305 BCE, Chandragupta led a series of campaigns to retake the satrapies left behind by Alexander the Great when he returned westwards, while Seleucus I Nicator fought to defend these territories. The two rulers concluded a peace treaty in 303 BCE, including a marital alliance. Chandragupta acquired the satrapies of Paropamisade (Kamboja and Gandhara), Arachosia (Kandahar) and Gedrosia (Balochistan), and Seleucus I Nicator received 500 war elephants that were to have a decisive role in his victory against western Hellenistic kings at the Battle of Ipsus in 301 BCE. Diplomatic relations were established, and several Greeks, such as the historian Megasthenes, Deimakos and Dionysius, resided at the Mauryan court. Megasthenes in particular was a notable Greek ambassador in the court of Chandragupta Maurya.[28] According to Arrian, ambassador Megasthenes (c. 350 – c. 290 BCE) lived in Arachosia and travelled to Pataliputra.[29]
Chandragupta established a strong centralized state with an administration at Pataliputra, which, according to Megasthenes, was "surrounded by a wooden wall pierced by 64 gates and 570 towers". Aelian, although not expressly quoting Megasthenes nor mentioning Pataliputra, described Indian palaces as superior in splendor to Persia's Susa or Ecbatana.[30] The architecture of the city seems to have had many similarities with Persian cities of the period.[31]
Chandragupta's son Bindusara extended the rule of the Mauryan empire towards southern India. The famous Tamil poet Mamulanar of the Sangam literature described how the Deccan Plateau was invaded by the Maurya army.[32] He also had a Greek ambassador at his court, named Deimachus.[33]
Megasthenes describes a disciplined multitude under Chandragupta, who live simply, honestly, and do not know writing:
Chandragupta renounced his throne and followed Jain teacher Bhadrabahu.[34][35][36] He is said to have lived as an ascetic at Shravanabelagola for several years before fasting to death, as per the Jain practice of sallekhana.[37]
Bindusara was born to Chandragupta, the founder of the Mauryan Empire. This is attested by several sources, including the various Puranas and the Mahavamsa.[38] He is attested by the Buddhist texts such as Dipavamsa and Mahavamsa ("Bindusaro"); the Jain texts such as Parishishta-Parvan; as well as the Hindu texts such as Vishnu Purana ("Vindusara").[39][40] According to the 12th century Jain writer Hemachandra's Parishishta-Parvan, the name of Bindusara's mother was Durdhara.[41] Some Greek sources also mention him by the name "Amitrochates" or its variations.[42][43]
Historian Upinder Singh estimates that Bindusara ascended the throne around 297 BCE.[44] Bindusara, just 22 years old, inherited a large empire that consisted of what are now the northern, central and eastern parts of India, along with parts of Afghanistan and Baluchistan. Bindusara extended this empire to the southern part of India, as far as what is now known as Karnataka. He brought sixteen states under the Mauryan Empire and thus conquered almost all of the Indian peninsula (he is said to have conquered the 'land between the two seas' – the peninsular region between the Bay of Bengal and the Arabian Sea). Bindusara did not conquer the friendly Tamil kingdoms of the Cholas, ruled by King Ilamcetcenni, the Pandyas, and the Cheras. Apart from these southern states, Kalinga (modern Odisha) was the only kingdom in India that did not form part of Bindusara's empire.[45] It was later conquered by his son Ashoka, who served as the viceroy of Ujjaini during his father's reign, which highlights the importance of the town.[46][47]
Bindusara's life has not been documented as well as that of his father Chandragupta or of his son Ashoka. Chanakya continued to serve as prime minister during his reign. According to the medieval Tibetan scholar Taranatha who visited India, Chanakya helped Bindusara "to destroy the nobles and kings of the sixteen kingdoms and thus to become absolute master of the territory between the eastern and western oceans."[48] During his rule, the citizens of Taxila revolted twice. The reason for the first revolt was the maladministration of Susima, his eldest son. The reason for the second revolt is unknown, but Bindusara could not suppress it in his lifetime. It was crushed by Ashoka after Bindusara's death.[49]
Bindusara maintained friendly diplomatic relations with the Hellenic World. Deimachus was the ambassador of Seleucid emperor Antiochus I at Bindusara's court.[50] Diodorus states that the king of Palibothra (Pataliputra, the Mauryan capital) welcomed a Greek author, Iambulus. This king is usually identified as Bindusara.[50] Pliny states that the Egyptian king Philadelphus sent an envoy named Dionysius to India.[51][52] According to Sailendra Nath Sen, this appears to have happened during Bindusara's reign.[50]
Unlike his father Chandragupta (who at a later stage converted to Jainism), Bindusara believed in the Ajivika sect. Bindusara's guru Pingalavatsa (Janasana) was a Brahmin[53] of the Ajivika sect. Bindusara's wife, Queen Subhadrangi (Queen Aggamahesi) was a Brahmin[54] also of the Ajivika sect from Champa (present Bhagalpur district). Bindusara is credited with giving several grants to Brahmin monasteries (Brahmana-bhatto).[55]
Historical evidence suggests that Bindusara died in the 270s BCE. According to Upinder Singh, Bindusara died around 273 BCE.[44] Alain Daniélou believes that he died around 274 BCE.[56] Sailendra Nath Sen believes that he died around 273–272 BCE, and that his death was followed by a four-year struggle of succession, after which his son Ashoka became the emperor in 269–268 BCE.[50] According to the Mahavamsa, Bindusara reigned for 28 years.[57] The Vayu Purana, which names Chandragupta's successor as "Bhadrasara", states that he ruled for 25 years.[58]
As a young prince, Ashoka (r. 272–232 BCE) was a brilliant commander who crushed revolts in Ujjain and Takshashila. As monarch he was ambitious and aggressive, re-asserting the Empire's superiority in southern and western India. But it was his conquest of Kalinga (262–261 BCE) which proved to be the pivotal event of his life. Ashoka used Kalinga to project power over a large region by building a fortification there and securing it as a possession.[59] Although Ashoka's army succeeded in overwhelming Kalinga forces of royal soldiers and civilian units, an estimated 100,000 soldiers and civilians were killed in the furious warfare, including over 10,000 of Ashoka's own men. Hundreds of thousands of people were adversely affected by the destruction and fallout of war. When he personally witnessed the devastation, Ashoka began feeling remorse. Although the annexation of Kalinga was completed, Ashoka embraced the teachings of Buddhism, and renounced war and violence. He sent out missionaries to travel around Asia and spread Buddhism to other countries.[citation needed]
Ashoka implemented principles of ahimsa by banning hunting and violent sports activity and ending indentured and forced labor (many thousands of people in war-ravaged Kalinga had been forced into hard labour and servitude). While he maintained a large and powerful army, to keep the peace and maintain authority, Ashoka expanded friendly relations with states across Asia and Europe, and he sponsored Buddhist missions. He undertook a massive public works building campaign across the country. Over 40 years of peace, harmony and prosperity made Ashoka one of the most successful and famous monarchs in Indian history. He remains an idealized figure of inspiration in modern India.[citation needed]
The Edicts of Ashoka, set in stone, are found throughout the Subcontinent. Ranging from as far west as Afghanistan and as far south as Andhra (Nellore District), Ashoka's edicts state his policies and accomplishments. Although predominantly written in Prakrit, two of them were written in Greek, and one in both Greek and Aramaic. Ashoka's edicts refer to the Greeks, Kambojas, and Gandharas as peoples forming a frontier region of his empire. They also attest to Ashoka's having sent envoys to the Greek rulers in the West as far as the Mediterranean. The edicts precisely name each of the rulers of the Hellenic world at the time, such as Amtiyoko (Antiochus), Tulamaya (Ptolemy), Amtikini (Antigonos), Maka (Magas) and Alikasudaro (Alexander), as recipients of Ashoka's proselytism.[citation needed] The Edicts also accurately locate their territory "600 yojanas away" (a yojana being about 7 miles), corresponding to the distance between the center of India and Greece (roughly 4,000 miles).[60]
Ashoka was followed for 50 years by a succession of weaker kings. Brihadratha, the last ruler of the Mauryan dynasty, held territories that had shrunk considerably from the time of emperor Ashoka. Brihadratha was assassinated in 185 BCE during a military parade by the Brahmin general Pushyamitra Shunga, commander-in-chief of his guard, who then took over the throne and established the Shunga dynasty.[61]
Buddhist records such as the Ashokavadana write that the assassination of Brihadratha and the rise of the Shunga empire led to a wave of religious persecution for Buddhists,[62] and a resurgence of Hinduism. According to Sir John Marshall,[63] Pushyamitra may have been the main author of the persecutions, although later Shunga kings seem to have been more supportive of Buddhism. Other historians, such as Etienne Lamotte[64] and Romila Thapar,[65] among others, have argued that archaeological evidence in favour of the allegations of persecution of Buddhists is lacking, and that the extent and magnitude of the atrocities have been exaggerated.
The fall of the Mauryas left the Khyber Pass unguarded, and a wave of foreign invasion followed. The Greco-Bactrian king, Demetrius, capitalized on the break-up, and he conquered southern Afghanistan and parts of northwestern India around 180 BCE, forming the Indo-Greek Kingdom. The Indo-Greeks would maintain holdings on the trans-Indus region, and make forays into central India, for about a century. Under them, Buddhism flourished, and one of their kings, Menander, became a famous figure of Buddhism; he was to establish a new capital of Sagala, the modern city of Sialkot. However, the extent of their domains and the lengths of their rule are subject to much debate. Numismatic evidence indicates that they retained holdings in the subcontinent right up to the birth of Christ. Although the extent of their successes against indigenous powers such as the Shungas, Satavahanas, and Kalingas is unclear, what is clear is that Scythian tribes, renamed Indo-Scythians, brought about the demise of the Indo-Greeks from around 70 BCE and retained lands in the trans-Indus, the region of Mathura, and Gujarat.[citation needed]
The Empire was divided into four provinces, with the imperial capital at Pataliputra. From the Ashokan edicts, the names of the four provincial capitals are Tosali (in the east), Ujjain (in the west), Suvarnagiri (in the south), and Taxila (in the north). The head of the provincial administration was the Kumara (royal prince), who governed the provinces as the king's representative. The kumara was assisted by Mahamatyas and a council of ministers. This organizational structure was reflected at the imperial level with the Emperor and his Mantriparishad (Council of Ministers).[citation needed]
Historians theorise that the organisation of the Empire was in line with the extensive bureaucracy described by Kautilya in the Arthashastra: a sophisticated civil service governed everything from municipal hygiene to international trade. The expansion and defense of the empire was made possible by what appears to have been one of the largest armies in the world during the Iron Age.[66] According to Megasthenes, the empire wielded a military of 600,000 infantry, 30,000 cavalry, 8,000 chariots and 9,000 war elephants besides followers and attendants.[67] A vast espionage system collected intelligence for both internal and external security purposes. Having renounced offensive warfare and expansionism, Ashoka nevertheless continued to maintain this large army, to protect the Empire and instil stability and peace across West and South Asia.[citation needed]
For the first time in South Asia, political unity and military security allowed for a common economic system and enhanced trade and commerce, with increased agricultural productivity. The previous situation involving hundreds of kingdoms, many small armies, powerful regional chieftains, and internecine warfare, gave way to a disciplined central authority. Farmers were freed of tax and crop collection burdens from regional kings, paying instead to a nationally administered and strict-but-fair system of taxation as advised by the principles in the Arthashastra. Chandragupta Maurya established a single currency across India, and a network of regional governors and administrators and a civil service provided justice and security for merchants, farmers and traders. The Mauryan army wiped out many gangs of bandits, regional private armies, and powerful chieftains who sought to impose their own supremacy in small areas. Although regimental in revenue collection, Maurya also sponsored many public works and waterways to enhance productivity, while internal trade in India expanded greatly due to new-found political unity and internal peace.[citation needed]
Under the Indo-Greek friendship treaty, and during Ashoka's reign, an international network of trade expanded. The Khyber Pass, on the modern boundary of Pakistan and Afghanistan, became a strategically important point of trade and intercourse with the outside world. Greek states and Hellenic kingdoms in West Asia became important trade partners of India. Trade also extended through the Malay peninsula into Southeast Asia. India's exports included silk goods and textiles, spices and exotic foods. The external world came across new scientific knowledge and technology with expanding trade with the Mauryan Empire. Ashoka also sponsored the construction of thousands of roads, waterways, canals, hospitals, rest-houses and other public works. The easing of many over-rigorous administrative practices, including those regarding taxation and crop collection, helped increase productivity and economic activity across the Empire.[citation needed]
In many ways, the economic situation in the Mauryan Empire is analogous to the Roman Empire of several centuries later. Both had extensive trade connections and both had organizations similar to corporations. While Rome had organizational entities which were largely used for public state-driven projects, Mauryan India had numerous private commercial entities. These existed purely for private commerce and developed before the Mauryan Empire itself.[68][unreliable source?]
Hoard of mostly Mauryan coins.
Silver punch mark coin of the Maurya empire, with symbols of wheel and elephant. 3rd century BCE.[citation needed]
Mauryan coin with arched hill symbol on reverse.[citation needed]
Mauryan Empire coin. Circa late 4th-2nd century BCE.[citation needed]
Mauryan Empire, Emperor Salisuka or later. Circa 207-194 BCE.[69]
Magadha, the centre of the empire, was also the birthplace of Buddhism. Ashoka initially practised Hinduism but later embraced Buddhism; following the Kalinga War, he renounced expansionism and aggression, and the harsher injunctions of the Arthashastra on the use of force, intensive policing, and ruthless measures for tax collection and against rebels. Ashoka sent a mission led by his son Mahinda and daughter Sanghamitta to Sri Lanka, whose king Tissa was so charmed with Buddhist ideals that he adopted them himself and made Buddhism the state religion. Ashoka sent many Buddhist missions to West Asia, Greece and South East Asia, and commissioned the construction of monasteries and schools, as well as the publication of Buddhist literature across the empire. He is believed to have built as many as 84,000 stupas across India, such as those at Sanchi and the Mahabodhi Temple, and he increased the popularity of Buddhism in Afghanistan, Thailand and North Asia including Siberia. Ashoka helped convene the Third Buddhist Council of India's and South Asia's Buddhist orders near his capital, a council that undertook much work of reform and expansion of the Buddhist religion. Indian merchants embraced Buddhism and played a large role in spreading the religion across the Mauryan Empire.[70]
Chandragupta Maurya embraced Jainism after retiring, when he renounced his throne and material possessions to join a wandering group of Jain monks. Chandragupta was a disciple of the Jain monk Bhadrabahu. It is said that in his last days, he observed the rigorous but self-purifying Jain ritual of santhara (fast unto death), at Shravana Belgola in Karnataka.[37][36][71][35] However, his successor, Bindusara, was a follower of another ascetic movement, Ājīvika,[72] and distanced himself from Jain and Buddhist movements.[citation needed] Samprati, the grandson of Ashoka, also embraced Jainism. Samprati was influenced by the teachings of Jain monks and he is known to have built 125,000 derasars across India. Some of them are still found in the towns of Ahmedabad, Viramgam, Ujjain, and Palitana. It is also said that just like Ashoka, Samprati sent messengers and preachers to Greece, Persia and the Middle East for the spread of Jainism, but, to date, no research has been done in this area.[73][74]
Thus, Jainism became a vital force under the Mauryan Rule. Chandragupta and Samprati are credited for the spread of Jainism in South India. Hundreds of thousands of temples and stupas are said to have been erected during their reigns. However, due to lack of royal patronage, its own strict principles, and the rise of Shankaracharya and Ramanuja, Jainism, once a major religion of southern India, began to decline.[citation needed]
The greatest monument of this period, executed in the reign of Chandragupta Maurya, was the old palace at the site of Kumhrar. Excavations at the site of Kumhrar have unearthed the remains of the palace. The palace is thought to have been an aggregate of buildings, the most important of which was an immense pillared hall supported on a high substratum of timbers. The pillars were set in regular rows, thus dividing the hall into a number of smaller square bays. The number of columns is 80, each about 7 metres high. According to the eyewitness account of Megasthenes, the palace was chiefly constructed of timber, and was considered to exceed in splendour and magnificence the palaces of Susa and Ecbatana, its gilded pillars being adorned with golden vines and silver birds. The buildings stood in an extensive park studded with fish ponds and furnished with a great variety of ornamental trees and shrubs.[75][better source needed] Kauṭilya's Arthashastra also gives the method of palace construction from this period. Later fragments of stone pillars, including one nearly complete, with their round tapering shafts and smooth polish, indicate that Ashoka was responsible for the construction of the stone columns which replaced the earlier wooden ones.[citation needed]
During the Ashokan period, stonework was of a highly diversified order and comprised lofty free-standing pillars, railings of stupas, lion thrones and other colossal figures. The use of stone had reached such great perfection during this time that even small fragments of stone art were given a highly lustrous polish resembling fine enamel. This period marked the beginning of the Buddhist school of architecture. Ashoka was responsible for the construction of several stupas, which were large domes bearing symbols of the Buddha. The most important ones are located at Sanchi, Bharhut, Amaravati, Bodhgaya and Nagarjunakonda. The most widespread examples of Mauryan architecture are the Ashoka pillars and carved edicts of Ashoka, often exquisitely decorated, with more than 40 spread throughout the Indian subcontinent.[76][better source needed]
The peacock was a dynastic symbol of Mauryans, as depicted by Ashoka's pillars at Nandangarh and Sanchi Stupa.[77]
Remains of the Ashokan Pillar in polished stone, to the right of the Southern Gateway.
Remains of the shaft of the pillar of Ashoka, under a shed near the Southern Gateway.
The Sanchi pillar capital of Ashoka as discovered (left), and simulation of original appearance (right).[78] Flame palmettes and geese adorn the abacus.
The protection of animals in India became a serious concern by the time of the Maurya dynasty. As the first empire to provide a unified political entity in India, the Mauryas are of interest for their attitude towards forests, their denizens, and fauna in general.[79]
The Mauryas first looked at forests as resources. For them, the most important forest product was the elephant. Military might in those times depended not only upon horses and men but also upon battle-elephants; these played a role in the defeat of Seleucus, one of Alexander's former generals. The Mauryas sought to preserve supplies of elephants, since it was cheaper and took less time to catch, tame and train wild elephants than to raise them. Kautilya's Arthashastra contains not only maxims on ancient statecraft, but also unambiguously specifies the responsibilities of officials such as the Protector of the Elephant Forests.[80]
On the border of the forest, he should establish a forest for elephants guarded by foresters. The Office of the Chief Elephant Forester should with the help of guards protect the elephants in any terrain. The slaying of an elephant is punishable by death.
The Mauryas also designated separate forests to protect supplies of timber, as well as lions and tigers for skins. Elsewhere the Protector of Animals also worked to eliminate thieves, tigers and other predators to render the woods safe for grazing cattle.[citation needed]
The Mauryas valued certain forest tracts in strategic or economic terms and instituted curbs and control measures over them. They regarded all forest tribes with distrust and controlled them with bribery and political subjugation. They employed some of them, the food-gatherers or aranyaca to guard borders and trap animals. The sometimes tense and conflict-ridden relationship nevertheless enabled the Mauryas to guard their vast empire.[81]
When Ashoka embraced Buddhism in the latter part of his reign, he brought about significant changes in his style of governance, which included providing protection to fauna, and even relinquished the royal hunt. He was the first ruler in history[not in citation given] to advocate conservation measures for wildlife and even had rules inscribed in stone edicts. The edicts proclaim that many followed the king's example in giving up the slaughter of animals; one of them proudly states:[81]
Our king killed very few animals.
However, the edicts of Ashoka reflect the desire of rulers more than actual events; the mention of a fine of 100 'panas' (coins) for poaching deer in royal hunting preserves shows that rule-breakers did exist. The legal restrictions conflicted with the practices freely exercised by the common people in hunting, felling, fishing and setting fires in forests.[81]
Relations with the Hellenistic world may have started from the very beginning of the Maurya Empire. Plutarch reports that Chandragupta Maurya met with Alexander the Great, probably around Taxila in the northwest:
Chandragupta ultimately occupied Northwestern India, in the territories formerly ruled by the Greeks, where he fought the satraps (described as "Prefects" in Western sources) left in place after Alexander (Justin), among whom may have been Eudemus, ruler in the western Punjab until his departure in 317 BCE, or Peithon, son of Agenor, ruler of the Greek colonies along the Indus until his departure for Babylon in 316 BCE.[citation needed]
Seleucus I Nicator, the Macedonian satrap of the Asian portion of Alexander's former empire, conquered and put under his own authority eastern territories as far as Bactria and the Indus (Appian, History of Rome, The Syrian Wars 55), until in 305 BCE he entered into a confrontation with Emperor Chandragupta:
Though no accounts of the conflict remain, it is clear that Seleucus fared poorly against the Indian Emperor, as he failed to conquer any territory and in fact was forced to surrender much that was already his. Regardless, Seleucus and Chandragupta ultimately reached a settlement: through a treaty sealed in 305 BCE, Seleucus, according to Strabo, ceded a number of territories to Chandragupta, including large parts of what is now Afghanistan and parts of Balochistan.[citation needed]
Chandragupta and Seleucus concluded a peace treaty and a marital alliance in 303 BCE. Chandragupta received vast territories, and in a return gesture, gave Seleucus 500 war elephants,[25][86][87][88][89] a military asset which would play a decisive role at the Battle of Ipsus in 301 BCE.[90] In addition to this treaty, Seleucus dispatched an ambassador, Megasthenes, to Chandragupta, and later Deimakos to his son Bindusara, at the Mauryan court at Pataliputra (modern Patna in Bihar state). Later, Ptolemy II Philadelphus, the ruler of Ptolemaic Egypt and a contemporary of Ashoka, is also recorded by Pliny the Elder as having sent an ambassador named Dionysius to the Mauryan court.[91][better source needed]
Mainstream scholarship asserts that Chandragupta received vast territory west of the Indus, including the Hindu Kush, modern-day Afghanistan, and the Balochistan province of Pakistan.[92][93] Archaeologically, concrete indications of Mauryan rule, such as the inscriptions of the Edicts of Ashoka, are known as far as Kandahar in southern Afghanistan.
The treaty on "Epigamia" implies lawful marriage between Greeks and Indians was recognized at the State level, although it is unclear whether it occurred among dynastic rulers or common people, or both.[citation needed]
Classical sources have also recorded that following their treaty, Chandragupta and Seleucus exchanged presents, such as when Chandragupta sent various aphrodisiacs to Seleucus:[42]
His son Bindusara 'Amitraghata' (Slayer of Enemies) also is recorded in Classical sources as having exchanged presents with Antiochus I:[42]
The Greek population apparently remained in the northwest of the Indian subcontinent under Ashoka's rule. In his Edicts of Ashoka, set in stone, some of them written in Greek, Ashoka relates that the Greek population within his realm was absorbed, integrated, and converted to Buddhism:
Fragments of Edict 13 have been found in Greek, and a full Edict, written in both Greek and Aramaic, has been discovered in Kandahar. It is said to be written in excellent Classical Greek, using sophisticated philosophical terms. In this Edict, Ashoka uses the word Eusebeia ("Piety") as the Greek translation for the ubiquitous "Dharma" of his other Edicts written in Prakrit:[non-primary source needed]
Also, in the Edicts of Ashoka, Ashoka mentions the Hellenistic kings of the period as recipients of his Buddhist proselytism, although no Western historical record of this event remains:
Ashoka also encouraged the development of herbal medicine, for men and animals, in their territories:
The Greeks in India even seem to have played an active role in the propagation of Buddhism, as some of the emissaries of Ashoka, such as Dharmaraksita, are described in Pali sources as leading Greek ("Yona") Buddhist monks, active in Buddhist proselytism (the Mahavamsa, XII[97][non-primary source needed]).
Sophagasenus was an Indian Mauryan ruler of the 3rd century BCE, described in ancient Greek sources and named Subhagasena or Subhashasena in Prakrit. His name is mentioned in the list of Mauryan princes[citation needed], and also in the list of the Yadava dynasty, as a descendant of Pradyumna. He may have been a grandson of Ashoka, or of Kunala, the son of Ashoka. He ruled an area south of the Hindu Kush, possibly in Gandhara. Antiochos III, the Seleucid king, after having made peace with Euthydemus in Bactria, went to India in 206 BCE and is said to have renewed his friendship with the Indian king there:
"He (Antiochus) crossed the Caucasus and descended into India; renewed his friendship with Sophagasenus the king of the Indians; received more elephants, until he had a hundred and fifty altogether; and having once more provisioned his troops, set out again personally with his army: leaving Androsthenes of Cyzicus the duty of taking home the treasure which this king had agreed to hand over to him". Polybius 11.39[non-primary source needed]
According to the Vicarasreni of Merutunga, the Mauryans rose to power in 312 BC.[98]
The Mausoleum at Halicarnassus or Tomb of Mausolus[a] (Ancient Greek: Μαυσωλεῖον τῆς Ἁλικαρνασσοῦ; Turkish: Halikarnas Mozolesi) was a tomb built between 353 and 350 BC at Halicarnassus (present Bodrum, Turkey) for Mausolus, a satrap in the Achaemenid Empire, and his sister-wife Artemisia II of Caria. The structure was designed by the Greek architects Satyros and Pythius of Priene.[1][2]
The Mausoleum was approximately 45 m (148 ft) in height, and the four sides were adorned with sculptural reliefs, each created by one of four Greek sculptors: Leochares, Bryaxis, Scopas of Paros and Timotheus.[3] The finished structure of the mausoleum was considered to be such an aesthetic triumph that Antipater of Sidon identified it as one of his Seven Wonders of the Ancient World. It was destroyed by successive earthquakes from the 12th to the 15th century,[4][5][6] the last surviving of the six destroyed wonders.
The word mausoleum has now come to be used generically for an above-ground tomb.
In the 4th century BC, Halicarnassus was the capital of a small regional kingdom within the Achaemenid Empire on the western coast of Asia Minor. In 377 BC, the nominal ruler of the region, Hecatomnus of Milas, died and left the control of the kingdom to his son, Mausolus. Hecatomnus, a local satrap under the Persians, had taken control of several of the neighboring cities and districts. After Artemisia and Mausolus, he had several other daughters and sons: Ada (adoptive mother of Alexander the Great), Idrieus and Pixodarus. Mausolus extended his territory as far as the southwest coast of Anatolia. Artemisia and Mausolus ruled from Halicarnassus over the surrounding territory for 24 years. Mausolus, although descended from local people, spoke Greek and admired the Greek way of life and government. He founded many cities of Greek design along the coast and encouraged Greek democratic traditions.[citation needed]
Mausolus decided to build a new capital, one as safe from capture as it was magnificent to be seen. He chose the city of Halicarnassus. Artemisia and Mausolus spent huge amounts of tax money to embellish the city. They commissioned statues, temples and buildings of gleaming marble. In 353 BC, Mausolus died, leaving Artemisia to rule alone. As the Persian satrap, and as the Hecatomnid dynast, Mausolus had planned for himself an elaborate tomb. When he died the project was continued by his siblings. The tomb became so famous that Mausolus's name is now the eponym for all stately tombs, in the word mausoleum.
Artemisia lived for only two years after the death of her husband. The urns with their ashes were placed in the yet unfinished tomb. As a form of sacrifice ritual the bodies of a large number of dead animals were placed on the stairs leading to the tomb, and then the stairs were filled with stones and rubble, sealing the access. According to the historian Pliny the Elder, the craftsmen decided to stay and finish the work after the death of their patron "considering that it was at once a memorial of his own fame and of the sculptor's art."
It is likely that Mausolos started to plan the tomb before his death, as part of the building works in Halicarnassus, and that when he died Artemisia continued the building project. Artemisia spared no expense in building the tomb. She sent messengers to Greece to find the most talented artists of the time. These included Scopas, the man who had supervised the rebuilding of the Temple of Artemis at Ephesus. The famous sculptors were (in the Vitruvius order): Leochares, Bryaxis, Scopas, and Timotheus, as well as hundreds of other craftsmen.
The tomb was erected on a hill overlooking the city. The whole structure sat in an enclosed courtyard. At the center of the courtyard was a stone platform on which the tomb sat. A stairway flanked by stone lions led to the top of the platform, which bore along its outer walls many statues of gods and goddesses. At each corner, stone warriors mounted on horseback guarded the tomb. At the center of the platform, the marble tomb rose as a square tapering block to one-third of the Mausoleum's 45 m (148 ft) height. This section was covered with bas-reliefs showing action scenes, including the battle of the Centaurs with the Lapiths, and Greeks in combat with the Amazons, a race of warrior women.
On top of this section of the tomb, thirty-six slim columns, ten per side (with each corner column shared between two sides), rose for another third of the height. Standing between each pair of columns was a statue. Behind the columns was a solid cella-like block that carried the weight of the tomb's massive roof. The roof, which comprised most of the final third of the height, was pyramidal. Perched on the top was a quadriga: four massive horses pulling a chariot in which rode images of Mausolus and Artemisia.
Modern historians have pointed out that two years would not be enough time to decorate and build such an extravagant building. Therefore, it is believed that construction was begun by Mausolus before his death or continued by the next leaders.[7] The Mausoleum of Halicarnassus resembled a temple, and the only way to tell the difference was its slightly higher outer walls. The Mausoleum was in the Greek-dominated area of Halicarnassus, which in 353 BC was controlled by the Achaemenid Empire. According to the Roman architect Vitruvius, it was built by Satyros and Pytheus, who wrote a treatise about it; this treatise is now lost.[7] Pausanias adds that the Romans considered the Mausoleum one of the great wonders of the world and it was for that reason that they called all their magnificent tombs mausolea, after it.[8]
It is unknown exactly when and how the Mausoleum came to ruin: Eustathius, writing in the 12th century in his commentary on the Iliad, says "it was and is a wonder". Because of this, Fergusson concluded that the building was ruined, probably by an earthquake, between this period and 1402, when the Knights of St. John of Jerusalem arrived and recorded that it was in ruins.[8] However, Luttrell notes[9] that at that time the local Greeks and Turks had no name for, or legends to account for, the colossal ruins, suggesting destruction at a much earlier period.
Many of the stones from the ruins were used by the knights to fortify their castle at Bodrum; they also recovered bas-reliefs with which they decorated the new building. Much of the marble was burned into lime. In 1846 Lord Stratford de Redcliffe obtained permission to remove these reliefs from Bodrum.[10]
At the original site, all that remained by the 19th century were the foundations and some broken sculptures. This site was originally indicated by Professor Donaldson and was discovered definitively by Charles Newton, after which an expedition was sent by the British government. The expedition lasted three years and ended in the sending of the remaining marbles.[11] At some point before or after this, grave robbers broke into and destroyed the underground burial chamber, but in 1972 there was still enough of it remaining to determine a layout of the chambers when they were excavated.[7]
This monument was ranked the seventh wonder of the world by the ancients, not because of its size or strength but because of the beauty of its design and how it was decorated with sculpture or ornaments.[12] The mausoleum was Halicarnassus' principal architectural monument, standing in a dominant position on rising ground above the harbor.[13]
Much of the information we have gathered about the Mausoleum and its structure has come from the Roman polymath Pliny the Elder. He wrote some basic facts about the architecture and some dimensions. The building was rectangular, not square, surrounded by a colonnade of thirty-six columns. There was a pyramidal superstructure receding in twenty-four steps to the summit. On top there was a four-horse chariot of marble. The building was accented with both sculptural friezes and free-standing figures. "The free standing figures were arranged on 5 or 6 different levels."[7] It is now possible to show that Pliny's knowledge came from a work written by the architect. It is clear that Pliny did not grasp the design of the mausoleum fully, which creates problems in recreating the structure. He does, however, state many facts which help the reader recreate pieces of the puzzle. Other writings by Pausanias, Strabo, and Vitruvius also help us to gather more information about the Mausoleum.[14]
According to Pliny, the mausoleum was 19 metres (63 ft) north and south, shorter on the other fronts, 125 metres (411 ft) in perimeter, and 25 cubits (11.4 metres or 37.5 feet) in height. It was surrounded by 36 columns. They called this part the pteron. Above the pteron there was a pyramid with 24 steps, equal in height to the lower part. The height of the building was 43 metres (140 ft).[15] The only other author who gives the dimensions of the Mausoleum is Hyginus, a grammarian in the time of Augustus. He describes the monument as built with shining stones, 24 metres (80 ft) high and 410 metres (1,340 ft) in circumference. He likely meant cubits, which would match Pliny's dimensions exactly, but this text is largely considered corrupt and is of little importance.[14] We learn from Vitruvius that Satyrus and Phytheus wrote a description of their work, which Pliny likely read. Pliny likely wrote down these dimensions without thinking about the form of the building.[14]
A number of statues were found slightly larger than life size, either 1.5 metres (5 ft) or 1.60 metres (5.25 ft) in length; these included 20 lion statues. Another important find was the depth of the cutting in the rock on which the building stood. This rock was excavated to a depth of 2.4 to 2.7 metres (8 to 9 ft) over an area 33 by 39 metres (107 by 127 ft).[15]
The sculptures on the north were created by Scopas, those on the east by Bryaxis, on the south by Timotheus and on the west by Leochares.[14]
The Mausoleum was adorned with many great and beautiful sculptures. Some of these sculptures have been lost, or only fragments have been found. Several of the statues' original placements are known only through historical accounts. The great figures of Mausolus and Artemisia stood in the chariot at the top of the pyramid. The detached equestrian groups were placed at the corners of the sub-podium.[14] The semi-colossal female heads may have belonged to the acroteria of the two gables, which may have represented the six Carian towns incorporated into Halicarnassus.[16] Work still continues today as groups continue to excavate and research the mausoleum's art.
The Mausoleum overlooked the city of Halicarnassus for many years. It was untouched when the city fell to Alexander the Great in 334 BC and still undamaged after attacks by pirates in 62 and 58 BC. It stood above the city's ruins for sixteen centuries. Then a series of earthquakes shattered the columns and sent the bronze chariot crashing to the ground. By 1404 AD only the very base of the Mausoleum was still recognizable.
The Knights of St John of Rhodes invaded the region and built Bodrum Castle (Castle of Saint Peter). When they decided to fortify it in 1494, they used the stones of the Mausoleum. This is also about when "imaginative reconstructions" of the Mausoleum began to appear.[17] In 1522 rumours of a Turkish invasion caused the Crusaders to strengthen the castle at Halicarnassus (which was by then known as Bodrum) and much of the remaining portions of the tomb were broken up and used in the castle walls. Sections of polished marble from the tomb can still be seen there today. Suleiman the Magnificent conquered the base of the knights on the island of Rhodes, who then relocated first briefly to Sicily and later permanently to Malta, leaving the Castle and Bodrum to the Ottoman Empire.
During the fortification work, a party of knights entered the base of the monument and discovered the room containing a great coffin. In many histories of the Mausoleum one can find the following story of what happened: the party, deciding it was too late to open it that day, returned the next morning to find the tomb, and any treasure it may have contained, plundered. The bodies of Mausolus and Artemisia were missing too. The small museum building next to the site of the Mausoleum tells the story. Research done by archeologists in the 1960s shows that long before the knights came, grave robbers had dug a tunnel under the grave chamber, stealing its contents. Also the museum states that it is most likely that Mausolus and Artemisia were cremated, so only an urn with their ashes was placed in the grave chamber. This explains why no bodies were found.
Before grinding and burning much of the remaining sculpture of the Mausoleum into lime for plaster, the Knights removed several of the best works and mounted them in the Bodrum castle. There they stayed for three centuries.
In the 19th century a British consul obtained several of the statues from Bodrum Castle; these now reside in the British Museum. In 1852 the British Museum sent the archaeologist Charles Thomas Newton to search for more remains of the Mausoleum. He had a difficult job. He didn't know the exact location of the tomb, and the cost of buying up all the small parcels of land in the area to look for it would have been astronomical. Instead Newton studied the accounts of ancient writers like Pliny to obtain the approximate size and location of the memorial, then bought a plot of land in the most likely location. Digging down, Newton explored the surrounding area through tunnels he dug under the surrounding plots. He was able to locate some walls, a staircase, and finally three of the corners of the foundation. With this knowledge, Newton was able to determine which plots of land he needed to buy.
Newton then excavated the site and found sections of the reliefs that decorated the wall of the building and portions of the stepped roof. Also discovered was a broken stone chariot wheel some 2 m (6 ft 7 in) in diameter, which came from the sculpture on the Mausoleum's roof. Finally, he found the statues of Mausolus and Artemisia that had stood at the pinnacle of the building. In October 1857 Newton carried blocks of marble from this site on HMS Supply and landed them in Malta. These blocks were used for the construction of a new dock in Malta for the Royal Navy. Today this dock is known as Dock No. 1 in Cospicua, but the building blocks are hidden from view, submerged in Dockyard Creek in the Grand Harbour.[18]
From 1966 to 1977, the Mausoleum was thoroughly researched by Prof. Kristian Jeppesen of Aarhus University, Denmark. He has produced a six-volume monograph, The Maussolleion at Halikarnassos.
The beauty of the Mausoleum was not only in the structure itself, but in the decorations and statues that adorned the outside at different levels on the podium and the roof: statues of people, lions, horses, and other animals in varying scales. The four Greek sculptors who carved the statues (Bryaxis, Leochares, Scopas and Timotheus) were each responsible for one side. Because the statues were of people and animals, the Mausoleum holds a special place in history, as it was not dedicated to the gods of Ancient Greece.
Today, the massive castle of the Knights Hospitaller (Knights of St. John) still stands in Bodrum, and the polished stone and marble blocks of the Mausoleum can be spotted built into the walls of the structure. At the site of the Mausoleum, only the foundation remains, and a small museum. Some of the surviving sculptures at the British Museum include fragments of statues and many slabs of the frieze showing the battle between the Greeks and the Amazons. There the images of Mausolus and his queen watch over the few broken remains of the beautiful tomb she built for him.
Reconstruction of the Amazonomachy can be seen in the background (British Museum, Room 21)
Statue usually identified as Artemisia; reconstruction of the Amazonomachy can be seen in the background (British Museum, Room 21)
Slab from the Amazonomachy believed to show Heracles grabbing the hair of the Amazon queen Hippolyta
Modern buildings whose designs were based upon or influenced by interpretations of the design of the Mausoleum of Mausolus include the PNC Tower in Cincinnati; the Civil Courts Building in St. Louis; the National Newark Building in Newark, New Jersey; Grant's Tomb and 26 Broadway in New York City; Los Angeles City Hall; the Shrine of Remembrance in Melbourne; the spire of St. George's Church, Bloomsbury in London; the Indiana War Memorial (and in turn Salesforce Tower) in Indianapolis;[19][20] the House of the Temple in Washington, D.C.; the National Diet in Tokyo; and the Soldiers and Sailors Memorial Hall in Pittsburgh.[21]
What is the language in the Czech Republic?
Czech🚨
Czech (/tʃɛk/; čeština, Czech pronunciation: [ˈtʃɛʃcɪna]), historically also Bohemian[7] (/boʊˈhiːmiən, bə-/;[8] lingua Bohemica in Latin), is a West Slavic language of the Czech–Slovak group.[7] Spoken by over 10 million people, it serves as the official language of the Czech Republic. Czech is closely related to Slovak, to the point of mutual intelligibility to a very high degree.[9] Like other Slavic languages, Czech is a fusional language with a rich system of morphology and relatively flexible word order. Its vocabulary has been extensively influenced by Latin[10] and German.[11]
The Czech–Slovak group developed within West Slavic in the high medieval period, and the standardization of Czech and Slovak within the Czech–Slovak dialect continuum emerged in the early modern period. In the later 18th to mid-19th century, the modern written standard became codified in the context of the Czech National Revival. The main vernacular, known as Common Czech, is based on the vernacular of Prague, but is now spoken throughout most of the Czech Republic. The Moravian dialects spoken in the eastern part of the country are also classified as Czech, although some of their eastern variants are closer to Slovak.
Czech has a moderately-sized phoneme inventory, comprising ten monophthongs, three diphthongs and 25 consonants (divided into "hard", "neutral" and "soft" categories). Words may contain complicated consonant clusters or lack vowels altogether. Czech has a raised alveolar trill, which is not known to occur as a phoneme in any other language, represented by the grapheme ř. Czech uses a simple orthography which phonologists have used as a model.
Czech is a member of the West Slavic sub-branch of the Slavic branch of the Indo-European language family. This branch includes Polish, Kashubian, Upper and Lower Sorbian and Slovak. Slovak is the closest language genetic neighbor of Czech, followed by Polish and Silesian.[12]
The West Slavic languages are spoken in Central Europe. Czech is distinguished from other West Slavic languages by a more-restricted distinction between "hard" and "soft" consonants (see Phonology below).[12]
The term "Old Czech" is applied to the period predating the 16th century, with the earliest records of the high medieval period also classified as "early Old Czech", but the term "Medieval Czech" is also used.
Around the 7th century, the Slavic expansion reached Central Europe, settling on the eastern fringes of the Frankish Empire. The West Slavic polity of Great Moravia formed by the 9th century. The Christianization of Bohemia took place during the 9th and 10th centuries. The diversification of the Czech–Slovak group within West Slavic began around that time, marked among other things by its ephemeral use of the voiced velar fricative consonant (/ɣ/)[13] and consistent stress on the first syllable.[14]
The Bohemian (Czech) language is first recorded in writing in glosses and short notes during the 12th to 13th centuries. Literary works written in Czech appear in the early 14th century and administrative documents first appear towards the late 14th century. The first complete Bible translation also dates to this period.[15] Old Czech texts, including poetry and cookbooks, were produced outside the university as well.[16]
Literary activity becomes widespread in the early 15th century in the context of the Bohemian Reformation. Jan Hus contributed significantly to the standardization of Czech orthography, advocated for widespread literacy among Czech commoners (particularly in religion) and made early efforts to model written Czech after the spoken language.[15]
There was no standardization distinguishing between Czech and Slovak prior to the 15th century.[17] In the 16th century, the division between Czech and Slovak becomes apparent, marking the confessional division between Lutheran Protestants in Slovakia using Czech orthography and Catholics, especially Slovak Jesuits, beginning to use a separate Slovak orthography based on the language of the Trnava region.
The publication of the Kralice Bible between 1579 and 1593 (the first complete Czech translation of the Bible from the original languages) became very important for standardization of the Czech language in the following centuries.
In 1615, the Bohemian diet tried to declare Czech to be the only official language of the kingdom. After the Bohemian Revolt (of predominantly Protestant aristocracy) which was defeated by the Habsburgs in 1620, the Protestant intellectuals had to leave the country. This emigration together with other consequences of the Thirty Years' War had a negative impact on the further use of the Czech language. In 1627, Czech and German became official languages of the Kingdom of Bohemia and in the 18th century German became dominant in Bohemia and Moravia, especially among the upper classes.[18]
The modern standard Czech language originates in standardization efforts of the 18th century.[19] By then the language had developed a literary tradition, and since then it has changed little; journals from that period have no substantial differences from modern standard Czech, and contemporary Czechs can understand them with little difficulty.[20] Changes include the morphological shift of ý to ej and é to í (although é survives for some uses) and the merging of í and the former ej.[21] Sometime before the 18th century, the Czech language abandoned a distinction between phonemic /l/ and /ʎ/ which survives in Slovak.[22]
With the beginning of the national revival of the mid-18th century, Czech historians began to emphasize their people's accomplishments from the 15th through the 17th centuries, rebelling against the Counter-Reformation (the Habsburg re-catholization efforts which had denigrated Czech and other non-Latin languages).[23] Czech philologists studied sixteenth-century texts, advocating the return of the language to high culture.[24] This period is known as the Czech National Revival[25] (or Renaissance).[24]
During the national revival, in 1809 linguist and historian Josef Dobrovský released a German-language grammar of Old Czech entitled Ausführliches Lehrgebäude der böhmischen Sprache (Comprehensive Doctrine of the Bohemian Language). Dobrovský had intended his book to be descriptive, and did not think Czech had a realistic chance of returning as a major language. However, Josef Jungmann and other revivalists used Dobrovský's book to advocate for a Czech linguistic revival.[25] Changes during this time included spelling reform (notably, í in place of the former j, and j in place of g), the use of -t (rather than -ti) to end infinitive verbs and the non-capitalization of nouns (which had been a late borrowing from German).[22] These changes differentiated Czech from Slovak.[26] Modern scholars disagree about whether the conservative revivalists were motivated by nationalism or considered contemporary spoken Czech unsuitable for formal, widespread use.[25]
Adherence to historical patterns was later relaxed and standard Czech adopted a number of features from Common Czech (a widespread, informal register), such as leaving some proper nouns undeclined. This has resulted in a relatively high level of homogeneity among all varieties of the language.[27]
In 2005 and 2007, Czech was spoken by about 10 million residents of the Czech Republic.[18][28] A Eurobarometer survey conducted from January to March 2012 found that the first language of 98 percent of Czech citizens was Czech, the third-highest in the European Union (behind Greece and Hungary).[29]
Czech, the official language of the Czech Republic (a member of the European Union since 2004), is one of the EU's official languages and the 2012 Eurobarometer survey found that Czech was the foreign language most often used in Slovakia.[29] Economist Jonathan van Parys collected data on language knowledge in Europe for the 2012 European Day of Languages. The five countries with the greatest use of Czech were the Czech Republic (98.77 percent), Slovakia (24.86 percent), Portugal (1.93 percent), Poland (0.98 percent) and Germany (0.47 percent).[30]
Czech speakers in Slovakia primarily live in cities. Since it is a recognised minority language in Slovakia, Slovak citizens who speak only Czech may communicate with the government in their language to the extent that Slovak speakers in the Czech Republic may do so.[31]
Immigration of Czechs from Europe to the United States occurred primarily from 1848 to 1914. Czech is a Less Commonly Taught Language in U.S. schools, and is taught at Czech heritage centers. Large communities of Czech Americans live in the states of Texas, Nebraska and Wisconsin.[32] In the 2000 United States Census, Czech was reported as the most-common language spoken at home (besides English) in Valley, Butler and Saunders Counties, Nebraska and Republic County, Kansas. With the exception of Spanish (the non-English language most commonly spoken at home nationwide), Czech was the most-common home language in over a dozen additional counties in Nebraska, Kansas, Texas, North Dakota and Minnesota.[33] As of 2009, 70,500 Americans spoke Czech as their first language (49th place nationwide, behind Turkish and ahead of Swedish).[34]
The modern written standard is directly based on the standardization during the Czech National Revival in the 1830s, significantly influenced by Josef Jungmann's Czech–German dictionary published during 1834–1839. Jungmann used vocabulary of the Bible of Kralice (1579–1613) period and of the language used by his contemporaries. He borrowed words not present in Czech from other Slavic languages or created neologisms.[35]
Standard Czech contains ten basic vowel phonemes, and three more found only in loanwords. They are /a/, /ɛ/, /ɪ/, /o/, and /u/, their long counterparts /aː/, /ɛː/, /iː/, /oː/ and /uː/, and three diphthongs, /ou̯/, /au̯/ and /ɛu̯/. The latter two diphthongs and the long /oː/ are exclusive to loanwords.[36] Vowels are never reduced to schwa sounds when unstressed.[37] Each word usually has primary stress on its first syllable, except for enclitics (minor, monosyllabic, unstressed syllables). In all words of more than two syllables, every odd-numbered syllable receives secondary stress. Stress is unrelated to vowel length, and the possibility of stressed short vowels and unstressed long vowels can be confusing to students whose native language combines the features (such as most varieties of English).[38]
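The stress rules above are regular enough to sketch in code. This is a minimal illustration only, assuming the word has already been split into syllables (the syllabification itself is not attempted, and enclitics are ignored):

```python
def stress_pattern(syllables):
    """Assign Czech word stress to a pre-syllabified word:
    primary stress on the first syllable; in words of more than
    two syllables, every further odd-numbered syllable receives
    secondary stress."""
    result = []
    for i, syl in enumerate(syllables, start=1):
        if i == 1:
            result.append(("primary", syl))
        elif len(syllables) > 2 and i % 2 == 1:
            result.append(("secondary", syl))
        else:
            result.append(("unstressed", syl))
    return result

# A two-syllable word carries only the initial primary stress:
print(stress_pattern(["vo", "da"]))
# A longer word adds secondary stress on the 3rd, 5th, ... syllables:
print(stress_pattern(["ne", "za", "po", "me", "nu", "tel", "ný"]))
```

Note that stress says nothing about length: a stressed syllable may contain a short vowel and an unstressed one a long vowel, as the paragraph above points out.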
Voiced consonants with unvoiced counterparts are unvoiced at the end of a word before a pause, and in consonant clusters voicing assimilation occurs, which matches voicing to the following consonant. The unvoiced counterpart of /ɦ/ is /x/.[39]
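As a rough sketch, the final-devoicing rule can be expressed as a lookup over voiced/voiceless pairs. This toy version is an assumption of this example: it works on ordinary spelling rather than a real phonemic transcription, and handles only a single word-final consonant:

```python
# Voiced obstruents and their voiceless counterparts; the pair
# h -> ch reflects the /ɦ/ -> /x/ pairing mentioned above.
DEVOICE = {"b": "p", "d": "t", "ď": "ť", "g": "k",
           "v": "f", "z": "s", "ž": "š", "h": "ch"}

def devoice_final(word):
    """Devoice a word-final voiced obstruent, as before a pause."""
    if word and word[-1] in DEVOICE:
        return word[:-1] + DEVOICE[word[-1]]
    return word

print(devoice_final("led"))  # led (ice) is pronounced as if spelled "let"
```

A full model would also need the cluster-assimilation rule, where voicing spreads leftward from the following consonant; that is omitted here.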
Czech consonants are categorized as "hard", "neutral" or "soft":
This distinction describes the declension patterns of nouns, which are based on the category of a noun's ending consonant. Hard consonants may not be followed by i or í in writing, or soft ones by y or ý (except in loanwords such as kilogram).[40] Neutral consonants may take either character. Hard consonants are sometimes known as "strong", and soft ones as "weak".[41]
The phoneme represented by the letter ř (capital Ř) is considered unique to Czech.[42] It represents the raised alveolar non-sonorant trill (IPA: [r̝]), a sound somewhere between Czech's r and ž (for example, in řeka, "river"),[42] and is present in the name Dvořák. In unvoiced environments, /r̝/ is realized as its voiceless allophone [r̝̊].[43]
The consonants /r/ and /l/ can be syllabic, acting as syllable nuclei in place of a vowel. Strč prst skrz krk ("Stick [your] finger through [your] throat") is a well-known Czech tongue twister using only syllabic consonants.[44]
Slavic grammar is fusional; its nouns, verbs, and adjectives are inflected by phonological processes to modify their meanings and grammatical functions, and the easily separable affixes characteristic of agglutinative languages are limited.[45]
Slavic inflection is complex and pervasive, inflecting for case, gender and number in nouns and tense, aspect, mood, person and subject number and gender in verbs.[46]
Parts of speech include adjectives, adverbs, numbers, interrogative words, prepositions, conjunctions and interjections.[47] Adverbs are primarily formed from adjectives by taking the final ý or í of the base form and replacing it with e, ě, or o.[48] Negative statements are formed by adding the affix ne- to the verb of a clause, with one exception: je (he, she or it is) becomes není.[49]
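The negation rule is simple enough to state as a one-liner; a minimal sketch of the rule and its single exception:

```python
def negate(verb):
    """Negate a Czech verb: prefix ne-, except je -> není."""
    return "není" if verb == "je" else "ne" + verb

print(negate("je"))     # není
print(negate("vidím"))  # nevidím ("I do not see")
```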
Because Czech uses grammatical case to convey word function in a sentence (instead of relying on word order, as English does), its word order is flexible. As a pro-drop language, in Czech an intransitive sentence can consist of only a verb; information about its subject is encoded in the verb.[50] Enclitics (primarily auxiliary verbs and pronouns) must appear in the second syntactic slot of a sentence, after the first stressed unit. The first slot must contain a subject and object, a main form of a verb, an adverb or a conjunction (except for the light conjunctions a, "and", i, "and even" or ale, "but").[51]
Czech syntax has a subject–verb–object sentence structure. In practice, however, word order is flexible and used for topicalization and focus. Although Czech has a periphrastic passive construction (like English), colloquial word-order changes frequently take the place of the passive voice. For example, to change "Peter killed Paul" to "Paul was killed by Peter" the order of subject and object is inverted: Petr zabil Pavla ("Peter killed Paul") becomes "Paul, Peter killed" (Pavla zabil Petr). Pavla is in the accusative case, the grammatical object (in this case, the victim) of the verb.[52]
A word at the end of a clause is typically emphasized, unless an upward intonation indicates that the sentence is a question:[53]
In portions of Bohemia (including Prague), questions such as Jí pes bagetu? ("Is the dog eating the baguette?") without an interrogative word (such as co, "what" or kdo, "who") are intoned in a slow rise from low to high, quickly dropping to low on the last word or phrase.[54]
In modern Czech syntax, adjectives precede nouns,[55] with few exceptions.[56] Relative clauses are introduced by relativizers such as the adjective který, analogous to the English relative pronouns "which", "that", "who" and "whom". As with other adjectives, it is declined into the appropriate case (see Declension below) to match its associated noun, person and number. Relative clauses follow the noun they modify, and the following is a glossed example:[57]
English: I want to visit the university that John attends.
In Czech, nouns and adjectives are declined into one of seven grammatical cases. Nouns are inflected to indicate their use in a sentence. A nominative–accusative language, Czech marks subject nouns with nominative case and object nouns with accusative case. The genitive case marks possessive nouns and some types of movement. The remaining cases (instrumental, locative, vocative and dative) indicate semantic relationships, such as secondary objects, movement or position (dative case) and accompaniment (instrumental case). An adjective's case agrees with that of the noun it describes. When Czech children learn their language's declension patterns, the cases are referred to by number:[58]
Some Czech grammatical texts order the cases differently, grouping the nominative and accusative (and the dative and locative) together because those declension patterns are often identical; this order accommodates learners with experience in other inflected languages, such as Latin or Russian. This order is nominative, accusative, genitive, dative, locative, instrumental and vocative.[58]
Some prepositions require the nouns they modify to take a particular case. The cases assigned by each preposition are based on the physical (or metaphorical) direction, or location, conveyed by it. For example, od (from, away from) and z (out of, off) assign the genitive case. Other prepositions take one of several cases, with their meaning dependent on the case; na means "onto" or "for" with the accusative case, but "on" with the locative.[59]
Examples of declension patterns (using prepositions) for a few nouns with adjectives follow. Only one plural example is given, since plural declension patterns are similar across genders.
This is a glossed example of a sentence using several cases:
English: I carried the box into the house with my friend.
Czech distinguishes three genders (masculine, feminine, and neuter), and the masculine gender is subdivided into animate and inanimate. With few exceptions, feminine nouns in the nominative case end in -a, -e, or a consonant; neuter nouns in -o, -e, or -í; and masculine nouns in a consonant.[60] Adjectives agree in gender and animacy (for masculine nouns in the accusative or genitive singular and the nominative plural) with the nouns they modify.[61] The main effect of gender in Czech is the difference in noun and adjective declension, but other effects include past-tense verb endings: for example, dělal (he did, or made); dělala (she did, or made) and dělalo (it did, or made).[62]
Nouns are also inflected for number, distinguishing between singular and plural. Typical of a Slavic language, Czech cardinal numbers one through four allow the nouns and adjectives they modify to take any case, but numbers over five place these nouns and adjectives in the genitive case when the entire expression is in nominative or accusative case. The Czech koruna is an example of this feature; it is shown here as the subject of a hypothetical sentence, and declined as genitive for numbers five and up.[63]
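The agreement rule for koruna described above can be sketched as a small function. This covers simple cardinal numbers only; compound numerals and non-nominative contexts are deliberately ignored in this sketch:

```python
def koruna_form(n):
    """Pick the form of koruna after a cardinal number in a
    nominative/accusative expression: 1 -> nominative singular,
    2-4 -> nominative plural, 5 and up -> genitive plural."""
    if n == 1:
        return "koruna"        # nominative singular
    if 2 <= n <= 4:
        return "koruny"        # nominative plural
    return "korun"             # genitive plural

for n in (1, 2, 5, 100):
    print(n, koruna_form(n))
```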
Numerical words decline for case and, for numbers one and two, for gender. Numbers one through five are shown below as examples, and have some of the most exceptions among Czech numbers. The number one has declension patterns identical to those of the demonstrative pronoun, to.[64][65]
Although Czech's grammatical numbers are singular and plural, several residuals of dual forms remain. Some nouns for paired body parts use a historical dual form to express plural in some cases: ruka (hand) – ruce (nominative); noha (leg) – nohama (instrumental), nohou (genitive/locative); oko (eye) – oči; and ucho (ear) – uši. While two of these nouns are neuter in their singular forms, all plural forms are considered feminine; their gender is relevant to their associated adjectives and verbs.[66] These forms are plural semantically, used for any non-singular count, as in mezi čtyřma očima (face to face, lit. among four eyes). The plural number paradigms of these nouns are actually a mixture of historical dual and plural forms. For example, nohy (legs; nominative/accusative) is a standard plural form of this type of noun.[67]
Czech verb conjugation is less complex than noun and adjective declension because it codes for fewer categories. Verbs agree with their subjects in person (first, second or third) and number (singular or plural), and are conjugated for tense (past, present or future). For example, the conjugated verb mluvíme (we speak) is in the present tense and first-person plural; it is distinguished from other conjugations of the infinitive mluvit by its ending, -íme.[68]
Typical of Slavic languages, Czech marks its verbs for one of two grammatical aspects: perfective and imperfective. Most verbs are part of inflected aspect pairs; for example, koupit (perfective) and kupovat (imperfective). Although the verbs' meaning is similar, in perfective verbs the action is completed and in imperfective verbs it is ongoing. This is distinct from past and present tense,[69] and any Czech verb of either aspect can be conjugated into any of its three tenses.[68] Aspect describes the state of the action at the time specified by the tense.[69]
The verbs of most aspect pairs differ in one of two ways: by prefix or by suffix. In prefix pairs, the perfective verb has an added prefix; for example, the imperfective psát (to write, to be writing) compared with the perfective napsat (to write down, to finish writing). The most common prefixes are na-, o-, po-, s-, u-, vy-, z- and za-.[70] In suffix pairs, a different infinitive ending is added to the perfective stem; for example, the perfective verbs koupit (to buy) and prodat (to sell) have the imperfective forms kupovat and prodávat.[71] Imperfective verbs may undergo further morphology to make other imperfective verbs (iterative and frequentative forms), denoting repeated or regular action. The verb jít (to go) has the iterative form chodit (to go repeatedly) and the frequentative form chodívat (to go regularly).[72]
Many verbs have only one aspect, and verbs describing continual states of being (být, to be; chtít, to want; moct, to be able to; ležet, to lie down, to be lying down) have no perfective form. Conversely, verbs describing immediate states of change (for example, otěhotnět, to become pregnant, and nadchnout se, to become enthusiastic) have no imperfective aspect.[73]
Although Czech's use of present and future tense is largely similar to that of English, the language uses past tense to represent the English present perfect and past perfect; ona běžela could mean she ran, she has run or she had run.[74]
In some contexts, Czech's perfective present (which differs from the English present perfect) implies future action; in others, it connotes habitual action.[75] As a result, the language has a proper future tense to minimize ambiguity. The future tense does not involve conjugating the verb describing an action to be undertaken in the future; instead, the future form of být is placed before the infinitive (for example, budu jíst, "I will eat").[76]
This conjugation is not followed by být itself, so future-oriented expressions involving nouns, adjectives, or prepositions (rather than verbs) omit být. "I will be happy" is translated as Budu šťastný (not Budu být šťastný).[76]
The infinitive form ends in -t (archaically, -ti). It is the form found in dictionaries and the form that follows auxiliary verbs (for example, můžu tě slyšet, "I can hear you").[77] Czech verbs have three grammatical moods: indicative, imperative and conditional.[78] The imperative mood adds specific endings for each of three person (or number) categories: -Ø/-i/-ej for second-person singular, -te/-ete/-ejte for second-person plural and -me/-eme/-ejme for first-person plural.[79] The conditional mood is formed with the particle by after the past-tense verb. This mood indicates possible events, expressed in English as "I would" or "I wish".[80]
Most Czech verbs fall into one of five classes, which determine their conjugation patterns. The future tense of být would be classified as a Class I verb because of its endings. Examples of the present tense of each class and some common irregular verbs follow in the tables below:[81]
Czech has one of the most phonemic orthographies of all European languages. Its thirty-one graphemes represent thirty sounds (in most dialects, i and y have the same sound), and it contains only one digraph: ch, which follows h in the alphabet.[82] As a result, some of its characters have been used by phonologists to denote corresponding sounds in other languages. The characters q, w and x appear only in foreign words.[83] The háček (ˇ) is used with certain letters to form new characters: č, š, and ž, as well as ď, ě, ň, ř, and ť (the latter five uncommon outside Czech). The last two letters are sometimes written with a comma above (ʼ, an abbreviated háček) because of their height.[84] The character ó exists only in loanwords and onomatopoeia.[85]
Unlike most European languages, Czech distinguishes vowel length; long vowels are indicated by an acute accent or, occasionally with ů, a ring. Long u is usually written ú at the beginning of a word or morpheme (úroda, neúrodný) and ů elsewhere,[86] except for loanwords (skútr) or onomatopoeia (bú).[87] Long vowels are not considered separate letters in the alphabetical order.[88]
Czech typographical features not associated with phonetics generally resemble those of most Latin European languages, including English. Proper nouns, honorifics, and the first letters of quotations are capitalized, and punctuation is typical of other Latin European languages. Writing of ordinal numerals is similar to most European languages. The Czech language uses a decimal comma instead of a decimal point. When writing a long number, spaces between every three digits (e.g. between hundreds and thousands) may be used for better orientation in handwritten texts, but not in decimal places, as in English. The number 1,234,567.8910 may be written as 1234567,8910 or 1 234 567,8910. Ordinal numbers (1st) use a point as in German (1.). In proper noun phrases (except personal names), only the first word is capitalized (Pražský hrad, Prague Castle).[89][90]
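The conventions above (decimal comma, optional spaces between groups of three digits in the integer part only) can be illustrated with a short formatting sketch; the input is taken as a string to avoid floating-point rounding:

```python
def czech_number(num_str):
    """Format a number string like "1234567.8910" in Czech style:
    spaces between thousands groups in the integer part, a comma
    as the decimal separator, and no grouping of decimal places."""
    integer, _, frac = num_str.partition(".")
    groups = []
    while integer:
        groups.insert(0, integer[-3:])  # peel three digits off the right
        integer = integer[:-3]
    return " ".join(groups) + ("," + frac if frac else "")

print(czech_number("1234567.8910"))  # 1 234 567,8910
print(czech_number("1000"))          # 1 000
```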
The main vernacular of Bohemia is "Common Czech", based on the dialect of the Prague region.
Other Bohemian dialects have become marginalized, while Moravian dialects remain more widespread, with a political movement for Moravian linguistic revival active since the 1990s.
The main Czech vernacular, spoken primarily in and around Prague but also throughout the country, is known as Common Czech (obecná čeština). This is an academic distinction; most Czechs are unaware of the term or associate it with vernacular (or incorrect) Czech.[91] Compared to standard Czech, Common Czech is characterized by simpler inflection patterns and differences in sound distribution.[92]
Common Czech has become ubiquitous in most parts of the Czech Republic since the later 20th century. It is usually defined as an interdialect used in common speech in Bohemia and the western parts of Moravia (by about two thirds of all inhabitants of the Czech Republic). Common Czech is not codified, but some of its elements have been adopted into the written standard.
Since the second half of the 20th century, Common Czech elements have also been spreading to regions previously unaffected, as a consequence of media influence.
Standard Czech is still the norm for politicians, businesspeople and other Czechs in formal situations, but Common Czech is gaining ground in journalism and the mass media.[92]
Common Czech is characterized by quite regular differences from the standard morphology and phonology. These variations are more or less common among all Common Czech speakers:[93]
Example of declension (compared with standard Czech):
mladý člověk – young man/person, mladí lidé – young people, mladý stát – young state, mladá žena – young woman, mladé zvíře – young animal
Apart from the Common Czech vernacular, there remain a variety of other Bohemian dialects, mostly in marginal rural areas. Dialect use began to weaken in the second half of the 20th century, and by the early 1990s regional dialect use was stigmatized, associated with the shrinking lower class and used in literature or other media for comedic effect. Increased travel and media availability to dialect-speaking populations has encouraged them to shift to (or add to their own dialect) standard Czech.[95]
The Czech Statistical Office in 2003 recognized the following Bohemian dialects:[96]
Bohemian dialects use a slightly different set of vowel phonemes from standard Czech. The phoneme /ɛː/ is peripheral and is replaced by /iː/, and a second native diphthong /ej/ occurs, usually in places where standard Czech has /iː/.[97]
The Czech dialects spoken in Moravia and Silesia are known as Moravian (moravština). In the Austro-Hungarian Empire, "Bohemian-Moravian-Slovak" was a language citizens could register as speaking (along with German, Polish and several others).[98] Of the Czech dialects, only Moravian is distinguished in nationwide surveys by the Czech Statistical Office. As of 2011, 62,908 Czech citizens spoke Moravian as their first language and 45,561 were diglossal (speaking Moravian and standard Czech as first languages).[99]
Beginning in the sixteenth century, some varieties of Czech resembled Slovak;[17] the southeastern Moravian dialects, in particular, are sometimes considered dialects of Slovak rather than Czech. These dialects form a continuum between the Czech and Slovak languages,[100] using the same declension patterns for nouns and pronouns and the same verb conjugations as Slovak.[101]
The Czech Statistical Office in 2003 recognized the following Moravian dialects:[96]
In a 1964 textbook on Czech dialectology, Břetislav Koudela used the following sentence to highlight phonetic differences between dialects:[102]
Czech and Slovak have been considered mutually intelligible; speakers of either language can communicate with greater ease than those of any other pair of West Slavic languages. Since the 1993 dissolution of Czechoslovakia, mutual intelligibility has declined for younger speakers, probably because Czech speakers now experience less exposure to Slovak and vice versa.[103]
Phonetically, Czech is distinguished by a glottal stop before initial vowels, and Slovak by its less frequent use of long vowels;[104] however, Slovak has long forms of the consonants r and l when they function as vowels.[105] Phonemic differences between the two languages are generally consistent, typical of two dialects of a language. Grammatically, although Czech (unlike Slovak) has a fully productive vocative case,[104] both languages share a common syntax.[17]
One study showed that Czech and Slovak lexicons differed by 80 percent, but this high percentage was found to stem primarily from differing orthographies and slight inconsistencies in morphological formation;[106] Slovak morphology is more regular (when changing from the nominative to the locative case, Praha becomes Praze in Czech and Prahe in Slovak). The two lexicons are generally considered similar, with most differences found in colloquial vocabulary and some scientific terminology. Slovak has slightly more borrowed words than Czech.[17]
The similarities between Czech and Slovak led to the languages being considered a single language by a group of 19th-century scholars who called themselves "Czechoslavs" (Čechoslované), believing that the peoples were connected in a way which excluded German Bohemians and (to a lesser extent) Hungarians and other Slavs.[107] During the First Czechoslovak Republic (1918–1938), although "Czechoslovak" was designated as the republic's official language, both Czech and Slovak written standards were used. Standard written Slovak was partially modeled on literary Czech, and Czech was preferred for some official functions in the Slovak half of the republic. Czech influence on Slovak was protested by Slovak scholars, and when Slovakia broke away from Czechoslovakia in 1939 as the Slovak State (which then aligned with Nazi Germany in World War II), literary Slovak was deliberately distanced from Czech. When the Axis powers lost the war and Czechoslovakia reformed, Slovak developed somewhat on its own (with Czech influence); during the Prague Spring of 1968, Slovak gained independence from (and equality with) Czech,[17] due to the transformation of Czechoslovakia from a unitary state to a federation. Since the dissolution of Czechoslovakia in 1993, "Czechoslovak" has referred to improvised pidgins of the languages which have arisen from the decrease in mutual intelligibility.[108]
Czech vocabulary derives primarily from Slavic, Baltic and other Indo-European roots. Although most verbs have Balto-Slavic origins, pronouns, prepositions and some verbs have wider, Indo-European roots.[109] Some loanwords have been restructured by folk etymology to resemble native Czech words (hřbitov, "graveyard", and listina, "list").[110]
Most Czech loanwords originated in one of two time periods. Earlier loanwords, primarily from German,[111] Greek and Latin,[112] arrived before the Czech National Revival. More recent loanwords derive primarily from English and French,[111] and also from Hebrew, Arabic and Persian. Many Russian loanwords, principally animal names and naval terms, also exist in Czech.[113]
Although older German loanwords were colloquial, recent borrowings from other languages are associated with high culture.[111] During the nineteenth century, words with Greek and Latin roots were rejected in favor of those based on older Czech words and common Slavic roots; "music" is muzyka in Polish and музыка (muzyka) in Russian, but in Czech it is hudba.[112] Some Czech words have been borrowed as loanwords into English and other languages; for example, robot (from robota, "labor")[114] and polka (from polka, "Polish woman", or from půlka, "half").[115]
According to Article 1 of the United Nations Universal Declaration of Human Rights:
Czech: Všichni lidé se rodí svobodní a sobě rovní co do důstojnosti a práv. Jsou nadáni rozumem a svědomím a mají spolu jednat v duchu bratrství.[116]
English: "All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood."[117]
Who wrote erewhon and what kind of novel is it?
Samuel Butler🚨Erewhon: or, Over the Range (/ɛ.rɛ.hwɒn/)[1] is a novel by Samuel Butler which was first published anonymously in 1872.[2] The title is also the name of a country, supposedly discovered by the protagonist. The novel does not reveal where Erewhon is, but it is clearly a fictional country. Butler meant the title to be understood as the word "nowhere" backwards, even though the letters "h" and "w" are transposed. The book is a satire on Victorian society.[3]
The first few chapters of the novel, dealing with the discovery of Erewhon, are in fact based on Butler's own experiences in New Zealand, where, as a young man, he worked as a sheep farmer on Mesopotamia Station for about four years (1860–64) and explored parts of the interior of the South Island, experiences he recounted in A First Year in Canterbury Settlement (1863).
The greater part of the book consists of a description of Erewhon. The nature of this nation is intended to be ambiguous. At first glance, Erewhon appears to be a Utopia, yet it soon becomes clear that this is far from the case. Yet for all the failings of Erewhon, it is also clearly not a dystopia, such as that depicted in George Orwell's Nineteen Eighty-Four. As a satirical utopia, Erewhon has sometimes been compared to Gulliver's Travels (1726), a classic novel by Jonathan Swift; the image of Utopia in this latter case also bears strong parallels with the self-view of the British Empire at the time. It can also be compared to the William Morris novel, News from Nowhere.
Erewhon satirises various aspects of Victorian society, including criminal punishment, religion and anthropocentrism. For example, according to Erewhonian law, offenders are treated as if they were ill, whereas ill people are looked upon as criminals. Another feature of Erewhon is the absence of machines; this is due to the widely shared perception by the Erewhonians that they are potentially dangerous. This last aspect of Erewhon reveals the influence of Charles Darwin's evolution theory; Butler had read On the Origin of Species soon after it was published in 1859.
Butler developed the three chapters of Erewhon that make up "The Book of the Machines" from a number of articles that he had contributed to The Press, which had just begun publication in Christchurch, New Zealand, beginning with "Darwin among the Machines" (1863). Butler was the first to write about the possibility that machines might develop consciousness by Darwinian Selection.[4] Many dismissed this as a joke; but, in his preface to the second edition, Butler wrote, "I regret that reviewers have in some cases been inclined to treat the chapters on Machines as an attempt to reduce Mr. Darwin's theory to an absurdity. Nothing could be further from my intention, and few things would be more distasteful to me than any attempt to laugh at Mr. Darwin."
After its first release, this book sold far better than any of Butler's other works,[clarification needed] perhaps because the British public assumed that the anonymous author was some better-known figure[citation needed] (the favourite being Lord Lytton, who had published The Coming Race two years previously). In a 1945 broadcast, George Orwell praised the book and said that when Butler wrote Erewhon it needed "imagination of a very high order to see that machinery could be dangerous as well as useful." He recommended the novel, though not its sequel, Erewhon Revisited.[5]
The French philosopher Gilles Deleuze used ideas from Butler's book at various points in the development of his philosophy of difference. In Difference and Repetition (1968), Deleuze refers to what he calls "Ideas" as "Erewhon." "Ideas are not concepts," he explains, but rather "a form of eternally positive differential multiplicity, distinguished from the identity of concepts."[6] "Erewhon" refers to the "nomadic distributions" that pertain to simulacra, which "are not universals like the categories, nor are they the hic et nunc or nowhere, the diversity to which categories apply in representation."[7] "Erewhon," in this reading, is "not only a disguised no-where but a rearranged now-here."[8]
In his collaboration with Félix Guattari, Anti-Oedipus (1972), Deleuze draws on Butler's "The Book of the Machines" to "go beyond" the "usual polemic between vitalism and mechanism" as it relates to their concept of "desiring-machines":[9]
For one thing, Butler is not content to say that machines extend the organism, but asserts that they are really limbs and organs lying on the body without organs of a society, which men will appropriate according to their power and their wealth, and whose poverty deprives them as if they were mutilated organisms. For another, he is not content to say that organisms are machines, but asserts that they contain such an abundance of parts that they must be compared to very different parts of distinct machines, each relating to the others, engendered in combination with the others ... He shatters the vitalist argument by calling in question the specific or personal unity of the organism, and the mechanist argument even more decisively, by calling in question the structural unity of the machine.
In 1994, a group of ex-Yugoslavian writers in Amsterdam, who had established the PEN centre of Yugoslav Writers in Exile, published a single issue of a literary journal Erewhon.[10]
New Zealand sound art organisation, the Audio Foundation, published in 2012 an anthology edited by Bruce Russell named Erewhon Calling after Butler's book.[11]
In 2014, New Zealand artist Gavin Hipkins released his first feature film, titled Erewhon and based on Butler's book. It premiered at the New Zealand International Film Festival and the Edinburgh Art Festival.[12]
In "Smile", the second episode of the 2017 season of Doctor Who, the Doctor and Bill explore a spaceship named Erehwon. Despite the slightly different spelling, the episode writer Frank Cottrell-Boyce confirmed[13] that this was a reference to Butler's novel.
What town is the tv show doc martin filmed in?
Port Isaac, Cornwall, England🚨Doc Martin is a British television medical comedy drama series starring Martin Clunes in the title role. It was created by Dominic Minghella[1] after the character of Dr Martin Bamford in the 2000 comedy film Saving Grace.[2] The show is set in the fictional seaside village of Portwenn and filmed on location in the village of Port Isaac, Cornwall, England, with most interior scenes shot in a converted local barn.
Seven series aired between 2004 and 2015, and a feature-length special aired on Christmas Day 2006. The eighth and most recent series began airing on ITV on 20 September 2017, and will stream in the United States and Canada on Acorn TV. An American TV remake of the series is also being planned.[3] While it was initially reported that the series would end after Series 9 in 2018, Martin Clunes clarified that it had only been commissioned as far as the next year, thereby not ruling out future plans by the broadcaster.[4]
Dr Martin Ellingham (Martin Clunes), a brilliant and successful vascular surgeon at Imperial College London, develops haemophobia (a fear of blood), forcing him to stop practising surgery. He obtains a post as the sole general practitioner (GP) in the sleepy Cornish village of Portwenn, where he had spent childhood holidays with his Aunt Joan (Stephanie Cole), who owns a local farm. Upon arriving in Portwenn, where, to his frustration, the locals address him as "Doc Martin", he finds the surgery (medical clinic) in chaos and inherits an incompetent receptionist, Elaine Denham (Lucy Punch). In Series 2–4 she is replaced by Pauline Lamb (Katherine Parkinson), a new receptionist and later also a phlebotomist. In Series 5, Morwenna Newcross (Jessica Ransom) takes up the post.
The show revolves around Ellingham's interactions with the local Cornish villagers. Despite his medical excellence, Ellingham is grouchy, pugnacious, and lacks social skills. His direct, emotionless manner offends many of the villagers, made worse by his invariably unpleasant responses to their ignorant, often foolish, comments. They perceive him to be hot-tempered and lacking in a bedside manner, whereas he feels he is performing his duties in a professional and by-the-book manner, not wasting time chatting. Ellingham is very deadpan and dresses formally in a business suit and tie, regardless of the weather or the occasion, and he never takes off his jacket, even when delivering babies. He does not smoke and has no hesitation in pointing out the risks of unhealthy behaviours, both in private and in public gatherings.
The villagers eventually discover his fear of blood, and the frequent and debilitating bouts of nausea and vomiting it causes. In spite of this handicap, Ellingham proves to be an expert diagnostician and responds effectively to various emergencies in his medical practice; thus, he gradually gains grudging respect from his neighbours. Ellingham's aunt, Joan Norton (Stephanie Cole), provides emotional support in the face of the controversy his impatient manner causes among the villagers. When she dies after a heart attack, her sister Ruth (Eileen Atkins), a retired psychiatrist, comes to Portwenn to take care of her affairs, and eventually decides to use the village as a permanent retreat, offering Martin the support Joan had provided.
Ellingham finds it difficult to express his developing romantic feelings towards primary school teacher Louisa Glasson (Caroline Catz). He often spoils rare tender moments with, for example, a comment about an unpleasant medical condition or by requesting a stool sample. Martin eventually proposes to Louisa, but on the day of their wedding they suddenly break their engagement. Louisa leaves for a job in London, returning after six months visibly pregnant with Martin's child. When the child is born, the couple renew their relationship. Following much indecision, Martin resolves to remain in Portwenn and marries Louisa, but continued arguments relating to his insensitive nature lead to their becoming estranged again. In Series 7, Louisa lives in Martin's surgery with their baby James Henry, while Martin boards in the village and sees a therapist for his inability to form and maintain relationships.
Martin Clunes originally played a character called "Dr Martin Bamford" in the 2000 film Saving Grace and its two made-for-TV prequels, Doc Martin and Doc Martin and the Legend of the Cloutie, which were made by British Sky Broadcasting (BSkyB). The prequels show Bamford as a successful obstetrician, rather than a surgeon, who finds out that his wife has been carrying on extramarital affairs behind his back. After confronting her with his discovery, he escapes London and heads for Port Isaac, a small coastal town in Cornwall which he remembers fondly from his youth. Shortly after he arrives, he is involved in the mystery of the "Jellymaker" and, following the departure of the village's resident GP, decides to stay and fill the vacancy. In these three films the village is not known as Portwenn.
The Martin Bamford character is friendly and laid-back, seeming to enjoy his retreat from the career pressures and conflicts he left behind in London. He drinks and smokes carelessly, including a mild illegal drug, and has no problem getting his hands and clothes dirty by temporarily working as a lobster and crab fisherman aboard a local boat.
The original deal had been to produce two television films per year for three years, but Sky Pictures folded after the first two episodes were made, so Clunes' company tried to sell the franchise to ITV. The new network felt that the doctor character should be portrayed as a "townie", a fish out of water who is uncomfortable in the countryside. They also wanted something darker, so Clunes suggested that the doctor be curmudgeonly, socially inept, and formal. The new doctor's surname was changed to Ellingham, an anagram of the last name of the new writer, Dominic Minghella, who was brought in to rework the doctor's background and create a new cast of supporting characters.
Along with Clunes, the only actors to appear in both versions of Doc Martin are Tristan Sturrock and Tony Maudsley.
Eight series totaling 62 episodes aired on ITV in the UK between 2004 and 2017. Episodes are just under 50 minutes long, except for the 2006 TV film which is 92 minutes. In the US, American Public Television provided the 2006 TV film as a two-part episode, with the second episode airing the week after the first.
In the UK, Doc Martin has been a ratings success for ITV with the third series achieving ITV's best midweek drama performance in the 9pm Monday slot since December 2004.[5] The final episode of the third series was watched by 10.37 million viewers, which is the programme's highest-ever viewing figure for a single episode.[6]
In 2009, Doc Martin was moved to a 9pm Sunday time slot for the broadcast of Series 4, meaning that it followed on from ITV's The X Factor. Series 4 ratings were adversely affected by STV's decision not to screen the majority of ITV drama productions in Scotland; the final episode of Series 4 drew 10.29 million viewers.[7] STV later reversed its decision not to screen ITV drama in Scotland, and Series 4 of Doc Martin was broadcast there on Sunday afternoons in August 2011.
In 2004, Doc Martin won the British Comedy Award for Best TV Comedy Drama, having also been nominated as Best New TV Comedy. In the same year, Martin Clunes won the Best TV Comedy Actor award, primarily for his portrayal of Doc Martin.
Notro Films produced a Spanish version under the title Doctor Mateo for Antena 3 Televisión. It aired in 2009 and was shot in Lastres, Asturias, which stood in for the fictional village of San Martín del Sella.[citation needed] French television producers Ego Productions, in cooperation with TF1, have produced a French version of the series starring Thierry Lhermitte as Dr Martin Le Foll, with the series based in the fictional Breton town of Port-Garrec.[8][9]
In Germany, Doktor Martin, an adaptation of the original series, airs on ZDF with Axel Milberg as Doktor Martin Helling,[10] a surgeon from Berlin. The counterpart of Portwenn was the real village of Neuharlingersiel in East Frisia. In Greece, Kliniki Periptosi, an adaptation of the original series, was aired in November 2011 on Mega Channel with Yannis Bezos as Markos Staikos, a surgeon from New York.[citation needed]
In the Netherlands, Dokter Tinus, based on the original series, began airing in late August 2012 on SBS6, with the main role played by actor Thom Hoffman. The series was shot in Woudrichem.[citation needed] A Russian version is mentioned in the Series 5 DVD bonus material.[full citation needed] In 2014, Czech Television began filming its own TV series starring Miroslav Donutil, heavily inspired by the original British series.[11] The series began airing on 4 September 2015. The Czech version is set in the Beskydy mountains, a picturesque area in the east of the Czech Republic which, like Portwenn, is a long way from the capital (Prague) and dependent on the tourist industry.[12]
As of January?2016,[update] an American remake of the show is being planned, led by Marta Kauffman, co-creator of the successful TV show Friends.[3]
Series 1, 2 and 3 and "On the Edge" were released separately in Region 1 and 2 and in the "complete Series 1 to 3" box set. Series 3 was released on 2 February 2010 and Series 4 was released in Region 1 and 2 on 6 July 2010. Series 5 was released in Region 1 on 5 June 2012 and Region 2 on 5 March 2012. A complete boxset of Series 1-5 is also available in Region 2. Series 6 of Doc Martin was released in Region 1 in December 2013 and in the UK (Region 2) on 24 March 2014. Series 7 of Doc Martin was released on DVD/Blu-ray in Region 1 on December 8, 2015 and in the UK (Region 2) on 16 November 2015.
In Region 4, Series 1, 2, 4, and "On the Edge" were released separately and in a nine-disc boxset entitled "Doc Martin: Comedy Cure", as well as an earlier seven-disc boxset not including Series 4. The two Sky Pictures telefilms were individually released in Region 4 (as 'Doc Martin: volume 1' and 'Doc Martin: volume 2, the Legend of the Cloutie') on the Magna Pacific label, but are now out-of-print. Series 1-8 are streaming on Acorn TV in the U.S. and Canada. The show is available on Netflix. Series 1-6 are currently available on Amazon Prime Video.
In 2013, it was announced that two novels were to be released to coincide with the sixth series.[13]
What's the population in new orleans louisiana?
343,829 as of the 2010 U.S. Census🚨
When did the original daylight savings time start?
April 30, 1916🚨Daylight saving time (abbreviated DST), sometimes referred to as daylight savings time in U.S., Canadian, and Australian speech,[1][2] and known as British Summer Time (BST) in the UK and just summer time in some countries, is the practice of advancing clocks during summer months so that evening daylight lasts longer, while sacrificing normal sunrise times. Typically, regions that use daylight saving time adjust clocks forward one hour close to the start of spring and adjust them backward in the autumn to standard time.[3]
George Hudson proposed the idea of daylight saving in 1895.[4] The German Empire and Austria-Hungary organized the first nationwide implementation, starting on April 30, 1916. Many countries have used it at various times since then, particularly since the energy crisis of the 1970s.
DST is generally not observed near the equator, where sunrise times do not vary enough to justify it. Some countries observe it only in some regions; for example, southern Brazil observes it while equatorial Brazil does not.[5] Only a minority of the world's population uses DST, because Asia and Africa generally do not observe it.
DST clock shifts sometimes complicate timekeeping and can disrupt travel, billing, record keeping, medical devices, heavy equipment,[6] and sleep patterns.[7] Computer software often adjusts clocks automatically, but policy changes by various jurisdictions of DST dates and timings may be confusing.[8]
Industrialized societies generally follow a clock-based schedule for daily activities that do not change throughout the course of the year. The time of day that individuals begin and end work or school, and the coordination of mass transit, for example, usually remain constant year-round. In contrast, an agrarian society's daily routines for work and personal conduct are more likely governed by the length of daylight hours[9][10] and by solar time, which change seasonally because of the Earth's axial tilt. North and south of the tropics daylight lasts longer in summer and shorter in winter, with the effect becoming greater the further one moves away from the tropics.
By synchronously resetting all clocks in a region to one hour ahead of standard time, individuals who follow such a year-round schedule will wake an hour earlier than they would have otherwise; they will begin and complete daily work routines an hour earlier, and they will have available to them an extra hour of daylight after their workday activities.[11][12] However, they will have one less hour of daylight at the start of each day, making the policy less practical during winter.[13][14]
While the times of sunrise and sunset change at roughly equal rates as the seasons change, proponents of Daylight Saving Time argue that most people prefer a greater increase in daylight hours after the typical "nine to five" workday.[15][16] Supporters have also argued that DST decreases energy consumption by reducing the need for lighting and heating, but the actual effect on overall energy use is heavily disputed.
The manipulation of time at higher latitudes (for example Iceland, Nunavut or Alaska) has little impact on daily life, because the length of day and night changes more extremely throughout the seasons (in comparison to other latitudes), and thus sunrise and sunset times are significantly out of phase with standard working hours regardless of manipulations of the clock.[17] DST is also of little use for locations near the equator, because these regions see only a small variation in daylight in the course of the year.[18] The effect also varies according to how far east or west the location is within its time zone, with locations farther east inside the time zone benefiting more from DST than locations farther west in the same time zone.[19]
Although they did not fix their schedules to the clock in the modern sense, ancient civilizations adjusted daily schedules to the sun more flexibly than DST does, often dividing daylight into twelve hours regardless of day length, so that each daylight hour became progressively longer during spring and shorter during autumn.[20] The Romans, for instance, kept time with water clocks that had different scales for different months of the year: at Rome's latitude the third hour from sunrise, hora tertia, started by modern standards at 09:02 solar time and lasted 44 minutes at the winter solstice, but at the summer solstice it started at 06:58 and lasted 75 minutes.[21] After ancient times, equal-length civil hours eventually[when?] supplanted unequal ones, so civil time no longer varies by season. Unequal hours are still used in a few traditional settings, such as some monasteries of Mount Athos[22] and all Jewish ceremonies.[23]
During his time as an American envoy to France (1776–1785), Benjamin Franklin, publisher of the old English proverb "Early to bed, and early to rise, makes a man healthy, wealthy and wise",[24][25] anonymously published a letter suggesting that Parisians economize on candles by rising earlier to use morning sunlight.[26] This 1784 satire proposed taxing window shutters, rationing candles, and waking the public by ringing church bells and firing cannons at sunrise.[27] Despite common misconception, Franklin did not actually propose DST; 18th-century Europe did not even keep precise schedules. However, this soon changed as rail transport and communication networks came to require a standardization of time unknown in Franklin's day.[28]
The New Zealand entomologist George Hudson first proposed modern DST. Hudson's shift-work job gave him leisure time to collect insects and led him to value after-hours daylight.[4] In 1895 he presented a paper to the Wellington Philosophical Society proposing a two-hour daylight-saving shift,[11] and after considerable interest was expressed in Christchurch, he followed up with an 1898 paper.[29] Many publications credit the DST proposal to the prominent English builder and outdoorsman William Willett,[30] who independently conceived DST in 1905 during a pre-breakfast ride, when he observed with dismay how many Londoners slept through a large part of a summer day.[16] An avid golfer, Willett also disliked cutting short his round at dusk.[31] His solution was to advance the clock during the summer months, a proposal he published two years later.[32] The Liberal Party member of parliament (MP) Robert Pearce took up Willett's proposal, introducing the first Daylight Saving Bill to the House of Commons on February 12, 1908.[33] A select committee was set up to examine the issue, but Pearce's bill did not become law, and several other bills failed in the following years. Willett lobbied for the proposal in the UK until his death in 1915.
William Sword Frost, mayor of Orillia, Ontario, introduced daylight saving time in the municipality during his tenure from 1911 to 1912.[34]
Starting on April 30, 1916, the German Empire and its World War I ally Austria-Hungary introduced DST (German: Sommerzeit) as a way to conserve coal during wartime. Britain, most of its allies, and many European neutrals soon followed suit. Russia and a few other countries waited until the next year, and the United States adopted daylight saving in 1918.
Broadly speaking, most jurisdictions abandoned daylight saving time in the years after the war ended in 1918 (with some notable exceptions including Canada, the UK, France, and Ireland). However, many different places adopted it for periods of time during the following decades and it became common during World War II. It became widely adopted, particularly in North America and Europe, starting in the 1970s as a result of the 1970s energy crisis.
Since then, the world has seen many enactments, adjustments, and repeals.[35] For specific details, see Daylight saving time by country.
In the United States, a one-hour time shift occurs at 02:00 local time. In spring the clock jumps forward from the last instant of 01:59 standard time to 03:00 DST and that day has 23 hours. In autumn the clock jumps backward from the last instant of 01:59 DST to 01:00 standard time, repeating that hour, and that day has 25 hours.[36] A digital display of local time does not read 02:00 exactly at the shift to summer time, but instead jumps from 01:59:59.9 forward to 03:00:00.0.
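The skipped hour can be demonstrated with the Python standard library's zoneinfo module; a minimal sketch, assuming the IANA zone America/New_York and its 2021 spring transition (March 14):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")

# The last minute before the 2021 spring-forward transition: 01:59 EST.
before = datetime(2021, 3, 14, 1, 59, tzinfo=tz)

# Two minutes of real (UTC) time later, the wall clock reads 03:01 EDT;
# the entire 02:00-02:59 hour was skipped.
after = (before.astimezone(ZoneInfo("UTC"))
         + timedelta(minutes=2)).astimezone(tz)

print(before.strftime("%H:%M %Z"))  # 01:59 EST
print(after.strftime("%H:%M %Z"))   # 03:01 EDT
```

The arithmetic is done in UTC because adding a timedelta to an aware datetime in Python operates on the wall clock, which would step into the nonexistent hour.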
Clock shifts are usually scheduled near a weekend midnight to lessen disruption to weekday schedules. A one-hour shift is customary.[37] Twenty-minute and two-hour shifts have been used in the past.
Coordination strategies differ when adjacent time zones shift clocks. The European Union shifts all zones at the same instant, at 01:00 Greenwich Mean Time[38] (02:00 CET or 03:00 EET). As a result, Eastern European Time is always one hour ahead of Central European Time, at the cost of the shift happening at different local times.[39] In contrast, most of North America shifts at 02:00 local time, so its zones do not shift at the same instant: for one hour in the autumn, Mountain Time is level with Pacific Time instead of one hour ahead, and for one hour in the spring it is two hours ahead of Pacific Time instead of one. In the past, Australian districts went even further and did not always agree on start and end dates; for example, in 2008 most DST-observing areas shifted clocks forward on October 5 but Western Australia shifted on October 26.[40] In some cases only part of a country shifts; for example, in the U.S., Hawaii and most of Arizona do not observe DST.[41][42]
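The EU's simultaneous-shift rule can be checked with a small sketch (again assuming Python 3.9+ with `zoneinfo` and system tz data): because Paris and Athens change at the same UTC instant, the gap between them never varies, even in the middle of a transition night (2021-03-28 here):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Every EU zone shifts at 01:00 GMT, so EET stays exactly one hour
# ahead of CET across the 2021-03-28 change.
cet, eet = ZoneInfo("Europe/Paris"), ZoneInfo("Europe/Athens")

gaps = []
for t in (datetime(2021, 3, 28, 0, 30, tzinfo=timezone.utc),   # before the shift
          datetime(2021, 3, 28, 1, 30, tzinfo=timezone.utc)):  # after the shift
    gap = t.astimezone(eet).utcoffset() - t.astimezone(cet).utcoffset()
    gaps.append(gap)
    print(t.isoformat(), gap)  # the gap is 1:00:00 both times
```

Running the same comparison for two North American zones around their 02:00-local shifts would instead show the gap temporarily collapsing or doubling, as described above.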
Start and end dates vary with location and year. Since 1996, European Summer Time has been observed from the last Sunday in March to the last Sunday in October; previously the rules were not uniform across the European Union.[39] Starting in 2007, most of the United States and Canada observe DST from the second Sunday in March to the first Sunday in November, almost two-thirds of the year.[43] The 2007 U.S. change was part of the Energy Policy Act of 2005; previously, from 1987 through 2006, the start and end dates were the first Sunday in April and the last Sunday in October, and Congress retains the right to go back to the previous dates now that an energy-consumption study has been done.[44] Proponents of retaining November as the month for ending DST point to Halloween as a reason to delay the change, to provide extra daylight on October 31.
Beginning and ending dates are roughly the reverse in the southern hemisphere. For example, mainland Chile observed DST from the second Saturday in October to the second Saturday in March, with transitions at 24:00 local time.[45]
As a result, the time difference between two regions can vary through the year because of DST. Central European Time is usually six hours ahead of North American Eastern Time, except for a few weeks in March and in October/November. Likewise, the United Kingdom and mainland Chile can be five hours apart during the northern summer, three hours apart during the southern summer, and four hours apart for a few weeks per year because of the mismatch in changeover dates.
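This varying offset is easy to reproduce programmatically. The sketch below (Python 3.9+ with `zoneinfo`; the helper `hours_ahead` is a hypothetical name, not a library function) compares Central European Time and U.S. Eastern Time on a January date, when both are on standard time, and on a March date after the U.S. has switched but before Europe has:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def hours_ahead(when_utc, a="Europe/Berlin", b="America/New_York"):
    """Hypothetical helper: hours that zone a is ahead of zone b at a UTC instant."""
    da = when_utc.astimezone(ZoneInfo(a)).utcoffset()
    db = when_utc.astimezone(ZoneInfo(b)).utcoffset()
    return (da - db).total_seconds() / 3600

# Both on standard time (January 2021): the usual six hours.
print(hours_ahead(datetime(2021, 1, 15, 12, tzinfo=timezone.utc)))  # 6.0
# U.S. already on DST, Europe not yet (March 20, 2021): only five hours.
print(hours_ahead(datetime(2021, 3, 20, 12, tzinfo=timezone.utc)))  # 5.0
```

The same helper applied to London and Santiago would show the five-, three-, and four-hour gaps mentioned above, depending on the date chosen.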
Daylight saving has caused controversy since it began.[3] Winston Churchill argued that it enlarges "the opportunities for the pursuit of health and happiness among the millions of people who live in this country"[46] and pundits have dubbed it "Daylight Slaving Time".[47] Historically, retailing, sports, and tourism interests have favored daylight saving, while agricultural and evening entertainment interests have opposed it, and its initial adoption had been prompted by energy crises and war.[48]
The fate of Willett's 1907 proposal illustrates several political issues involved. The proposal attracted many supporters, including Arthur Balfour, Churchill, David Lloyd George, Ramsay MacDonald, Edward VII (who used half-hour DST at Sandringham or "Sandringham time"), the managing director of Harrods, and the manager of the National Bank. However, the opposition was stronger: it included Prime Minister H. H. Asquith, Christie (the Astronomer Royal), George Darwin, Napier Shaw (director of the Meteorological Office), many agricultural organizations, and theatre owners. After many hearings the proposal was narrowly defeated in a parliamentary committee vote in 1909. Willett's allies introduced similar bills every year from 1911 through 1914, to no avail.[49] The U.S. was even more skeptical: Andrew Peters introduced a DST bill to the United States House of Representatives in May 1909, but it soon died in committee.[50]
After Germany led the way with starting DST (German: Sommerzeit) during World War I on April 30, 1916 together with its allies to alleviate hardships from wartime coal shortages and air raid blackouts, the political equation changed in other countries; the United Kingdom used DST first on May 21, 1916.[51] U.S. retailing and manufacturing interests led by Pittsburgh industrialist Robert Garland soon began lobbying for DST, but were opposed by railroads. The U.S.'s 1917 entry to the war overcame objections, and DST was established in 1918.[52]
The war's end swung the pendulum back. Farmers continued to dislike DST, and many countries repealed it after the war. Britain was an exception: it retained DST nationwide but over the years adjusted transition dates for several reasons, including special rules during the 1920s and 1930s to avoid clock shifts on Easter mornings. Now under a European Community directive summer time begins annually on the last Sunday in March, which may be Easter Sunday (as in 2016).[39] The U.S. was more typical: Congress repealed DST after 1919. President Woodrow Wilson, like Willett an avid golfer, vetoed the repeal twice but his second veto was overridden.[53] Only a few U.S. cities retained DST locally thereafter,[54] including New York so that its financial exchanges could maintain an hour of arbitrage trading with London, and Chicago and Cleveland to keep pace with New York.[55] Wilson's successor Warren G. Harding opposed DST as a "deception". Reasoning that people should instead get up and go to work earlier in the summer, he ordered District of Columbia federal employees to start work at 08:00 rather than 09:00 during summer 1922. Some businesses followed suit though many others did not; the experiment was not repeated.[12]
Since Germany's adoption in 1916, the world has seen many enactments, adjustments, and repeals of DST, with similar politics involved.[56]
The history of time in the United States includes DST during both world wars, but no standardization of peacetime DST until 1966.[57][58] In May 1965, for two weeks, St. Paul, Minnesota and Minneapolis, Minnesota were on different times, when the capital city decided to join most of the nation by starting Daylight Saving Time while Minneapolis opted to follow the later date set by state law.[59] In the mid-1980s, Clorox (parent of Kingsford Charcoal) and 7-Eleven provided the primary funding for the Daylight Saving Time Coalition behind the 1987 extension to U.S. DST, and both Idaho senators voted for it based on the premise that during DST fast-food restaurants sell more French fries, which are made from Idaho potatoes.[60]
In 1992, after a three-year trial of daylight saving in Queensland, Australia, a referendum on daylight saving was held and defeated with a 54.5% 'no' vote, with regional and rural areas strongly opposed while those in the metropolitan south-east were in favor.[61] In 2005, the Sporting Goods Manufacturers Association and the National Association of Convenience Stores successfully lobbied for the 2007 extension to U.S. DST.[62] In December 2008, the Daylight Saving for South East Queensland (DS4SEQ) political party was officially registered in Queensland, advocating the implementation of a dual-time zone arrangement for daylight saving in South East Queensland while the rest of the state maintains standard time.[63] DS4SEQ contested the March 2009 Queensland state election with 32 candidates and received one percent of the statewide primary vote, equating to around 2.5% across the 32 electorates contested.[64] After a three-year trial, more than 55% of Western Australians voted against DST in 2009, with rural areas strongly opposed.[65] On April 14, 2010, after being approached by the DS4SEQ political party, Queensland Independent member Peter Wellington introduced the Daylight Saving for South East Queensland Referendum Bill 2010 into the Queensland parliament, calling for a referendum at the next state election on the introduction of daylight saving into South East Queensland under a dual-time zone arrangement.[66] The Bill was defeated in the Queensland parliament on June 15, 2011.[67]
In the UK the Royal Society for the Prevention of Accidents supports a proposal to observe SDST's additional hour year-round, but the proposal is opposed by some groups, such as postal workers and farmers, and particularly by those living in the northern regions of the UK.[10]
In some Muslim countries, DST is temporarily abandoned during Ramadan (the month when no food should be eaten between sunrise and sunset), since DST would delay the evening meal. Ramadan took place in July and August in 2012. This applies at least to Morocco,[68][69] although Iran keeps DST during Ramadan.[70] Most Muslim countries do not use DST, partially for this reason.
The 2011 declaration by Russia that it would stay on DST all year long was subsequently followed by a similar declaration from Belarus.[71] Russia's plan generated widespread complaints due to the darkness of winter mornings, and it was abandoned in 2014.[72] The country changed its clocks to standard time on October 26, 2014, and intends to stay there permanently.[73]
Proponents of DST generally argue that it saves energy, promotes outdoor leisure activity in the evening (in summer), and is therefore good for physical and psychological health, reduces traffic accidents, reduces crime or is good for business. Groups that tend to support DST are urban workers, retail businesses, outdoor sports enthusiasts and businesses, tourism operators, and others who benefit from having more hours of light after the end of a typical workday in the warmer months.
Opponents argue that actual energy savings are inconclusive,[75] that DST increases health risks such as heart attack,[75] that DST can disrupt morning activities, and that the act of changing clocks twice a year is economically and socially disruptive and cancels out any benefit. Farmers have tended to oppose DST.[76][77]
Having a common agreement about the day's layout or schedule confers so many advantages that a standard schedule over whole countries or large areas has generally been chosen over ad hoc efforts in which some people get up earlier and others do not.[78] The advantages of coordination are so great that many people ignore whether DST is in effect by altering their nominal work schedules to coordinate with television broadcasts or daylight.[79] DST is commonly not observed during most of winter, because the days are shorter then; workers may have no sunlit leisure time, and students may need to leave for school in the dark.[13] Since DST is applied to many varying communities, its effects may be very different depending on their culture, light levels, geography, and climate. Because of this variation, it is hard to make generalized conclusions about the absolute effects of the practice. The costs and benefits may differ from place to place. Some areas may adopt DST simply as a matter of coordination with others rather than for any direct benefits.
A 2017 meta-analysis of 44 studies found that DST leads to electricity savings of only 0.34% during the days when DST applies.[80][81] The meta-analysis furthermore found that "electricity savings are larger for countries farther away from the equator, while subtropical regions consume more electricity because of DST."[80][81] This means that DST may conserve electricity in some countries, such as Canada and the United Kingdom, but be wasteful in other places, such as Mexico, the southern United States, and northern Africa. The savings in electricity may also be offset by extra use of other types of energy, such as heating fuel.
In several countries, including the United States, and in Europe, the period of Daylight Saving Time before the longest day is shorter than the period after it. This unequal split is an energy-saving measure.[citation needed] For example, in the U.S. the period of Daylight Saving Time is defined by the Energy Policy Act of 2005, which extended the period by moving the start date from the first Sunday of April to the second Sunday of March and the end date from the last Sunday in October to the first Sunday in November.
DST's potential to save energy comes primarily from its effects on residential lighting, which consumes about 3.5% of electricity in the United States and Canada.[82] Delaying the nominal time of sunset and sunrise reduces the use of artificial light in the evening and increases it in the morning. As Franklin's 1784 satire pointed out, lighting costs are reduced if the evening reduction outweighs the morning increase, as in high-latitude summer when most people wake up well after sunrise. An early goal of DST was to reduce evening usage of incandescent lighting, once a primary use of electricity.[83] Although energy conservation remains an important goal,[84] energy usage patterns have greatly changed since then. Electricity use is greatly affected by geography, climate, and economics, so the results of a study conducted in one place may not be relevant to another country or climate.[82]
Several studies have suggested that DST increases motor fuel consumption.[82] The 2008 DOE report found no significant increase in motor gasoline consumption due to the 2007 United States extension of DST.[94]
The undisputed winners of DST are the retailers, sporting goods makers, and other businesses that benefit from extra afternoon sunlight.[85] Having more hours of sunlight between the end of the typical workday and bedtime induces customers to shop and to participate in outdoor afternoon sports.[95] People are more likely to stop by a store on their way home from work if the sun is still up.[85] In 1984, Fortune magazine estimated that a seven-week extension of DST would yield an additional $30 million for 7-Eleven stores, and the National Golf Foundation estimated the extension would increase golf industry revenues by $200 million to $300 million.[96] A 1999 study estimated that DST increases the revenue of the European Union's leisure sector by about 3%.[82]
Conversely, DST can harm some farmers,[75][97] young children, who have difficulty getting enough sleep at night when the evenings are bright,[75] and others whose hours are set by the sun.[98] One reason why farmers oppose DST is that grain is best harvested after dew evaporates, so when field hands arrive and leave earlier in summer, their labor is less valuable.[9] Dairy farmers are another group who complain of the change. Their cows are sensitive to the timing of milking, so delivering milk earlier disrupts their systems.[77][99] Today some farmers' groups are in favor of DST.[100]
DST also hurts prime-time television broadcast ratings,[101][75] and drive-ins and other theaters.[102]
Changing clocks and DST rules has a direct economic cost, entailing extra work to support remote meetings, computer applications and the like. For example, a 2007 North American rule change cost an estimated $500 million to $1 billion,[103] and Utah State University economist William F. Shughart II has estimated the lost opportunity cost at around US$1.7 billion.[75] Although it has been argued that clock shifts correlate with decreased economic efficiency, and that in 2000 the daylight-saving effect implied an estimated one-day loss of $31 billion on U.S. stock exchanges,[104] the estimated numbers depend on the methodology.[105] The results have been disputed,[106] and the original authors have refuted the points raised by disputers.[107]
In 1975 the U.S. DOT conservatively identified a 0.7% reduction in traffic fatalities during DST, and estimated the real reduction at 1.5% to 2%,[108] but the 1976 NBS review of the DOT study found no differences in traffic fatalities.[13] In 1995 the Insurance Institute for Highway Safety estimated a reduction of 1.2%, including a 5% reduction in crashes fatal to pedestrians.[109] Others have found similar reductions.[110] Single/Double Summer Time (SDST), a variant where clocks are one hour ahead of the sun in winter and two in summer, has been projected to reduce traffic fatalities by 3% to 4% in the UK, compared to ordinary DST.[111] However, accidents do increase by as much as 11% during the two weeks that follow the end of British Summer Time.[112] It is not clear whether sleep disruption contributes to fatal accidents immediately after the spring clock shifts.[113] A correlation between clock shifts and traffic accidents has been observed in North America and the UK but not in Finland or Sweden. If this effect exists, it is far smaller than the overall reduction in traffic fatalities.[114] A 2009 U.S. study found that on Mondays after the switch to DST, workers sleep an average of 40 minutes less, and are injured at work more often and more severely.[115]
DST likely reduces some kinds of crime, such as robbery and sexual assault, as fewer potential victims are outdoors after dusk.[116][86] Artificial outdoor lighting has a marginal and sometimes even contradictory influence on crime and fear of crime.[117]
In several countries, fire safety officials encourage citizens to use the two annual clock shifts as reminders to replace batteries in smoke and carbon monoxide detectors, particularly in autumn, just before the heating and candle season causes an increase in home fires. Similar twice-yearly tasks include reviewing and practicing fire escape and family disaster plans, inspecting vehicle lights, checking storage areas for hazardous materials, reprogramming thermostats, and seasonal vaccinations.[118] Locations without DST can instead use the first days of spring and autumn as reminders.[119]
A 2017 study in the American Economic Journal: Applied Economics estimated that "the transition into DST caused over 30 deaths at a social cost of $275 million annually," primarily by increasing sleep deprivation.[120]
DST has mixed effects on health. In societies with fixed work schedules it provides more afternoon sunlight for outdoor exercise.[122] It alters sunlight exposure; whether this is beneficial depends on one's location and daily schedule, as sunlight triggers vitamin D synthesis in the skin, but overexposure can lead to skin cancer.[123] DST may help in depression by causing individuals to rise earlier,[124] but some argue the reverse.[125] The Retinitis Pigmentosa Foundation Fighting Blindness, chaired by blind sports magnate Gordon Gund, successfully lobbied in 1985 and 2005 for U.S. DST extensions.[60][62] DST shifts are associated with higher rates of ischemic stroke in the first two days after the shift, though not in the week thereafter.[126]
Clock shifts were found to increase the risk of heart attack by 10 percent,[75] and to disrupt sleep and reduce its efficiency.[7] Effects on seasonal adaptation of the circadian rhythm can be severe and last for weeks.[127] A 2008 study found that although male suicide rates rise in the weeks after the spring transition, the relationship weakened greatly after adjusting for season.[128] A 2008 Swedish study found that heart attacks were significantly more common the first three weekdays after the spring transition, and significantly less common the first weekday after the autumn transition.[129] A 2013 review found little evidence that people slept more on the night after the fall DST shift, even though it is often described as allowing people to sleep for an hour longer than normal. The same review stated that the lost hour of sleep resulting from the spring shift appears to result in sleep loss for at least a week afterward.[130] In 2015, two psychologists recommended that DST be abolished, citing its disruptive effects on sleep as one reason for this recommendation.[131]
The government of Kazakhstan cited health complications due to clock shifts as a reason for abolishing DST in 2005.[132] In March 2011, Dmitri Medvedev, president of Russia, claimed that "stress of changing clocks" was the motivation for Russia to stay in DST all year long. Officials at the time talked about an annual increase in suicides.[133]
An unexpected adverse effect of daylight saving time may be that part of the morning rush hour occurs before dawn, when traffic emissions cause higher air pollution than during daylight hours.[134]
In 2017, researchers at the University of Washington and the University of Virginia reported that judges who experienced sleep deprivation as a result of DST tended to issue longer sentences.[135]
DST's clock shifts have the obvious disadvantage of complexity. People must remember to change their clocks; this can be time-consuming, particularly for mechanical clocks that cannot be moved backward safely.[136] People who work across time zone boundaries need to keep track of multiple DST rules, as not all locations observe DST or observe it the same way. The length of the calendar day becomes variable; it is no longer always 24 hours. Disruption to meetings, travel, broadcasts, billing systems, and records management is common, and can be expensive.[137] During an autumn transition from 02:00 to 01:00, a clock reads times from 01:00:00 through 01:59:59 twice, possibly leading to confusion.[138]
Damage to a German steel facility occurred during a DST transition in 1993, when a computer timing system linked to a radio time synchronization signal allowed molten steel to cool for one hour less than the required duration, resulting in spattering of molten steel when it was poured.[6] Medical devices may generate adverse events that could harm patients, without being obvious to clinicians responsible for care.[139] These problems are compounded when the DST rules themselves change; software developers must test and perhaps modify many programs, and users must install updates and restart applications. Consumers must update devices such as programmable thermostats with the correct DST rules or manually adjust the devices' clocks.[8] A common strategy to resolve these problems in computer systems is to express time using the Coordinated Universal Time (UTC) rather than the local time zone. For example, Unix-based computer systems use the UTC-based Unix time internally.
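The UTC-internal strategy described above can be sketched briefly: a single Unix time value (seconds since 1970-01-01 00:00:00 UTC) identifies an instant unambiguously, and localization happens only at display time. The two timestamps below are one real hour apart, chosen to span the 2022 U.S. fall-back transition:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Store instants as UTC-based Unix time; convert to a zone only for display.
# These two values are exactly one hour apart in real time.
for ts in (1667710800, 1667714400):
    local = datetime.fromtimestamp(ts, tz=ZoneInfo("America/New_York"))
    print(ts, local.strftime("%H:%M %Z"))
# 1667710800 01:00 EDT
# 1667714400 01:00 EST
```

Both instants display as 01:00 on the local wall clock, yet the stored values remain distinct, which is exactly why systems that keep internal time in UTC avoid the repeated-hour ambiguity.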
Some clock-shift problems could be avoided by adjusting clocks continuously[140] or at least more gradually[141] (for example, Willett at first suggested weekly 20-minute transitions), but this would add complexity and has never been implemented.
DST inherits and can magnify the disadvantages of standard time. For example, when reading a sundial, one must compensate for it along with time zone and natural discrepancies.[142] Also, sun-exposure guidelines such as avoiding the sun within two hours of noon become less accurate when DST is in effect.[143]
As explained by Richard Meade in the English Journal of the (American) National Council of Teachers of English, the form daylight savings time (with an "s") was already in 1978 much more common than the older form daylight saving time in American English ("the change has been virtually accomplished"). Nevertheless, even dictionaries such as Merriam-Webster's, American Heritage, and Oxford, which describe actual usage instead of prescribing outdated usage (and therefore also list the newer form), still list the older form first. This is because the older form is still very common in print and preferred by many editors. ("Although daylight saving time is considered correct, daylight savings time (with an "s") is commonly used.")[144] The first two words are sometimes hyphenated (daylight-saving(s) time). Merriam-Webster's also lists the forms daylight saving (without "time"), daylight savings (without "time"), and daylight time.[2]
In Britain, Willett's 1907 proposal[32] used the term daylight saving, but by 1911 the term summer time replaced daylight saving time in draft legislation.[74] The same or similar expressions are used in many other languages: Sommerzeit in German, zomertijd in Dutch, kesäaika in Finnish, horario de verano or hora de verano in Spanish, and heure d'été in French.[51]
The name of local time typically changes when DST is observed. American English replaces standard with daylight: for example, Pacific Standard Time (PST) becomes Pacific Daylight Time (PDT). In the United Kingdom, the standard term for UK time when advanced by one hour is British Summer Time (BST), and British English typically inserts summer into other time zone names, e.g. Central European Time (CET) becomes Central European Summer Time (CEST).
The North American English mnemonic "spring forward, fall back" (also "spring ahead ...", "spring up ...", and "... fall behind") helps people remember which direction to shift clocks.[3]
Changes to DST rules cause problems in existing computer installations. For example, the 2007 change to DST rules in North America required that many computer systems be upgraded, with the greatest impact on e-mail and calendar programs. The upgrades required a significant effort by corporate information technologists.[145]
Some applications standardize on UTC to avoid problems with clock shifts and time zone differences.[146] Likewise, most modern operating systems internally handle and store all times as UTC and only convert to local time for display.[147][148]
However, even if UTC is used internally, systems still require external leap second updates and time zone information to correctly calculate local time as needed. Many systems in use today base their date/time calculations on data derived from the tz database, also known as zoneinfo.
The tz database maps a name to the named location's historical and predicted clock shifts. This database is used by many computer software systems, including most Unix-like operating systems, Java, and the Oracle RDBMS;[149] HP's "tztab" database is similar but incompatible.[150] When temporal authorities change DST rules, zoneinfo updates are installed as part of ordinary system maintenance. In Unix-like systems the TZ environment variable specifies the location name, as in TZ=':America/New_York'. In many of those systems there is also a system-wide setting that is applied if the TZ environment variable is not set; this setting is controlled by the contents of the /etc/localtime file, which is usually a symbolic link or hard link to one of the zoneinfo files. Internal time is stored in timezone-independent epoch time; the TZ setting is used by each of potentially many simultaneous users and processes to independently localize time display.
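The per-process TZ mechanism can be sketched in Python on a Unix-like system (this relies on `time.tzset`, which is unavailable on Windows): one timezone-independent epoch value is localized differently depending on the process's TZ setting.

```python
import os
import time

# Unix-only sketch: the same epoch instant, displayed under two TZ settings.
epoch = 1667714400  # a fixed instant, stored independently of any zone

for zone in ("America/New_York", "Europe/London"):
    os.environ["TZ"] = zone
    time.tzset()  # re-read TZ and reload the matching zoneinfo rules
    print(zone, time.strftime("%Y-%m-%d %H:%M %Z", time.localtime(epoch)))
# America/New_York 2022-11-06 01:00 EST
# Europe/London 2022-11-06 06:00 GMT
```

The stored value never changes; only the localization applied at display time does, which is what allows many users and processes on one machine to see the same clock instant in different zones.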
Older or stripped-down systems may support only the TZ values required by POSIX, which specify at most one start and end rule explicitly in the value. For example, TZ='EST5EDT,M3.2.0/02:00,M11.1.0/02:00' specifies time for the eastern United States starting in 2007. Such a TZ value must be changed whenever DST rules change, and the new value applies to all years, mishandling some older timestamps.[151]
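The mishandling of older timestamps by a single-rule POSIX TZ value can be demonstrated on a Unix-like system (again via `time.tzset`, so not on Windows): the quoted value applies the post-2007 U.S. rules to every year, including years when different rules were actually in force.

```python
import os
import time

# Unix-only sketch: a single-rule POSIX TZ string applied to all years.
os.environ["TZ"] = "EST5EDT,M3.2.0/02:00,M11.1.0/02:00"
time.tzset()

# Correct for recent dates (here, just before the November 2022 fall-back):
print(time.strftime("%Z", time.localtime(1667710800)))  # EDT

# Wrong for older dates: this is the evening of March 31, 1980, labeled EDT
# by the rule, although U.S. DST that year did not begin until late April.
print(time.strftime("%Z", time.localtime(323395200)))   # EDT
```

A zoneinfo-based setting such as TZ=':America/New_York' would instead consult the full historical rule table and label the 1980 timestamp EST.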
As with zoneinfo, a user of Microsoft Windows configures DST by specifying the name of a location, and the operating system then consults a table of rule sets that must be updated when DST rules change. Procedures for specifying the name and updating the table vary with release. Updates are not issued for older versions of Microsoft Windows.[152] Windows Vista supports at most two start and end rules per time zone setting. In a Canadian location observing DST, a single Vista setting supports both 1987–2006 and post-2006 time stamps, but mishandles some older time stamps. Older Microsoft Windows systems usually store only a single start and end rule for each zone, so that the same Canadian setting reliably supports only post-2006 time stamps.[153]
These limitations have caused problems. For example, before 2005, DST in Israel varied each year and was skipped some years. Windows 95 used rules correct for 1995 only, causing problems in later years. In Windows 98, Microsoft marked Israel as not having DST, forcing Israeli users to shift their computer clocks manually twice a year. The 2005 Israeli Daylight Saving Law established predictable rules using the Jewish calendar but Windows zone files could not represent the rules' dates in a year-independent way. Partial workarounds, which mishandled older time stamps, included manually switching zone files every year[154] and a Microsoft tool that switches zones automatically.[155] In 2013, Israel standardized its daylight saving time according to the Gregorian calendar.[156]
Microsoft Windows keeps the system real-time clock in local time. This causes several problems, including compatibility when multi booting with operating systems that set the clock to UTC, and double-adjusting the clock when multi booting different Windows versions, such as with a rescue boot disk. This approach is a problem even in Windows-only systems: there is no support for per-user timezone settings, only a single system-wide setting. In 2008 Microsoft hinted that future versions of Windows will partially support a Windows registry entry RealTimeIsUniversal that had been introduced many years earlier, when Windows NT supported RISC machines with UTC clocks, but had not been maintained.[157] Since then at least two fixes related to this feature have been published by Microsoft.[158][159]
The NTFS file system used by recent versions of Windows stores file time stamps in UTC but displays them corrected to local (or seasonal) time. However, the FAT filesystem commonly used on removable devices stores only the local time. Consequently, when a file is copied from the hard disk onto separate media, its time will be set to the current local time. If the time adjustment is changed, the timestamps of the original file and the copy will be different. The same effect can be observed when compressing and uncompressing files with some file archivers. It is the NTFS file whose displayed time changes when the seasonal adjustment changes. This effect should be kept in mind when trying to determine if a file is a duplicate of another, although there are other methods of comparing files for equality (such as using a checksum algorithm). A ready clue is if the time stamps differ by precisely one hour.
A move to "permanent daylight saving time" (staying on summer hours all year with no time shifts) is sometimes advocated and is currently implemented in some jurisdictions such as Belarus,[77] some parts of Russia (e.g. Novosibirsk), Turkey, Namibia, and São Tomé and Príncipe. It can result from following the time zone of a neighbouring region, from political will, or from other causes. Advocates cite the same advantages as normal DST without the problems associated with the twice-yearly time shifts. However, many remain unconvinced of the benefits, citing the same problems and the relatively late sunrises, particularly in winter, that year-round DST entails.[14]
Russia switched to permanent DST from 2011 to 2014, but the move proved unpopular because of the late sunrises in winter, so the country switched permanently back to "standard" or "winter" time in 2014 for the whole Russian Federation.[160] The United Kingdom and Ireland also experimented with year-round summer time between 1968 and 1971, and put clocks forward by an extra hour during World War II.[161]
In the IANA time zone database, permanent daylight saving time is represented as standard time that has been advanced by an hour.
On March 23, 2018 Florida Governor Rick Scott signed legislation asking Congress to approve year-round daylight saving time in Florida.[162][163]
How many sports are there in the 2018 commonwealth games?
19 Commonwealth sports🚨The 2018 Commonwealth Games, officially known as the XXI Commonwealth Games and commonly known as Gold Coast 2018, were an international multi-sport event for members of the Commonwealth held on the Gold Coast, Queensland, Australia, between 4 and 15 April 2018. It was the fifth time Australia had hosted the Commonwealth Games and the first time a major multi-sport event achieved gender equality by having an equal number of events for male and female athletes.[1]
More than 4,400 athletes, including 300 para-athletes, from 71 Commonwealth Games Associations took part in the event.[2] The Gambia, which withdrew its membership from the Commonwealth of Nations and the Commonwealth Games Federation in 2013, was readmitted on 31 March 2018 and participated in the event.[3] With 275 sets of medals, the games featured 19 Commonwealth sports, including beach volleyball, para triathlon and women's rugby sevens. These sporting events took place at 14 venues in the host city, two venues in Brisbane and one venue each in Cairns and Townsville.[4]
These were the first Commonwealth Games to take place under the Commonwealth Games Federation (CGF) presidency of Louise Martin, CBE.[5] The host city Gold Coast was announced at the CGF General Assembly in Basseterre, Saint Kitts, on 11 November 2011.[6] Gold Coast became the seventh Oceanian city and the first regional city to host the Commonwealth Games. These were the eighth games to be held in Oceania and the Southern Hemisphere.
The host nation Australia topped the medal table for the fourth time in the past five Commonwealth Games, winning the most golds (80) and most medals overall (198). England and India finished second and third respectively.[7] Vanuatu, Cook Islands, Solomon Islands, British Virgin Islands and Dominica each won their first Commonwealth Games medals.[8]
On 22 August 2008, the Premier of Queensland, Anna Bligh, officially launched Gold Coast City's bid to host the Commonwealth Games in 2018. On 7 April 2009, the ABC reported a land exchange deal between Gold Coast City and State of Queensland for Carrara Stadium. According to Mayor Ron Clarke, the land would aid a potential bid for the 2018 Commonwealth Games. The land exchanged would be used as the site of an aquatics centre. In the same article, Mayor Clarke raised the question of the Australian Federal Government's commitment to a 2018 Commonwealth Games bid in light of the Government's support for Australia's 2018 FIFA World Cup Finals bid.[9] On 16 April 2009, Queensland Premier Anna Bligh told reporters that a successful Commonwealth Games bid by Gold Coast City could help the tourist strip win a role in hosting the World Cup.[10]
"Some of the infrastructure that would be built for the Commonwealth Games will be useful for Gold Coast City to get a World Cup game out of the soccer World Cup if we're successful as a nation," she said. However the decision on the venues for the 2018 and 2022 FIFA World Cups were made eleven months prior to the bid decision for the 2018 Commonwealth Games, so the potential World Cup venues had already been chosen. On 3 June 2009, Gold Coast City was confirmed as Australia's exclusive bidder vying for the 2018 Commonwealth Games.[11] "Should a bid proceed, Gold Coast City will have the exclusive Australian rights to bid as host city for 2018," Bligh stated.
"Recently I met with the president and CEO of the Australian Commonwealth Games Association and we agreed to commission a full and comprehensive feasibility study into the potential for the 2018 Commonwealth Games," she said. "Under the stewardship of Queensland Events new chair, Geoff Dixon, that study is now well advanced." On 15 March 2010, it was announced that the Queensland Government will provide initial funding of A$11 million for the 2018 Commonwealth Games bid. The Premier of Queensland has indicated the Government's support for the bid to the Australian Commonwealth Games Association.[12] On 31 March 2010, the Australian Commonwealth Games Association officially launched the bid to host the 2018 Commonwealth Games.[13] In October 2011, Gold Coast City Mayor Ron Clarke stated that the games would provide a strong legacy for the city after the games have ended.[14]
On 31 March 2010, a surprise bid was made for the 2018 Commonwealth Games by the Sri Lankan city of Hambantota. Hambantota was devastated by the 2004 Indian Ocean Tsunami, and is undergoing a major face lift. The first phase of the Port of Hambantota is nearing completion and it is funded by the government of China. The Mattala International Airport, which is the second international Airport of Sri Lanka is built close to Hambantota. A new Hambantota International Cricket Stadium had also been built, which had hosted matches in the 2011 Cricket World Cup.
On 10 November 2011, the Hambantota bidders claimed they had already secured enough votes to win the hosting rights.[15] However, on 11 November it was officially announced Gold Coast City had won the rights to host the games.[16][17]
The entire project was overseen by the Gold Coast 2018 Commonwealth Games Corporation (GOLDOC). In February 2012, Mark Peters was appointed Chief Executive Officer of the Gold Coast City 2018 Commonwealth Games Corporation.[18] The Queensland Government Minister tasked with overseeing the Games was Kate Jones.[19]
One of the key technical aspects of Gold Coast City's successful bid was the fact that the city had 80 percent of the planned venues in place before the bidding deadline. The vast majority of venues were located within 20 minutes' driving time of the Athletes Village in Parkwood.
Carrara Stadium, located in the suburb of Carrara, was the main venue for Athletics, the opening ceremony and the closing ceremony. The seating capacity of the stadium was temporarily increased to 40,000 for the games by the installation of a large temporary North Stand.[20]
The Gold Coast City Convention and Exhibition Centre, located in the suburb of Broadbeach, hosted Basketball, Netball (preliminaries) and Weightlifting events, also serving as the Main Media Centre and International Broadcast Centre, hosting over 3,000 members of the world's press.[21] The Broadbeach Bowls Club hosted the Bowls competition.[22]
The Hinze Dam, located in the suburb of Advancetown, was the location for the Mountain Bike competition. A new course was constructed to meet international competition requirements, along with temporary seating for 2,000 spectators.
The newly built Coomera Sport and Leisure Centre hosted Gymnastics and Netball (finals).[23] The existing sound stages of the Village Roadshow Studios complex in the suburb of Oxenford hosted the sports of Boxing, Table Tennis and Squash.[24] During Games mode the venue was enhanced to provide for the International Sporting Federation technical venue requirements and provide spectator seating of 3,000 (boxing) and 3,200 (table tennis). The Gold Coast Hockey Centre hosted the men's and women's Hockey events during the games.[25] The Southport Broadwater Parklands hosted Triathlon and athletic events.[26] The Optus aquatic centre hosted the swimming and diving events.[27]
Robina Stadium hosted the Rugby 7s competition and was upgraded to meet World Rugby standards.[28] The Elanora/Currumbin Valley area hosted the road racing elements of the cycling programme. Coolangatta Beachfront hosted the beach volleyball event.[29]
Brisbane, along with the Gold Coast, forms part of the South East Queensland conurbation. Track Cycling was held at the Sleeman Sports Complex in the suburb of Chandler, where a new indoor cycling velodrome (Anna Meares Velodrome) was built. The Velodrome's seat capacity was 4,000 during the games mode.[30]
The Shooting disciplines were held at the Belmont Shooting Centre. In Tropical North Queensland, the Cairns Convention Centre and Townsville Entertainment Centre hosted the preliminary rounds of both the men's and women's basketball competitions.[31][32][33]
The Queensland state government spent A$1.5 billion (US$1.2 billion) to deliver the event. Out of this, A$550 million (US$425 million) were spent on the procurement programme. Procurement of the security and security infrastructure included contracts for four prime suppliers which delivered around 4,200 security guards. A$34 million (US$26 million) were spent on the deployment of the armed forces to provide rapid-response squads, bomb detectors, offshore patrols and surveillance. A$657 million (US$509 million) were spent for the construction of the venues and the Games Village.[34]
At a charity gala held on 4 November 2017, the medals for the games were officially unveiled. Australian Indigenous artist Delvene Cockatoo-Collins designed the medals, while they were produced by the Royal Australian Mint. The design of the medals was inspired by the coastline of the Gold Coast along with Indigenous culture.[35] Furthermore, Cockatoo-Collins explained, "the medal design represents soft sand lines which shift with every tide and wave, also symbolic of athletic achievement. The continual change of tide represents the evolution in athletes who are making their mark. Records are made and special moments of elation are celebrated". Approximately 1,500 medals were created to be distributed to the medallists; each measures approximately 63 millimetres in diameter and weighs between 138 and 163 grams.[36]
The 2018 Commonwealth Games Athletes Village was located on 59 hectares at Southport, Gold Coast,[37] and provided accommodation and services to 6,600 athletes and officials in 1,252 permanent dwellings: 1,170 one- and two-bedroom apartments and 82 three-bedroom townhouses, which later served as student accommodation for the nearby Griffith University. It offered services such as laundry, refreshments, television and computer spaces, and four residential pools. The village included a gym, designed with guidance from the Australian Institute of Sport, with equipment sponsored by Technogym. Adjoining the gym was the Athlete Recovery Area, which provided plunge baths (including accessible baths), saunas, massage and consultations with sports medical personnel. The Main Dining served over 18,000 meals per day to the athletes during the games. The village also included retail shops, an Optus phone store, a salon, a bar and a games room featuring a number of arcade games, pool tables and game consoles.[38]
The Gold Coast 2018 Queen's Baton Relay was launched on Commonwealth Day, 13 March 2017, on the historic forecourt at Buckingham Palace, signalling the official countdown to the start of the Games. Accompanied by the Duke of Edinburgh and Prince Edward, Earl of Wessex, Her Majesty Queen Elizabeth II heralded the start of the relay by placing her message to the Commonwealth and its athletes into the distinctive loop-design Queen's Baton, which then set off on its journey around the globe. It travelled for 388 days, spending time in every nation and territory of the Commonwealth. The Gold Coast 2018 Queen's Baton Relay was the longest in Commonwealth Games history. Covering 230,000 km over 388 days, the baton made its way through the six Commonwealth regions of Africa, the Americas, the Caribbean, Europe, Asia and Oceania.
The baton landed on Australian soil in December 2017 and then spent 100 days travelling through Australia, finishing its journey at the Opening Ceremony on 4 April 2018, where the message was removed from the Baton and read aloud by Charles, Prince of Wales.[39]
During the games period, free public transportation within the Queensland region was provided to ticket and accreditation holders. The free transportation services were available on local buses, trains and Gold Coast light rail (G:link) services in Gold Coast, and on TransLink and Qconnect bus services in Cairns and Townsville.[40] The Gold Coast light rail system connected a number of the key games venues, including the Optus Aquatic Centre, Broadwater Parklands and the Gold Coast Convention & Exhibition Centre, with the major accommodation centres of Surfers Paradise and Broadbeach and the Athletes Village at Parklands. An extension to the system was announced in October 2015, connecting the then-current terminus at Gold Coast University Hospital to the railway line to Brisbane at Helensvale. The extension opened in December 2017, in time for the games.[41]
The Australian Sports Anti-Doping Authority conducted an anti-doping drive in the months prior to the games, covering around 2500 tests of Australian athletes, as well as 500 tests against international athletes. Three Australians failed drug tests in this process, along with around 20 international athletes, subject to appeal. The Commonwealth Games Federation conducted in-competition testing and, matching protocol at the Olympic Games, launched a sample storage initiative to allow for future testing of samples up to ten years later, should detection technology improve.[42]
There were 71 nations competing at 2018 Commonwealth Games.[43] Maldives were scheduled to participate, but in October 2016 they withdrew from the Commonwealth.[44] The Gambia returned to the Commonwealth Games after being readmitted as a Commonwealth Games Federation member on 31 March 2018.[3]
The regulations stated that from the 26 approved sports administered by Commonwealth Governing Bodies, a minimum of ten core sports and a maximum of seventeen sports must be included in any Commonwealth Games schedule. The approved sports included the 10 core sports: athletics, badminton, boxing, hockey, lawn bowls, netball (for women), rugby sevens, squash, swimming and weightlifting. Integrated disabled competitions were also scheduled for the Games in sports including swimming, athletics, cycling, table tennis, powerlifting and lawn bowls. Along with these events, EAD events in triathlon were held for the first time, with the medals added to each nation's final tally. A record 38 para events were contested at these games.[45] On 8 March 2016, beach volleyball was announced as the 18th sport.[46]
The program was broadly similar to that of the 2014 Commonwealth Games, with the major changes being the dropping of judo, the reintroduction of basketball, the debut of women's rugby sevens and beach volleyball.[47]
On 7 October 2016, it was announced that seven new events for women had been added to the sport program, giving an equal number of events for men and women. This marked the first time in history that a major multi-sport event had equality in terms of events. In total, 275 events in 18 sports were contested.[48][49]
Numbers in parentheses indicate the number of medal events contested in each sport.
The opening ceremony was held at Carrara Stadium in the Gold Coast, Australia, between 20:00 and 22:40 AEST, on 4 April 2018. Tickets for the ceremony started at 100 Australian dollars with half price tickets available for children.[50] The Head of the Commonwealth, Queen Elizabeth II, was represented by her son, Charles, Prince of Wales.[51]
Following tradition, the host of the previous games, Scotland entered first, followed by the rest of the European countries competing.[52] Following this, all countries paraded in alphabetical order from their respective regions. After the European countries entered, countries from Africa, the Americas, Asia, the Caribbean, and lastly Oceania marched in. The host nation of Australia entered last. Each nation was preceded by a placard bearer carrying a sign with the country's name.
The closing ceremony was held at Carrara Stadium on Sunday 15 April and was produced by Jack Morton Worldwide at a cost of AU$30 million. Australian pop stars Guy Sebastian, Samantha Jade, Dami Im and The Veronicas were among the performers, along with children's entertainers The Wiggles.
Prince Edward, Earl of Wessex, declared the Games closed and passed the Commonwealth Games flag to Birmingham, England which will host the 2022 Games.[53]
Only the top ten successful nations are displayed here.
The ranking in this table is consistent with International Olympic Committee convention in its published medal tables. By default, the table is ordered by the number of gold medals the athletes from a nation have won (in this context, a "nation" is an entity represented by a Commonwealth Games Association). The number of silver medals is taken into consideration next and then the number of bronze medals. If nations are still tied, equal ranking is given and they are listed alphabetically by their three-letter country code. Australia topped the medal table with 80 gold, followed by England with 45 and India with 26.
* Host nation (Australia)
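The ranking convention described above is a simple lexicographic sort. A sketch in Python, using the top three nations (the gold counts match the table; the silver and bronze figures are included for illustration):

```python
# Each entry: (country code, gold, silver, bronze).
medals = [
    ("ENG", 45, 45, 46),
    ("IND", 26, 20, 20),
    ("AUS", 80, 59, 59),
]

# Sort by golds, then silvers, then bronzes (all descending);
# remaining ties are broken alphabetically by country code.
ranked = sorted(medals, key=lambda m: (-m[1], -m[2], -m[3], m[0]))

for rank, (code, g, s, b) in enumerate(ranked, start=1):
    print(rank, code, g, s, b)
```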
In Australia, the games were broadcast live on three Seven Network channels: 7HD, 7TWO and 7Mate.[54] In the United Kingdom, the BBC provided more than 200 hours of Commonwealth Games coverage across BBC One, BBC Two, BBC Red Button, the BBC Sport website, BBC iPlayer and BBC radio.[55] TVNZ broadcast the games for viewers in New Zealand.[56] Astro Arena provided coverage for Malaysian viewers.[57] ESPN provided coverage for viewers in the USA.[58] Sony Pictures Networks India broadcast the games for viewers in India on three channels: Sony Six and Sony Ten 2 in English, and Sony Ten 3 in Hindi.[59]
The official motto for the 2018 Commonwealth Games was "Share the Dream". It was chosen to highlight the dreams and experience at the games that were shared by participants of the games, ranging from athletes to volunteers and the host country Australia to the world including the Commonwealth nations.[60]
The emblem was launched on 4 April 2013, which marked exactly five years until the opening ceremony.[61] It was unveiled at the Southport Broadwater Parklands. It was designed by the New South Wales-based brand consultancy WiteKite.[62] The emblem of the 2018 Commonwealth Games was a silhouette of the skyline and landscape of Gold Coast, the host city of the games.[63] Nigel Chamier OAM, who served as Chairman of the Gold Coast 2018 Commonwealth Games Corporation until the end of the games, said that it was the result of months of market research.[64]
Borobi was named as the mascot of the 2018 Commonwealth Games in 2016. Borobi is a blue koala, with indigenous markings on its body. The term "borobi" means koala in the Yugambeh language, spoken by the indigenous Yugambeh people of the Gold Coast and surrounding areas.[65]
At least 13 athletes from four countries - Cameroon, Uganda, Rwanda, and Sierra Leone - absconded during or immediately after the Games. Some missed their competitions.[66] Athletes regularly abscond during major sporting events, and many subsequently claim asylum in their host countries. Most hold nationalities that are deemed high-risk by immigration authorities and find it impossible to get visas outside of exceptional events, such as major games.[67]
A month after the games ended, officials estimated that fifty athletes had remained in Australia illegally, with another 200 staying in the country on visas.[68][69]
The organising committee decided to bring in the athletes before the start of the closing ceremony. This caused an uproar on social media as, contrary to public expectations, none of the athletes were shown entering the stadium during the ceremony. Broadcast rights holders Channel 7 complained on air about the decision and concluded that, "it hasn't really lived up to expectations". Many spectators and athletes left during the ceremony, resulting in a half-empty stadium for much of the event.[70] Following this, the ABC claimed that Channel 7 was briefed on the closing ceremony schedule,[71] a claim which Channel 7 later refuted.[72]
How long is us air force boot camp?
eight-week program🚨United States Air Force Basic Military Training (also known as BMT or boot camp) is an eight-week program of physical and mental training required in order for an individual to become an enlisted Airman in the United States Air Force. It is located at Lackland Air Force Base in San Antonio, Texas.
Lackland Air Force Base conducts the Air Force's only enlisted recruit training program, ensuring orderly transition from civilian to military life. Recruits are trained in the fundamental skills necessary to be successful in the operational Air Force. This includes basic war skills, military discipline, physical fitness, drill and ceremonies, Air Force core values and a comprehensive range of subjects relating to Air Force life.
More than 7 million young men and women have entered Air Force basic military training since 4 February 1946, when the training mission was moved to Lackland from Harlingen Air Force Base in Harlingen, Texas. Throughout its history, Lackland's BMT program has changed in many ways to meet the operational needs of the Air Force and recent updates in the curriculum are some of the most significant in its more than 60-year history, with every aspect of the program overhauled.
On 7 November 2005, BMT changed its curriculum to focus on a new kind of Airman: one who is a "warrior first". The goal is to instill a warrior mindset in trainees from day one and better prepare Airmen for the realities of the operational Air Force.
The changes resulted from the need to meet current and future operational Air Force requirements. In September 2004, the 20th Basic Military Training Review Committee met at Lackland and recommended significant changes in the focus, curriculum and schedule.[1]
In 2011, it was revealed that a number of military training instructors had engaged in inappropriate and illegal sexual relationships and advances against dozens of female trainees. The scandal led seventy-seven Congressional representatives to call for a hearing to investigate the incident.[2] Because of this incident, the commanders instituted what is called the wingman concept: no trainee is allowed to go anywhere without another trainee. This allows trainees to lean on and look after one another, helping maintain a safer environment both during Basic Training and when the trainees reach the operational Air Force.
Military Training Instructors, or MTIs, are the instructors who are responsible for most of the training that takes place in BMT. They accompany trainees throughout the training process, instructing and correcting them in everything, from correct procedures for firing a weapon to the correct way to speak to a superior. They are known for their campaign covers typically called "Smokey the Bear" or "Smokey" hats. Unlike the Marine Corps Drill Instructors and Army Drill Sergeants, Training Instructor hats are a dark blue instead of brown.
Prior to arriving at basic training, all prospective trainees undergo a physical examination by a doctor at their local Military Entrance Processing Station, or MEPS. Trainees receive their initial weigh-in when they arrive at Lackland. If the trainee is under or over the height and weight standards, the trainee is placed on double rations if underweight (known colloquially in BMT as a "skinny"), or in a "diet" status if overweight.
All trainees receive three meals a day, also known as "chow time". These are either served at the dining facility (DFAC, also known as the "chow hall"), or as a Meal, Ready-to-Eat (MRE) during field training. Meal time may last 30 minutes or less, depending on the order in which the flight arrives at the chow hall. Trainees are mandated a minimum of 15 minutes to consume each meal, although much of that time may be consumed by waiting in line, by trainee "water monitor" duties in the chow hall, and by removing, properly storing and re-donning any carried equipment.
Trainees that sustain injuries or illnesses that disrupt their training are transferred to the Medical Hold Flight. Once they are again medically fit, the trainee will generally return to their prior training squadron as part of a flight currently at an equivalent place in the training cycle that they left.
Basic Military Training is an eight-and-a-half-week cycle of training which begins with the receiving phase (also known as zero week) and ends with graduation.[3]
Inbound trainees are transported to Air Force Basic Military Training at Lackland Air Force Base. Upon arrival at Lackland, trainees will be assigned to a Squadron and a Flight. Trainees will then be rushed up to their dorm rooms where they will be given a bed and a wall locker (Personal Living Area or PLA) that they will take care of for the next eight and a half weeks. They will also be briefed on meal time procedures in the Dining Facility (DFAC) and other essential ground rules that will apply throughout the duration of Basic Training.
As the initial uniform issue is not until the following Thursday or Friday, trainees will be required to wear civilian clothes for at least one full day. During this period, trainees will be referred to as a "Rainbow Flight" or simply as "Rainbows" because of the flight's bright and varied clothing.[4] In order to break in their new boots, trainees will alternate wearing sneakers and boots with their newly issued uniforms until the end of Zero Week, earning them the nickname "Sneaker Weekers" or "Baby Flights". After first clothing issue, civilian belongings and baggage will be secured and stored for the remainder of BMT.
Trainees will undergo a urinalysis test the day after arrival. Any trainee who fails the drug test will immediately be separated from the military. The trainees are given an opportunity to phone a family member to inform them of safe arrival and mailing instructions, then are searched for contraband; this is called a shakedown. Next, males receive their first military haircut, often called a buzz cut, leaving them essentially bald. Females are instructed in the authorized hairstyles, which require hair to be short enough not to touch the collar or to be worn in a bun.
Zero week lasts only until the first Sunday when trainees can attend the religious service of their choice. There are a wide range of religions represented at Lackland. Sunday is the non-duty day each week in Basic Training.
The remainder of in-processing involves completion of paperwork, receiving vaccines and medical tests and the first physical fitness test.
The first test will consist of one minute of timed push-ups, one minute of sit-ups and a timed 1.5-mile run. After the first physical fitness test, all trainees will be put into a fitness group. If a trainee does not meet the minimum standards, he or she will be put into a remedial group that has extra PT sessions throughout the week.
After the first physical fitness test, the MTI can issue disciplinary action in the form of push-ups, flutter kicks, squat thrusts or any other exercise that the MTI deems necessary as a correction for simple mistakes. Trainees will be expected to adhere to the rules by this time or face correction by physical exercise, or be referred through the chain of command, beginning with the section supervisor, depending on the severity of the misconduct.
The following chart shows physical fitness achievement levels as well as the minimum requirements for graduating Air Force Basic Military Training:
This is the Pre-Deployment Phase. Here trainees sit down with a job counselor, are shown a list of jobs they qualify for and that are available, and are instructed to prioritize that list in order of preference. The job counselors then take those preferences, together with the preferences of all the other recruits in the same week of training who are in the same guaranteed aptitude area, and try to give everyone the preferences they want.[5] Those who enlisted with a guaranteed job do not go through this process.
Trainees undergo extensive training with the M-16. Trainees will learn and practice the rules for safe use and handling of the weapon. They will also receive training in how to assemble, disassemble, clean and repair the M-16.
In Week 4, trainees will attend the FEST courses teaching them the basics needed to move on to the BEAST during the fifth week of training. Activities include basic rifle-fighting techniques, crawling through a sand course, and learning how to move in columns and use proper hand signals. Trainees will also go through CBRNE (Chemical, Biological, Radiological, Nuclear, and high-yield Explosives) training and learn how to counter threats such as terrorism,[6] biological and chemical weapons and security breaches. Included in the CBRNE training is gas mask and MOPP gear training, during which trainees will be exposed to CS gas. Trainees also have their official BMT portraits taken during the fourth week.[7]
During Week Five, trainees will be sent to the BEAST on the Medina Annex. The BEAST, or Basic Expeditionary Airman Skills Training, is the final test in BMT. This represents the culmination of all the skills and knowledge an Airman should possess. These skills and tactics the trainees have learned will be put to test in realistic field training exercises and combat scenarios.[8]
It is a 5-day exercise that will test all of the CBRNE skills that the trainees were taught in the fifth week of training. The trainees will also be taught basic combatives as well as engaging in pugil stick battles utilizing the rifle fighting techniques they were taught. The trainees are required to wear body armor and helmets, as well as two canteens and an M16. The site has four zones, called Reaper, Phantom, Vigilant and Sentinel, that simulate self-contained forward operating bases. Each zone is a ring of 10 field tents for barracks, centered around a three-story observation tower and a hardened briefing facility that serves as an armory and bomb shelter. The zone is defended by five sandbag defensive firing positions and trainees control access to the zone through an entry control point.[9]
The trainees will also visit CATM, where they will fire and qualify using an M16A2 rifle. During the actual firing, trainees will fire at a man-sized target at 75 yards, 180 yards and 300 yards in the standing, sitting, kneeling and prone positions. A total of 24 rounds will be fired during the qualification; the target must be hit at least 17 times in order to become qualified with the weapon, and those who hit the target at least 22 times qualify for the Small Arms Expert Ribbon.[10]
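The qualification thresholds amount to a simple cut-off scheme. A sketch in Python (the function name is illustrative, not official terminology):

```python
def m16_qualification(hits, rounds_fired=24):
    """Classify a qualification score using the thresholds described above:
    at least 17 of 24 hits to qualify, at least 22 for Small Arms Expert."""
    if not 0 <= hits <= rounds_fired:
        raise ValueError("hits must be between 0 and rounds fired")
    if hits >= 22:
        return "expert"
    if hits >= 17:
        return "qualified"
    return "unqualified"
```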
Trainees will train on their SABC, FEST and CBRNE skills to grasp the basics of conducting UXO (unexploded ordnance) sweeps, helping a downed or injured Airman, and completing the PAR and SALUTE reports they were taught during their CBRNE course the week prior. They will be taught how to identify potential threats and whom to report them to in their chain of command. BEAST week will be almost entirely devoted to scenario training. As they take advantage of more field time to hone their newly acquired infantry skills, the trainees will also have more hands-on instruction in buddy care, culminating in an evaluation of each zone to see which has won the coveted title of BEAST excellence, announced at the culmex ceremony on Friday afternoon.[11] The BEAST site includes a 1.5-mile improvised explosive device (IED) trail littered with simulated roadside bombs. Trainees learn to spot IEDs and then use the trail in training scenarios. For example, under one scenario trainees make their way down the "lane" in tactical formation, trying to distinguish IEDs from other debris such as soda cans. At the end of the trail, trainees are broken into teams of two "wingmen" and negotiate a combat obstacle course (low-crawl, hide behind walls, roll behind barriers, strike dummies with the butt of the rifle, and high-crawl 60 yards through deep sand up a 40 percent grade). Trainees will also participate in a CLAW (Creating Leader Airmen Warriors) mission during BEAST week. The CLAW mission consists of traversing a "simulated" river using the instructions and supplies on hand, which stresses teamwork. The trainees will then move in staggered tactical formation to another checkpoint, where they work on their SABC skills before going through the combat obstacle course.
The biggest test is the village, in which chalks of trainees must either rescue a "downed" Airman and evacuate the training dummy, or neutralize the targets and "simulated" gunners in the village while sustaining as few casualties as possible.[12][13] Teamwork is stressed, as the majority of tasks are impossible without it: each group must succeed or fail as a whole, and the mission fails unless every trainee passes through together, requiring the team to aid any fellow trainees who struggle. On the last day the trainees have a culmex ceremony at which their MTI awards them their dog tags.
Week Six of BMT focuses primarily on post-deployment training. During this week, trainees receive intensive classroom instruction about the difficulties many military members face when they return from a deployment, such as financial management, family issues and alcohol abuse. Trainees also continue to practice drill, undergo dorm inspections, and learn about Air Force history and heritage.[14] The final physical fitness test is also given this week; trainees who fail it are transferred to the "Get Fit" flight, giving them extra time to improve their fitness scores. The Air Force End of Course (EOC) exam is given on the Friday of the sixth week, and at some point during the week trainees find out their AFSC if one was not already assigned when they arrived at BMT. A score of 70 or above is required to pass the EOC; Honor Grad candidacy requires a 90.
On the final Thursday of BMT, all trainees participate in the 1.5-mile run known as the "Airman's Run". The run is a victory celebration of the challenges overcome and the esprit de corps gained by Airmen during training. Family and friends may attend the day's events and see their loved ones for the first time since the start of basic training. The Airman's Run is followed by the Coin and Retreat Ceremony, at which trainees are presented with the Airman's Coin, signifying the transition from Trainee to Airman and the right to be called an Airman. After the Airmen are dismissed, they may spend the remainder of the day with loved ones.
Friday is the Graduation Parade. The flights pass in review, take their final oath of enlistment and are then dismissed, marking the end of Air Force Basic Military Training and the beginning of an Airman's career. Following the ceremony, family and friends may see their Airman's living quarters in the dormitory and tour the base and San Antonio. The weekend is spent on Base Liberty or Town Pass during daylight hours, with Airmen returning to the flight in the evening.
Since March 2015, BMT has been altered: graduation occurs during the seventh week, and the weekend after graduation the new Airmen move to the Airman's Week dorms, generally into a dorm with people going to the same Technical Training bases. During this week, Airmen participate in classes and activities designed to help them become more independent thinkers and to make them aware of some of the problems facing young Airmen in both Tech Training and beyond, including mental, academic and social stresses that often affect new Airmen. Airmen are also granted more freedoms, including base liberties after duty hours and a more relaxed dining facility; they are no longer required to march, are allowed to talk to each other, and attend mixed-gender classroom sessions.
Monday morning following the end of Airman's Week, all Airmen will proceed to the appropriate technical training school for their Air Force Specialty Code. This technical training typically lasts anywhere from one month to over two years.[15]
Activities that are prohibited during Basic Training include (but are not limited to):
When a trainee engages in a prohibited activity, their MTI may recommend the Commander impose non-judicial punishment (UCMJ Article 15). An Article 15 is a type of disciplinary action (also known as non-judicial punishment) and can entail any or all of the following:
A trainee can be discharged from the Air Force before the conclusion of Basic Military Training. Discharges that occur before the completion of 180 days (approximately 6 months) of training are considered uncharacterized, which are neither honorable nor less than honorable.
This article incorporates public domain material from the United States Air Force website http://www.af.mil.
Coordinates: 29°23′3″N 98°34′52″W / 29.38417°N 98.58111°W / 29.38417; -98.58111
When did the kraken first appear in stories?
the late 13th century
The kraken (/ˈkrɑːkən/)[1] is a legendary cephalopod-like sea monster of giant size that is said to dwell off the coasts of Norway and Greenland. Authors over the years have postulated that the legend originated from sightings of giant squids that may grow to 13–15 meters (40–50 feet) in length. The sheer size and fearsome appearance attributed to the kraken have made it a common ocean-dwelling monster in various fictional works.
The English word kraken is taken from Norwegian.[2] In Norwegian Kraken is the definite form of krake, a word designating an unhealthy animal or something twisted (cognate with the English crook and crank).[3] In modern German, Krake (plural and declined singular: Kraken) means octopus, but can also refer to the legendary kraken.[4]
In the late-13th-century version of the Old Icelandic saga Örvar-Oddr is an inserted episode of a journey bound for Helluland (Baffin Island) which takes the protagonists through the Greenland Sea, and here they spot two massive sea-monsters called Hafgufa ("sea mist") and Lyngbakr ("heather-back").[a][b] The hafgufa is believed to be a reference to the kraken:
[N]ú mun ek segja þér, at þetta eru sjáskrímsl tvau, heitir annat hafgufa, en annat lyngbakr; er hann mestr allra hvala í heiminum, en hafgufa er mest skrímsl skapat í sjánum; er þat hennar náttúra, at hon gleypir bæði menn ok skip ok hvali ok allt þat hon náir; hon er í kafi, svá at dægrum skiptir, ok þá hon skýtr upp höfði sínu ok nösum, þá er þat aldri skemmr en sjávarfall, at hon er uppi. Nú var þat leiðarsundit, er vér fórum í millum kjapta hennar, en nasir hennar ok inn neðri kjaptrinn váru klettar þeir, er yðr sýndiz í hafinu, en lyngbakr var ey sjá, er niðr sökk. En Ögmundr flóki hefir sent þessi kvikvendi í móti þér með fjölkynngi sinni til þess at bana þér ok öllum mönnum þínum; hugði hann, at svá skyldi hafa farit fleiri sem þeir, at nú druknuðu, en hann ætlaði, at hafgufan skyldi hafa gleypt oss alla. Nú siglda ek því í gin hennar, at ek vissa, at hún var nýkomin upp.[5]
Now I will tell you that there are two sea-monsters. One is called the hafgufa [sea-mist[a]], another lyngbakr [heather-back[a]]. It [the lyngbakr] is the largest whale in the world, but the hafgufa is the hugest monster in the sea. It is the nature of this creature to swallow men and ships, and even whales and everything else within reach. It stays submerged for days, then rears its head and nostrils above surface and stays that way at least until the change of tide. Now, that sound we just sailed through was the space between its jaws, and its nostrils and lower jaw were those rocks that appeared in the sea, while the lyngbakr was the island we saw sinking down. However, Ogmund Tussock has sent these creatures to you by means of his magic to cause the death of you [Odd] and all your men. He thought more men would have gone the same way as those that had already drowned [i.e., to the lyngbakr which wasn't an island, and sank], and he expected that the hafgufa would have swallowed us all. Today I sailed through its mouth because I knew that it had recently surfaced.
After returning from Greenland, the anonymous author of the Old Norwegian natural history work Konungs skuggsjá (circa 1250) described in detail the physical characteristics and feeding behavior of these beasts. The narrator proposed there must only be two in existence, stemming from the observation that the beasts have always been sighted in the same parts of the Greenland Sea, and that each seemed incapable of reproduction, as there was no increase in their numbers.
There is a fish that is still unmentioned, which it is scarcely advisable to speak about on account of its size, because it will seem to most people incredible. There are only a very few who can speak upon it clearly, because it is seldom near land nor appears where it may be seen by fishermen, and I suppose there are not many of this sort of fish in the sea. Most often in our tongue we call it hafgufa ("kraken" in e.g. Laurence M. Larson's translation[6]). Nor can I conclusively speak about its length in ells, because on the occasions he has shown himself before men, he has appeared more like land than like a fish. Neither have I heard that one had been caught or found dead; and it seems to me as though there must be no more than two in the oceans, and I deem that each is unable to reproduce itself, for I believe that they are always the same ones. Then too, neither would it do for other fish if the hafgufa were of such a number as other whales, on account of their vastness, and how much subsistence they need. It is said to be the nature of these fish that when one shall desire to eat, then it stretches up its neck with a great belching, and following this belching comes forth much food, so that all kinds of fish that are near to hand will come to the spot and gather together, both small and large, believing they shall obtain their food and good eating; but this great fish lets its mouth stand open the while, and the gap is no less wide than that of a great sound or bight, and the fish cannot avoid running together there in their great numbers. But as soon as its stomach and mouth are full, it locks together its jaws and has the fish all caught and enclosed that before greedily came there looking for food.[7]
Kraken were extensively described by Erik Pontoppidan, bishop of Bergen, in his Det første Forsøg paa Norges naturlige Historie "The First Attempt at [a] Natural History of Norway" (Copenhagen, 1752).[8][9] Pontoppidan made several claims regarding kraken, including the notion that the creature was sometimes mistaken for an island[10] and that the real danger to sailors was not the creature itself but rather the whirlpool left in its wake.[11] However, Pontoppidan also described the destructive potential of the giant beast: "it is said that if [the creature's arms] were to lay hold of the largest man-of-war, they would pull it down to the bottom".[10][11][12] According to Pontoppidan, Norwegian fishermen often took the risk of trying to fish over kraken, since the catch was so plentiful[13] (hence the saying "You must have fished on Kraken"[14]). Pontoppidan also proposed that a specimen of the monster, "perhaps a young and careless one", was washed ashore and died at Alstahaug in 1680.[12][15] By 1755, Pontoppidan's description of the kraken had been translated into English.[16]
Swedish author Jacob Wallenberg described the kraken in the 1781 work Min son på galejan ("My son on the galley"):
Kraken, also called the Crab-fish, which is not that huge, for heads and tails counted, he is no larger than our Öland is wide [i.e., less than 16 km] ... He stays at the sea floor, constantly surrounded by innumerable small fishes, who serve as his food and are fed by him in return: for his meal, (if I remember correctly what E. Pontoppidan writes,) lasts no longer than three months, and another three are then needed to digest it. His excrements nurture in the following an army of lesser fish, and for this reason, fishermen plumb after his resting place ... Gradually, Kraken ascends to the surface, and when he is at ten to twelve fathoms, the boats had better move out of his vicinity, as he will shortly thereafter burst up, like a floating island, spurting water from his dreadful nostrils and making ring waves around him, which can reach many miles. Could one doubt that this is the Leviathan of Job?[17]
In 1802, the French malacologist Pierre Dénys de Montfort recognized the existence of two kinds of giant octopus in Histoire Naturelle Générale et Particulière des Mollusques, an encyclopedic description of mollusks.[18] Montfort claimed that the first type, the kraken octopus, had been described by Norwegian sailors and American whalers, as well as ancient writers such as Pliny the Elder. The much larger second type, the colossal octopus, was reported to have attacked a sailing vessel from Saint-Malo, off the coast of Angola.[10]
Montfort later dared more sensational claims. He proposed that ten British warships, including the captured French ship of the line Ville de Paris, which had mysteriously disappeared one night in 1782, must have been attacked and sunk by giant octopuses. The British, however, knew (courtesy of a survivor from Ville de Paris) that the ships had been lost in a hurricane off the coast of Newfoundland in September 1782, resulting in a disgraceful revelation for Montfort.[13]
Since the late 18th century, kraken have been depicted in a number of ways, primarily as large octopus-like creatures, and it has often been alleged that Pontoppidan's kraken might have been based on sailors' observations of the giant squid. The kraken has also been depicted with spikes on its suckers. In the earliest descriptions, however, the creatures were more crab-like[15] than octopus-like, and generally possessed traits that are associated with large whales rather than with giant squid. Some traits of kraken resemble undersea volcanic activity occurring in the Iceland region, including bubbles of water; sudden, dangerous currents; and the appearance of new islets.[citation needed]
The legend of the kraken continues to the present day, with numerous references existing in popular culture, including film, literature, television, video games and other miscellaneous examples (e.g. postage stamps, a rollercoaster ride, and a rum product).
In 1830 Alfred Tennyson published the irregular sonnet The Kraken,[19] which described a massive creature that dwells at the bottom of the sea:
Below the thunders of the upper deep;
Far far beneath in the abysmal sea,
His ancient, dreamless, uninvaded sleep
The Kraken sleepeth: faintest sunlights flee
About his shadowy sides; above him swell
Huge sponges of millennial growth and height;
And far away into the sickly light,
From many a wondrous grot and secret cell
Unnumber'd and enormous polypi
Winnow with giant arms the slumbering green.
There hath he lain for ages, and will lie
Battening upon huge seaworms in his sleep,
Until the latter fire shall heat the deep;
Then once by man and angels to be seen,
In roaring he shall rise and on the surface die.
In Herman Melville's 1851 novel Moby-Dick (Chapter 59. Squid.) the Pequod encounters what chief mate Starbuck identifies as: "The great live squid, which, they say, few whale-ships ever beheld, and returned to their ports to tell of it." Narrator Ishmael adds: "There seems some ground to imagine that the great Kraken of Bishop Pontoppodan [sic] may ultimately resolve itself into Squid." He concludes the chapter by adding: "By some naturalists who have vaguely heard rumors of the mysterious creature, here spoken of, it is included among the class of cuttle-fish, to which, indeed, in certain external respects it would seem to belong, but only as the Anak of the tribe."[20]
Pontoppidan's description influenced Jules Verne's depiction of the famous giant squid in Twenty Thousand Leagues Under the Sea from 1870.[citation needed]
John Wyndham's apocalyptic science fiction novel The Kraken Wakes depicts humanity locked in an existential struggle with ocean-dwelling aliens.
Where was the first World Youth Day held?
Buenos Aires, Argentina

World Youth Day (WYD) is an event for young people organized by the Catholic Church. The next, World Youth Day 2019, will be held in Panama.
World Youth Day was initiated by Pope John Paul II in 1985. Its concept was influenced by the Light-Life Movement that has existed in Poland since the 1960s, where, during 13-day summer camps, Catholic young adults celebrated a "day of community". For the first celebration of WYD in 1986, bishops were invited to schedule an annual youth event to be held every Palm Sunday in their dioceses. It is celebrated at the diocesan level annually, and at the international level every two to three years at different locations. The 1995 World Youth Day closing Mass in the Philippines set a world record for the largest number of people gathered for a single religious event with 5 million attendees, a record surpassed when 6 million attended a Mass celebrated by Pope Francis in the Philippines 20 years later in 2015.[citation needed]
World Youth Day is celebrated in a way similar to many large festivals. The most emphasized and well-known traditional theme is the unity and presence of numerous different cultures. Flags and other national emblems are displayed, mainly by young people, to show their attendance at the events and to proclaim their own expressions of Catholicism, usually through chants and the singing of national songs with a Catholic theme.
Over the course of the major events, national items are traded between pilgrims: flags, shirts, crosses, and other Catholic icons are carried and later exchanged as souvenirs with pilgrims from other countries of the world. A spirit of mutual acceptance is also common, with different cultures coming together to appreciate one another.
Other widely recognized traditions include the Pope's public appearance, commencing with his arrival around the city in the "Popemobile" and concluding with his final Mass at the event. At the festival in Sydney (2008), pilgrims walked an estimated 10 kilometres as roads and other public transport systems were closed off.
Pope Benedict XVI criticized the tendency to view WYD as a kind of rock festival; he stressed that the event should not be considered a "variant of modern youth culture" but as the fruition of a "long exterior and interior path".[1]
1987 WYD was held in Buenos Aires, Argentina. 1989 WYD took place in Santiago de Compostela, Spain. 1991 WYD was held in Cz?stochowa, Poland. 1993 WYD was celebrated in Denver, Colorado, United States.
At WYD 1995, 5 million youths gathered at Luneta Park in Manila, Philippines, an event recognized as the largest crowd ever by the Guinness World Records.[2] In an initial comment immediately following the event, Cardinal Angelo Amato, Prefect of the Congregation for the Causes of Saints, stated that over 4 million people had participated.[3]
1997 WYD was held in Paris, France. 2000 WYD took place in Rome, Italy. 2002 WYD was held in Toronto, Canada. 2005 WYD was celebrated in Cologne, Germany. Thomas Gabriel composed for the final Mass on 21 August 2005 the Missa mundi (Mass of the world), representing five continents in style and instrumentation, in a European Kyrie influenced by the style of Bach, a South American Gloria with guitars and pan flutes, an Asian Credo with sitar, an African Sanctus with drums, and an Australian Agnus Dei with didgeridoos.[4]
Sydney, Australia, was chosen as the host of the 2008 World Youth Day celebrations. At the time it was announced in 2005, WYD 2008 was commended by the then Prime Minister of Australia, Kevin Rudd, and the Archbishop of Sydney, Cardinal George Pell.[5] World Youth Day 2008 was held in Sydney, with the Papal Mass held on the Sunday at Randwick Racecourse.
The week saw pilgrims from all continents participate in the Days in the Diocese program hosted by Catholic dioceses throughout Australia and New Zealand. Pope Benedict XVI arrived in Sydney on 13 July 2008 at Richmond Air Force Base. Cardinal Pell celebrated the Opening Mass at Barangaroo (East Darling Harbour) with other activities including the re-enactment of Christ's passion during the Stations of the Cross and the pope's boat cruise through Sydney Harbour. Pilgrims participated in a variety of youth festivities including visits to St Mary's Cathedral, daily catechesis and Mass led by bishops from around the world, concerts, visits to the tomb of Saint Mary MacKillop, the Vocations Expo at Darling Harbour, reception of the Sacrament of Reconciliation, and praying before the Blessed Sacrament during Adoration. The Mass and concert at Barangaroo saw an estimated crowd of 150,000.
The event attracted 250,000 foreign visiting pilgrims to Sydney, with an estimated 400,000 pilgrims attending Mass celebrated by Pope Benedict XVI on 20 July.[6]
In May 2007, it was reported that Guy Sebastian's song "Receive the Power" had been chosen as official anthem for World Youth Day (WYD08) to be held in Sydney in 2008. The song was co-written by Guy Sebastian and Gary Pinto, with vocals by Paulini.[7][8]
"Receive the Power"[9] was used extensively throughout the six days of World Youth Day in July 2008 and also in worldwide television coverage.[10]
In November 2008, a 200-page book, Receive the Power, was launched to commemorate World Youth Day 2008.[11]
Following the celebration of Holy Mass at Randwick Racecourse in Sydney on 20 July 2008, Pope Benedict XVI announced that the next International World Youth Day 2011 would be held in Madrid, Spain. This event was held from 16–21 August 2011.
There were nine official patron saints[12] for World Youth Day 2011 in addition to Blessed John Paul II: St. Isidore, St. John of the Cross, St. María de la Cabeza, St. John of Ávila, St. Teresa of Ávila, St. Rose of Lima, St. Ignatius of Loyola, St. Rafael Arnáiz, and St. Francis Xavier, patron of world missions. During his address to seminarians, Benedict announced that the Spanish mystic and patron of Spanish diocesan clerics St. John of Ávila would become a "Doctor of the Church",[13] a designation granted to only 34 saints throughout the twenty centuries of Church history.
An estimated 2,000,000 people attended an all-night vigil to complete the week, more than expected.
Since 2002, World Youth Day has been held every three years. After the 2011 event the next World Youth Day was scheduled a year earlier than usual, in 2013 in Rio de Janeiro, Brazil, in order to avoid any conflict with the 2014 FIFA World Cup being held in 12 different host cities throughout Brazil and the 2016 Summer Olympics being held in Rio de Janeiro.
Pope Francis announced at the end of the closing Mass for World Youth Day 2013 that Kraków, Poland, would be the venue for World Youth Day 2016.[14] An estimated three million people attended. Young people from many different countries around the world took part in the week-long event, which began on 25 July 2016 and ended on 31 July 2016 with an open-air Mass led by Pope Francis at Campus Misericordiae, where he announced that the next World Youth Day would take place in Panama, Central America, in 2019. The theme for that year's World Youth Day was "Blessed Are The Merciful, For They Shall Obtain Mercy", tying in closely with the Year of Mercy, which was initiated by Pope Francis on 8 December 2015 and concluded 20 November 2016.
At the Concluding Mass for World Youth Day 2016 in Kraków, Pope Francis announced that Panama City will host World Youth Day in 2019.[15]
Note 1: Attendance numbers reflect the total number at the closing Mass, which includes many locals who attended only that one event. Unless otherwise referenced, the numbers are quoted from the USCCB website.
Note 2: This lists languages used in the main international version of the anthem. Local versions of the anthem in other languages (and alternate versions) may have also been produced.
At the diocesan level celebrations are decided by a local team usually appointed by the ordinary.
Since these celebrations usually occur on Palm Sunday, they almost always include the Mass of Passion Sunday, when Jesus' entry into Jerusalem in his final days is commemorated.
Music, prayer, reconciliation opportunities, as well as adoration of the Blessed Sacrament may also be part of the celebrations.
When was La casa de Bernarda Alba written?
completed on 19 June 1936

The House of Bernarda Alba (Spanish: La casa de Bernarda Alba) is a play by the Spanish dramatist Federico García Lorca. Commentators have often grouped it with Blood Wedding and Yerma as a "rural trilogy". Lorca did not include it in his plan for a "trilogy of the Spanish land" (which remained unfinished at the time of his murder).[1]
Lorca described the play in its subtitle as a drama of women in the villages of Spain. The House of Bernarda Alba was Lorca's last play, completed on 19 June 1936, two months before Lorca's death during the Spanish Civil War. The play was first performed on 8 March 1945 at the Avenida Theatre in Buenos Aires.[2][3] The play centers on the events of a house in Andalusia during a period of mourning, in which Bernarda Alba (aged 60) wields total control over her five daughters Angustias (39 years old), Magdalena (30), Amelia (27), Martirio (24), and Adela (20). The housekeeper (La Poncia) and Bernarda's elderly mother (María Josefa) also live there.
The deliberate exclusion of any male character from the action helps build up the high level of sexual tension that is present throughout the play. Pepe "el Romano", the love interest of Bernarda's daughters and suitor of Angustias, never appears on stage. The play explores themes of repression, passion, and conformity, and inspects the effects of men upon women.
Upon her second husband's death, domineering matriarch Bernarda Alba imposes an eight-year mourning period on her household in accordance with her family tradition. Bernarda has five daughters, aged between 20 and 39, whom she has controlled inexorably and prohibited from any form of relationship. The mourning period further isolates them and tension mounts within the household.
After a mourning ritual at the family home, eldest daughter Angustias enters, having been absent while the guests were there. Bernarda fumes, assuming she had been listening to the men's conversation on the patio. Angustias inherited a large sum of money from her father, Bernarda's first husband, but Bernarda's second husband has left only small sums to his four daughters. Angustias' wealth attracts a young, attractive suitor from the village, Pepe el Romano. Her sisters are jealous, believing that it's unfair that plain, sickly Angustias should receive both the majority of the inheritance and the freedom to marry and escape their suffocating home environment.
Youngest sister Adela, stricken with sudden spirit and jubilation after her father's funeral, defies her mother's orders and dons a green dress instead of remaining in mourning black. Her brief taste of youthful joy suddenly shatters when she discovers that Angustias will be marrying Pepe. Poncia, Bernarda's maid, advises Adela to bide her time: Angustias will probably die delivering her first child. Distressed, Adela threatens to run into the streets in her green dress, but her sisters manage to stop her. Suddenly they see Pepe coming down the street. She stays behind while her sisters rush to get a look, until a maid hints that she could get a better look from her bedroom window.
As Poncia and Bernarda discuss the daughters' inheritances upstairs, Bernarda sees Angustias wearing makeup. Appalled that Angustias would defy her orders to remain in a state of mourning, Bernarda violently scrubs the makeup off her face. The other daughters enter, followed by Bernarda's elderly mother, María Josefa, who is usually locked away in her room. María Josefa announces that she wants to get married; she also warns Bernarda that she'll turn her daughters' hearts to dust if they cannot be free. Bernarda forces her back into her room.
It turns out that Adela and Pepe are having a secret affair. Adela becomes increasingly volatile, defying her mother and quarreling with her sisters, particularly Martirio, who reveals her own feelings for Pepe. Adela shows the most horror when the family hears the latest gossip about how the townspeople recently dealt with a young woman who shamelessly delivered and killed an illegitimate baby.
Tension explodes as family members confront one another and Bernarda pursues Pepe with a gun. A gunshot is heard outside. Martirio and Bernarda return and imply that Pepe has been killed. Adela flees into another room. With Adela out of earshot, Martirio tells everyone else that Pepe actually fled on his pony. Bernarda remarks that, as a woman, she can't be blamed for poor aim. She immediately calls for Adela, who has locked herself into a room. When Adela doesn't respond, Bernarda and Poncia force the door open. Soon Poncia's shriek is heard. She returns with her hands clasped around her neck and warns the family not to enter the room. Adela, not knowing that Pepe survived, has hanged herself.
The closing lines of the play show Bernarda characteristically preoccupied with the family's reputation. She insists that Adela has died a virgin and demands that this be made known to the whole town. (The text implies that Adela and Pepe had an affair; Bernarda's moral code and pride keep this from registering). No one is to cry.
Film adaptations include:
In 1967, choreographer Eleo Pomare adapted the play into his ballet, Las Desenamoradas,[7] featuring music by John Coltrane.
In 2006, the play was adapted into musical form by Michael John LaChiusa. Under the title Bernarda Alba, it opened at Lincoln Center's Mitzi Newhouse Theater on March 6, 2006, starring Phylicia Rashad in the title role, with a cast that also included Daphne Rubin-Vega.[8]
In 2012, Emily Mann adapted Federico Garca Lorca's original, shifting the location from 1930s Andalusia, Spain, to contemporary Iran. Her adaptation opened at the Almeida Theatre under the director Bijan Sheibani, starring Shohreh Aghdashloo as the title character and Hara Yannas as Adela.[9]
Steven Dykes wrote a production named 'Homestead' for the American Theatre Arts (ATA) students in 2004 which was revived in 2013 (The Barn Theatre). The original production went on to perform at The Courtyard in Covent Garden, with members of an ATA graduate company Shady Dolls.
In August 2012, the Hyderabad, India based theatre group Sutradhar staged Birjees Qadar Ka Kunba, an Urdu/Hindustani adaptation of The House of Bernarda Alba.[10] Directed by Vinay Varma and scripted by Dr. Raghuvir Sahay, the play adapted Lorca's original to a more Indian matriarchal family setup. The play featured a cast of more than 10 women actors, with Vaishali Bisht as Birjees Qadar (Bernarda Alba) and Deepti Girotra as Hasan baandi (La Poncia).[11]