🎉Q&A Life🥳
How fast can sound travel in a second?
🚨A similar effect occurs in the atmosphere. Project Mogul successfully used this effect to detect a nuclear explosion at a considerable distance.
Who discovered sandwich (now hawaiian) isles?
James Cook🚨The Hawaiian Islands (Hawaiian: Mokupuni o Hawaii) are an archipelago of eight major islands, several atolls, numerous smaller islets, and seamounts in the North Pacific Ocean, extending some 1,500 miles (2,400 kilometers) from the island of Hawaiʻi in the south to northernmost Kure Atoll. Formerly the group was known to Europeans and Americans as the "Sandwich Islands", a name chosen by James Cook in honor of the then First Lord of the Admiralty John Montagu, 4th Earl of Sandwich. The contemporary name is derived from the name of the largest island, Hawaii Island. The Hawaiian monarchy was overthrown by wealthy U.S. and European[1] settlers in 1893. The usurpers then established a republic, and despite opposition from the majority of the Hawaiian people,[2] successfully negotiated with the United States for annexation in 1898.[3] The U.S. state of Hawaii now occupies the archipelago almost in its entirety (including the uninhabited Northwestern Hawaiian Islands), with the sole exception of Midway Island, which instead separately belongs to the United States as one of its unincorporated territories within the United States Minor Outlying Islands.
The Hawaiian Islands are the exposed peaks of a great undersea mountain range known as the Hawaiian–Emperor seamount chain, formed by volcanic activity over a hotspot in the Earth's mantle. The islands are about 1,860 miles (3,000 km) from the nearest continent.[4] Captain James Cook visited the islands on January 18, 1778 and named them the "Sandwich Islands" in honor of John Montagu, 4th Earl of Sandwich, who was one of his sponsors as the First Lord of the Admiralty.[5] This name was in use until the 1840s, when the local name "Hawaii" gradually began to take precedence.[6] The Hawaiian Islands have a total land area of 6,423.4 square miles (16,636.5 km²). Except for Midway, which is an unincorporated territory of the United States, these islands and islets are administered as Hawaii, the 50th state of the United States.[7] The eight main islands of Hawaii (also called the Hawaiian Windward Islands) are listed here. All except Kahoʻolawe are inhabited.[8] Smaller islands, atolls, and reefs (all west of Niʻihau are uninhabited) form the Northwestern Hawaiian Islands, or Hawaiian Leeward Islands. The state of Hawaii counts 137 "islands" in the Hawaiian chain.[19] This number includes all minor islands and islets, or very small islands, offshore of the main islands (listed above) and individual islets in each atoll.
This chain of islands, or archipelago, developed as the Pacific Plate moved slowly northwestward over a hotspot in the Earth's mantle at a rate of approximately 32 miles (51 km) per million years. Thus, the southeast island is volcanically active, whereas the islands on the northwest end of the archipelago are older and typically smaller, due to longer exposure to erosion. The age of the archipelago has been estimated using potassium-argon dating methods.[20] From this study and others,[21][22] it is estimated that the northwesternmost island, Kure Atoll, is the oldest at approximately 28 million years (Ma), while the southeasternmost island, Hawaiʻi, is approximately 0.4 Ma (400,000 years). The only active volcanism in the last 200 years has been on the southeastern island, Hawaiʻi, and on the submerged but growing volcano to the extreme southeast, Lōʻihi.
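A rough consistency check on these figures, sketched in Python (it uses only the rate and age quoted above, and treating the plate motion as constant is a simplification):

    # Distance the Pacific Plate would carry the oldest island, given the
    # quoted average rate and the quoted age of Kure Atoll.
    rate_km_per_myr = 51        # approximately 32 miles (51 km) per million years
    age_kure_myr = 28           # Kure Atoll, the oldest island, ~28 Ma
    print(rate_km_per_myr * age_kure_myr)   # 1428 km, the same order of magnitude
                                            # as the ~2,400 km extent of the chain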
The Hawaiian Volcano Observatory of the USGS documents recent volcanic activity and provides images and interpretations of the volcanism. Kīlauea has been erupting nearly continuously since 1983. Almost all of the magma of the hotspot has the composition of basalt, and so the Hawaiian volcanoes are composed almost entirely of this igneous rock. There is very little coarser-grained gabbro and diabase. Nephelinite is exposed on the islands but is extremely rare. The majority of eruptions in Hawaiʻi are Hawaiian-type eruptions because basaltic magma is relatively fluid compared with magmas typically involved in more explosive eruptions, such as the andesitic magmas that produce some of the spectacular and dangerous eruptions around the margins of the Pacific basin. Hawaiʻi island (the Big Island) is the biggest and youngest island in the chain, built from five volcanoes. Mauna Loa, taking up over half of the Big Island, is the largest shield volcano on the Earth. The measurement from sea level to summit is more than 2.5 miles (4 km), from sea level to sea floor about 3.1 miles (5 km).[23]
The Hawaiian Islands have many earthquakes, generally caused by volcanic activity. Most of the early earthquake monitoring took place in Hilo, by missionaries Titus Coan, Sarah J. Lyman and her family. From 1833 to 1896, approximately 4 or 5 earthquakes were reported per year.[24] Hawaii accounted for 7.3% of the United States' reported earthquakes with a magnitude 3.5 or greater from 1974 to 2003, with a total of 1,533 earthquakes. Hawaii ranked as the state with the third most earthquakes over this time period, after Alaska and California.[25] On October 15, 2006, there was an earthquake with a magnitude of 6.7 off the northwest coast of the island of Hawaii, near the Kona area of the Big Island. The initial earthquake was followed approximately five minutes later by a magnitude 5.7 aftershock. Minor-to-moderate damage was reported on most of the Big Island. Several major roadways became impassable from rock slides, and effects were felt as far away as Honolulu, Oahu, nearly 150 miles (240 km) from the epicenter. Power outages lasted for several hours to days. Several water mains ruptured. No deaths or life-threatening injuries were reported. On May 4, 2018, there was a 6.9 earthquake in the zone of volcanic activity from Kīlauea. Earthquakes are monitored by the Hawaiian Volcano Observatory run by the USGS.
The Hawaiian Islands are subject to tsunamis, great waves that strike the shore. Tsunamis are most often caused by earthquakes somewhere in the Pacific. The waves produced by the earthquakes travel at speeds of 400–500 miles per hour (600–800 km/h) and can affect coastal regions thousands of miles (kilometers) away. Tsunamis may also originate from the Hawaiian Islands. Explosive volcanic activity can cause tsunamis. The island of Molokaʻi had a catastrophic collapse or debris avalanche over a million years ago; this underwater landslide likely caused tsunamis. The Hilina Slump on the island of Hawaiʻi is another potential place for a large landslide and resulting tsunami. The city of Hilo on the Big Island has been most affected by tsunamis, where the in-rushing water is accentuated by the shape of Hilo Bay. Coastal cities have tsunami warning sirens. A tsunami resulting from an earthquake in Chile hit the islands on February 27, 2010. It was relatively minor, but local emergency management officials utilized the latest technology and ordered evacuations in preparation for a possible major event.
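Two small, illustrative calculations from the figures above, sketched in Python; the 6,000-mile trans-Pacific distance is an assumed round number for illustration, not a figure from the passage:

    # Implied U.S. total of magnitude 3.5+ earthquakes, 1974-2003, if Hawaii's
    # 1,533 events were 7.3% of the national count.
    print(round(1533 / 0.073))      # roughly 21,000 events nationwide

    # Hours for a tsunami to cross an assumed 6,000 miles of open ocean at the
    # quoted 400-500 mph wave speed.
    print(6000 / 500, 6000 / 400)   # about 12 to 15 hours in transit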
The Governor declared it a "good drill" for the next major event. A tsunami resulting from an earthquake in Japan hit the islands on March 11, 2011. It was relatively minor, but local officials ordered evacuations in preparation for a possible major event. The tsunami caused about $30.1 million in damages.[26]
The islands are home to many endemic species. Since human settlement, first by Polynesians, non-native trees, plants, and animals were introduced. These included species such as rats and pigs, which have preyed on native birds and invertebrates that initially evolved in the absence of such predators. The growing population of humans has also led to deforestation, forest degradation, treeless grasslands, and environmental degradation. As a result, many species which depended on forest habitats and food became extinct, with many current species facing extinction. As humans cleared land for farming, monocultural crop production replaced multi-species systems. The arrival of the Europeans had a more significant impact, with the promotion of large-scale single-species export agriculture and livestock grazing. This led to increased clearing of forests, and the development of towns, adding many more species to the list of extinct animals of the Hawaiian Islands. As of 2009, many of the remaining endemic species are considered endangered.[27]
On June 15, 2006, President George W. Bush issued a public proclamation creating Papahānaumokuākea Marine National Monument under the Antiquities Act of 1906. The Monument encompasses the northwestern Hawaiian Islands and surrounding waters, forming the largest[28] marine wildlife reserve in the world. In August 2010, UNESCO's World Heritage Committee added Papahānaumokuākea to its list of World Heritage Sites.[29][30][31] On August 26, 2016, President Barack Obama greatly expanded Papahānaumokuākea, quadrupling it from its original size.[32][33][34]
The climate of the Hawaiian Islands is tropical, but it experiences many different climates, depending on altitude and weather.[35] The islands receive most rainfall from the trade winds on their north and east flanks (the windward side) as a result of orographic precipitation.[35] Coastal areas in general, and especially the south and west flanks or leeward sides, tend to be drier.[35] In general, the lowlands of the Hawaiian Islands receive most of their precipitation during the winter months (October to April).[35] Drier conditions generally prevail from May to September.[35] Tropical storms, and occasional hurricanes, tend to occur from July through November.[35] Coordinates: 21°N 157°W
When was the first bible written in english?
7th century🚨Partial Bible translations into languages of the English people can be traced back to the late 7th century, including translations into Old and Middle English. More than 450 translations into English have been written. The New Revised Standard Version is the version most commonly preferred by biblical scholars.[1] In the United States, 55% of survey respondents who read the Bible reported using the King James Version in 2014, followed by 19% for the New International Version, with other versions used by fewer than 10%.[2]
Although John Wycliffe is often credited with the first translation of the Bible into English, there were, in fact, many translations of large parts of the Bible centuries before Wycliffe's work. The English Bible was first translated from the Latin Vulgate into Old English by a few select monks and scholars. Such translations were generally in the form of prose or as interlinear glosses (literal translations above the Latin words). Very few complete translations existed during that time. Rather, most of the books of the Bible existed separately and were read as individual texts. Thus, the sense of the Bible as history that often exists today did not exist at that time. Instead, an allegorical rendering of the Bible was more common, and translations of the Bible often included the writer's own commentary on passages in addition to the literal translation.
Toward the end of the 7th century, the Venerable Bede began a translation of scripture into Old English (also called Anglo-Saxon). Aldhelm (c. 639–709) translated the complete Book of Psalms and large portions of other scriptures into Old English. In the 10th century an Old English translation of the Gospels was made in the Lindisfarne Gospels: a word-for-word gloss inserted between the lines of the Latin text by Aldred, Provost of Chester-le-Street.[3] This is the oldest extant translation of the Gospels into the English language.[3] The Wessex Gospels (also known as the West-Saxon Gospels) are a full translation of the four gospels into a West Saxon dialect of Old English. Produced in approximately 990, they are the first translation of all four gospels into English without the Latin text. In the 11th century, Abbot Ælfric translated much of the Old Testament into Old English. The Old English Hexateuch is an illuminated manuscript of the first six books of the Old Testament; another version, without lavish illustrations, includes a translation of the Book of Judges in addition to the five books of the Pentateuch.
The Ormulum is in Middle English of the 12th century. Like its Old English precursor from Ælfric, an Abbot of Eynsham, it includes very little Biblical text, and focuses more on personal commentary. This style was adopted by many of the original English translators. For example, the story of the Wedding at Cana is almost 800 lines long, but fewer than 40 lines are the actual translation of the text. An unusual characteristic is that the translation mimics Latin verse, and so is similar to the better known and appreciated 14th-century English poem, Cursor Mundi. Richard Rolle (1290–1349) wrote an English Psalter. Many religious works are attributed to Rolle, but it has been questioned how many are genuinely from his hand.
Many of his works were concerned with personal devotion, and some were used by the Lollards.[4] The 14th century theologian John Wycliffe is credited with translating what is now known as Wycliffe's Bible, though it is not clear how much of the translation he himself did.[5] This translation came out in two different versions. The earlier text is characterised by a strong adherence to the word order of Latin, and might have been difficult for the layperson to comprehend. The later text made more concessions to the native grammar of English.
Early Modern English Bible translations are those made between about 1500 and 1800, the period of Early Modern English. This, the first major period of Bible translation into the English language, began with the introduction of the Tyndale Bible. The first complete edition of his New Testament was in 1526. Tyndale used the Greek and Hebrew texts of the New Testament (NT) and Old Testament (OT) in addition to Jerome's Latin translation. He was the first translator to use the printing press, which enabled the distribution of several thousand copies of his New Testament translation throughout England. Tyndale did not complete his Old Testament translation.
The first printed English translation of the whole bible was produced by Miles Coverdale in 1535, using Tyndale's work together with his own translations from the Latin Vulgate or German text. After much scholarly debate it is concluded that this was printed in Antwerp, and the colophon gives the date as 4 October 1535. This first edition was adapted by Coverdale for his first "authorised version", known as the Great Bible, of 1539. Other early printed versions were the Geneva Bible (1560), notable for being the first Bible divided into verses; the Bishop's Bible (1568), which was an attempt by Elizabeth I to create a new authorised version; and the Authorized King James Version of 1611. The first complete Roman Catholic Bible in English was the Douay–Rheims Bible, of which the New Testament portion was published in Rheims in 1582 and the Old Testament somewhat later in Douay in Gallicant Flanders. The Old Testament was completed by the time the New Testament was published, but due to extenuating circumstances and financial issues was not published until nearly three decades later, in two editions, the first released in 1609, and the rest of the OT in 1610. In this version, the seven deuterocanonical books are mingled with the other books, rather than kept separate in an appendix.
While early English Bibles were generally based on a small number of Greek texts, or on Latin translations, modern English translations of the Bible are based on a wider variety of manuscripts in the original languages (Greek and Hebrew). The translators put much scholarly effort into cross-checking the various sources such as the Septuagint, Textus Receptus, and Masoretic Text. Relatively recent discoveries such as the Dead Sea scrolls provide additional reference information. There is some controversy over which texts should be used as a basis for translation, as some of the alternate sources do not include phrases (or sometimes entire verses) which are found only in the Textus Receptus. Some say the alternate sources were poorly representative of the texts used in their time, whereas others claim the Textus Receptus includes passages that were added to the alternate texts improperly. These controversial passages are not the basis for disputed issues of doctrine, but tend to be additional stories or snippets of phrases.
Many modern English translations, such as the New International Version, contain limited text notes indicating where differences occur in original sources.[6] A somewhat greater number of textual differences are noted in the New King James Bible, indicating hundreds of New Testament differences between the Nestle-Aland, the Textus Receptus, and the Hodges edition of the Majority Text. The differences in the Old Testament are less well documented, but do contain some references to differences between consonantal interpretations in the Masoretic Text, the Dead Sea Scrolls, and the Septuagint. Even with these hundreds of differences, however, a more complete listing is beyond the scope of most single volume Bibles (see Critical Translations below).
Modern translations take different approaches to the rendering of the original languages. The approaches can usually be considered to be somewhere on a scale between two extremes: formal equivalence and dynamic equivalence. Some translations have been motivated by a strong theological distinctive, such as the conviction that God's name be preserved in a Semitic form, seen in Sacred Name Bibles. The Purified Translation of the Bible was done to promote the idea that Jesus and early Christians did not drink wine, but grape juice. Also, the New World Translation of the Holy Scriptures was partially motivated by a conviction that Jesus was not divine, and was translated accordingly. This translation uses the name Jehovah even in places where the Greek text does not use it, but where the passage is quoting a passage from the Hebrew Old Testament.
While most translations are made by committees of scholars in order to avoid bias or idiosyncrasy, translations are sometimes made by individuals. The translation of J.B. Phillips (1958), The Bible in Living English (1972) by Stephen T. Byington, J.N. Darby's Darby Bible (1890), Heinz Cassirer's translation (1989), R.A. Knox (1950), Gerrit Verkuyl's Berkeley Version (1959), The Complete Jewish Bible (1998) by Dr. David H. Stern, Robert Young's Literal Translation (1862), Jay P. Green's Literal Translation (1985), The Emphatic Diaglott by Benjamin Wilson (1864), Noah Webster's Bible Translation (1833), The Original Aramaic Bible in Plain English (2010) by David Bauscher, American King James Version (1999) by Michael Engelbrite, The Living Bible (1971) by Kenneth N. Taylor, The Modern Reader's Bible (1914) by Richard Moulton, The Five Pauline Epistles, A New Translation (1900) by William Gunion Rutherford, Joseph Bryant Rotherham's Emphasized Bible (1902), Professor S. H. Hooke's The Bible in Basic English (1949), The Holy Name Bible containing the Holy Name Version of the Old and New Testaments (1963) by Angelo Traina, and Eugene H. Peterson's The Message (2002) are largely the work of individual translators. Others, such as Robert Alter, N. T. Wright and Dele Ikeorha have translated portions of the Bible.
Most translations make the translators' best attempt at a single rendering of the original, relying on footnotes where there might be alternative translations or textual variants. An alternative is taken by the Amplified Bible. In cases where a word or phrase admits of more than one meaning, the Amplified Bible presents all the possible interpretations, allowing the reader to choose one. For example, the first two verses of the Amplified Bible read: In the beginning God (Elohim) created [by forming from nothing] the heavens and the earth.
The earth was formless and void or a waste and emptiness, and darkness was upon the face of the deep [primeval ocean that covered the unformed earth]. The Spirit of God was moving (hovering, brooding) over the face of the waters.[7]
While most translations attempt to synthesize the various texts in the original languages, some translations also translate one specific textual source, generally for scholarly reasons. A single volume example for the Old Testament is The Dead Sea Scrolls Bible (ISBN 0-06-060064-0) by Martin Abegg, Peter Flint and Eugene Ulrich. The Comprehensive New Testament (ISBN 978-0-9778737-1-5) by T. E. Clontz and J. Clontz presents a scholarly view of the New Testament text by conforming to the Nestle-Aland 27th edition and extensively annotating the translation to fully explain different textual sources and possible alternative translations.[8][9] A Comparative Psalter (ISBN 0-19-529760-1) edited by John Kohlenberger presents a comparative diglot translation of the Psalms of the Masoretic Text and the Septuagint, using the Revised Standard Version and the New English Translation of the Septuagint. R. A. Knox's Translation of the Vulgate into English is another example of a single source translation.
Jewish English Bible translations are modern English Bible translations that include the books of the Hebrew Bible (Tanakh) according to the Masoretic Text, and according to the traditional division and order of Torah, Nevi'im, and Ketuvim. Jewish translations often also reflect traditional Jewish interpretations of the Bible, as opposed to the Christian understanding that is often reflected in non-Jewish translations. For example, Jewish translations translate the Hebrew word almah in Isaiah 7:14 as young woman, while many Christian translations render the word as virgin. While modern biblical scholarship is similar for both Christians and Jews, there are distinctive features of Jewish translations, even those created by academic scholars. These include the avoidance of Christological interpretations, adherence to the Masoretic Text (at least in the main body of the text, as in the new Jewish Publication Society (JPS) translation) and greater use of classical Jewish exegesis. Some translations prefer names transliterated from the Hebrew, though the majority of Jewish translations use the Anglicized forms of biblical names.
The first Jewish translation of the Bible into English was by Isaac Leeser in the 19th century. The JPS produced two of the most popular Jewish translations, namely the JPS The Holy Scriptures of 1917 and the NJPS Tanakh (first printed in a single volume in 1985, second edition in 1999). Since the 1980s there have been multiple efforts among Orthodox publishers to produce translations that are not only Jewish, but also adhere to Orthodox norms. Among these are The Living Torah and Nach by Aryeh Kaplan and others, the Torah and other portions in an ongoing project by Everett Fox, and the ArtScroll Tanakh.
The evangelical Christian Booksellers Association lists the most popular versions of the Bible sold by their members in the United States. Through 29 December 2012, the top 5 best selling translations (based on both dollar and unit sales) are as follows:[10] Sales are affected by denomination and religious affiliation. For example, the most popular Jewish version would not compete with rankings of a larger audience. Sales data can be affected by the method of marketing.
Some translations are directly marketed to particular denominations or local churches, and many Christian booksellers only offer Protestant Bibles, so Catholic and Orthodox Bibles may not appear as high on the CBA rank. A study published in 2014 by The Center for the Study of Religion and American Culture at Indiana University and Purdue University found that Americans read versions of the Bible as follows:[11][12]:12–15
What is the fastest sea animal on earth?
the black marlin🚨This is a list of the fastest animals in the world, grouped by types of animal. The fastest land animal is the cheetah, which has a recorded speed of 109.4–120.7 km/h (68.0–75.0 mph).[1] The peregrine falcon is the fastest bird and the fastest member of the animal kingdom with a diving speed of 389 km/h (242 mph).[2] The fastest animal in the sea is the black marlin, which has a recorded speed of 129 km/h (80 mph).[3]
When comparing various classes of animals, a different unit is used: body length per second. The fastest organism on earth, relative to body length, is the South Californian mite Paratarsotomus macropalpis, which has a speed of 322 body lengths per second.[4] The equivalent speed for a human running as fast as this mite would be 1,300 mph (2,092 km/h).[5] This is far in excess of the previous record holder, the Australian tiger beetle, Cicindela eburneola, the fastest insect in the world relative to body size, which has been recorded at 1.86 metres per second (6.7 km/h; 4.2 mph) or 171 body lengths per second.[6] The cheetah, the fastest land mammal, scores at only 16 body lengths per second,[4] while Anna's hummingbird has the highest known length-specific velocity attained by any vertebrate.
Compared to other land animals, humans are exceptionally capable of endurance, but exceptionally incapable of great speed. In the absence of significant external factors, non-athletic humans tend to walk at about 1.4 m/s (5.0 km/h; 3.1 mph) and run at about 5.1 m/s (18 km/h; 11 mph).[91][92][93] Although humans are capable of walking at speeds from nearly 0 m/s to upwards of 2.5 m/s (9.0 km/h; 5.6 mph) and running 1 mile (1.6 kilometers) in 6.5 minutes, humans typically choose to use only a small range within these speeds.[94]
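A quick check of the quoted human-equivalent speed, sketched in Python; the 1.8 m body length is an assumed typical human height, not a figure from the list:

    # Convert 322 body lengths per second to an absolute speed for a human.
    body_lengths_per_s = 322            # Paratarsotomus macropalpis, as quoted
    human_length_m = 1.8                # assumption
    speed_m_per_s = body_lengths_per_s * human_length_m
    print(round(speed_m_per_s * 3.6))             # ~2,087 km/h
    print(round(speed_m_per_s * 3.6 / 1.609344))  # ~1,297 mph, close to the quoted 1,300 mph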
What year was the first king of england crowned?
On 12 July 927🚨The Kingdom of England (French: Royaume d'Angleterre; Danish: Kongeriget England; German: Königreich England) was a sovereign state on the island of Great Britain from the 10th century, when it emerged from various Anglo-Saxon kingdoms, until 1707, when it united with Scotland to form the Kingdom of Great Britain. In the early 10th century the Anglo-Saxon kingdoms, united by Æthelstan (r. 927–939), became part of the North Sea Empire of Cnut the Great, a personal union between England, Denmark and Norway. The Norman conquest of England in 1066 led to the transfer of the English capital city and chief royal residence from the Anglo-Saxon one at Winchester to Westminster, and the City of London quickly established itself as England's largest and principal commercial centre.[1]
Histories of the kingdom of England from the Norman conquest of 1066 conventionally distinguish periods named after successive ruling dynasties: Norman 1066–1154, Plantagenet 1154–1485, Tudor 1485–1603 and Stuart 1603–1714 (interrupted by the Interregnum of 1649–1660). Dynastically, all English monarchs after 1066 ultimately claim descent from the Normans; the distinction of the Plantagenets is merely conventional, beginning with Henry II (reigned 1154–1189) as from that time, the Angevin kings became "more English in nature"; the houses of Lancaster and York are both Plantagenet cadet branches, the Tudor dynasty claimed descent from Edward III via John Beaufort, and James VI and I of the House of Stuart claimed descent from Henry VII via Margaret Tudor.
The completion of the conquest of Wales by Edward I in 1284 put Wales under the control of the English crown. Edward III (reigned 1327–1377) transformed the Kingdom of England into one of the most formidable military powers in Europe; his reign also saw vital developments in legislation and government, in particular the evolution of the English parliament. From the 1340s the kings of England also laid claim to the crown of France, but after the Hundred Years' War and the outbreak of the Wars of the Roses in 1455, the English were no longer in any position to pursue their French claims and lost all their land on the continent, except for Calais. After the turmoils of the Wars of the Roses, the Tudor dynasty ruled during the English Renaissance and again extended English monarchical power beyond England proper, achieving the full union of England and the Principality of Wales in 1542. Henry VIII oversaw the English Reformation, and his daughter Elizabeth I (reigned 1558–1603) the Elizabethan Religious Settlement, meanwhile establishing England as a great power and laying the foundations of the British Empire by claiming possessions in the New World. From the accession of James VI and I in 1603, the Stuart dynasty ruled England in personal union with Scotland and Ireland. Under the Stuarts, the kingdom plunged into civil war, which culminated in the execution of Charles I in 1649. The monarchy returned in 1660, but the Civil War had established the precedent that an English monarch cannot govern without the consent of Parliament. This concept became legally established as part of the Glorious Revolution of 1688.
From this time the kingdom of England, as well as its successor state the United Kingdom, functioned in effect as a constitutional monarchy.[nb 5] On 1 May 1707, under the terms of the Acts of Union 1707, the kingdoms of England and Scotland united to form the Kingdom of Great Britain.[2][3]
The Anglo-Saxons referred to themselves as the Engle or the Angelcynn, originally names of the Angles. They called their land Engla land, meaning "land of the English", by Æthelweard Latinized Anglia, from an original Anglia vetus, the purported homeland of the Angles (called Angulus by Bede).[4] The name Engla land became England by haplology during the Middle English period (Engle-land, Engelond).[5] The Latin name was Anglia or Anglorum terra, the Old French and Anglo-Norman one Angleterre.[6] By the 14th century, England was also used in reference to the entire island of Great Britain. The standard title for monarchs from Æthelstan until John was Rex Anglorum ("King of the English"). Canute the Great, a Dane, was the first to call himself "King of England". In the Norman period Rex Anglorum remained standard, with occasional use of Rex Anglie ("King of England"). From John's reign onwards all other titles were eschewed in favour of Rex or Regina Anglie. In 1604 James I, who had inherited the English throne the previous year, adopted the title (now usually rendered in English rather than Latin) King of Great Britain. The English and Scottish parliaments, however, did not recognise this title until the Acts of Union of 1707.
The kingdom of England emerged from the gradual unification of the early medieval Anglo-Saxon kingdoms known as the Heptarchy: East Anglia, Mercia, Northumbria, Kent, Essex, Sussex, and Wessex. The Viking invasions of the 9th century upset the balance of power between the English kingdoms, and native Anglo-Saxon life in general. The English lands were unified in the 10th century in a reconquest completed by King Æthelstan in 927 CE. During the Heptarchy, the most powerful king among the Anglo-Saxon kingdoms might become acknowledged as Bretwalda, a high king over the other kings. The decline of Mercia allowed Wessex to become more powerful. It absorbed the kingdoms of Kent and Sussex in 825. The kings of Wessex became increasingly dominant over the other kingdoms of England during the 9th century. In 827, Northumbria submitted to Egbert of Wessex at Dore, briefly making Egbert the first king to reign over a united England. In 886, Alfred the Great retook London, which he apparently regarded as a turning point in his reign. The Anglo-Saxon Chronicle says that "all of the English people (all Angelcyn) not subject to the Danes submitted themselves to King Alfred."[7] Asser added that "Alfred, king of the Anglo-Saxons, restored the city of London splendidly ... and made it habitable once more."[8] Alfred's "restoration" entailed reoccupying and refurbishing the nearly deserted Roman walled city, building quays along the Thames, and laying a new city street plan.[9] It is probably at this point that Alfred assumed the new royal style 'King of the Anglo-Saxons.' During the following years Northumbria repeatedly changed hands between the English kings and the Norwegian invaders, but was definitively brought under English control by Eadred in 954, completing the unification of England. At about this time, Lothian, the northern part of Northumbria (Roman Bernicia), was ceded to the Kingdom of Scotland.
On 12 July 927 the monarchs of Britain gathered at Eamont in Cumbria to recognise Æthelstan as king of the English. This can be considered England's 'foundation date', although the process of unification had taken almost 100 years. England has remained in political unity ever since.
During the reign of Æthelred the Unready (978–1016), a new wave of Danish invasions was orchestrated by Sweyn I of Denmark, culminating after a quarter-century of warfare in the Danish conquest of England in 1013. But Sweyn died on 2 February 1014, and Æthelred was restored to the throne. In 1015, Sweyn's son Cnut the Great (commonly known as Canute) launched a new invasion. The ensuing war ended with an agreement in 1016 between Canute and Æthelred's successor, Edmund Ironside, to divide England between them, but Edmund's death on 30 November of that year left England united under Danish rule. This continued for 26 years until the death of Harthacnut in June 1042. He was the son of Canute and Emma of Normandy (the widow of Æthelred the Unready) and had no heirs of his own; he was succeeded by his half-brother, Æthelred's son, Edward the Confessor. The Kingdom of England was once again independent.
The peace lasted until the death of the childless Edward in January 1066. His brother-in-law was crowned King Harold, but his cousin William the Conqueror, Duke of Normandy, immediately claimed the throne for himself. William launched an invasion of England and landed in Sussex on 28 September 1066. Harold and his army were in York following their victory against the Norwegians at the Battle of Stamford Bridge (25 September 1066) when the news reached him. He decided to set out without delay and confront the Norman army in Sussex so marched southwards at once, despite the army not being properly rested following the battle with the Norwegians. The armies of Harold and William faced each other at the Battle of Hastings (14 October 1066), in which the English army, or Fyrd, was defeated, Harold and his two brothers were slain, and William emerged as victor. William was then able to conquer England with little further opposition. He was not, however, planning to absorb the Kingdom into the Duchy of Normandy. As a mere duke, William owed allegiance to Philip I of France, whereas in the independent Kingdom of England he could rule without interference. He was crowned on 25 December 1066 in Westminster Abbey, London.
In 1092, William II led an invasion of Strathclyde, a Celtic kingdom in what is now southwest Scotland and Cumbria. In doing so, he annexed what is now the county of Cumbria to England. In 1124, Henry I ceded what is now southeast Scotland (called Lothian) to the Kingdom of Scotland, in return for the King of Scotland's loyalty. This final cession established what would become the traditional borders of England, which have remained largely unchanged since then (except for occasional and temporary changes). This area of land had previously been a part of the Anglian Kingdom of Northumbria. Lothian contained what later became the Scottish capital, Edinburgh. This arrangement was later finalised in 1237 by the Treaty of York. The Duchy of Aquitaine came into personal union with the Kingdom of England upon the accession of Henry II, who had married Eleanor, Duchess of Aquitaine. The Kingdom of England and the Duchy of Normandy remained in personal union until John Lackland, Henry II's son and fifth-generation descendant of William I, lost the continental possessions of the Duchy to Philip II of France in 1204.
A few remnants of Normandy, including the Channel Islands, remained in John's possession, together with most of the Duchy of Aquitaine.
Up until the Norman conquest of England, Wales had remained for the most part independent of the Anglo-Saxon kingdoms, although some Welsh kings did sometimes acknowledge the Bretwalda. Soon after the Norman conquest of England, however, some Norman lords began to attack Wales. They conquered and ruled parts of it, acknowledging the overlordship of the Norman kings of England but with considerable local independence. Over many years these "Marcher Lords" conquered more and more of Wales, against considerable resistance led by various Welsh princes, who also often acknowledged the overlordship of the Norman kings of England. Edward I defeated Llywelyn ap Gruffudd, and so effectively conquered Wales, in 1282. He created the title Prince of Wales for his heir, the future Edward II, in 1301. Edward I's conquest was brutal and the subsequent repression considerable, as the magnificent Welsh castles such as Conwy, Harlech, and Caernarfon attest; but this event re-united under a single ruler the lands of Roman Britain for the first time since the establishment of the Kingdom of the Jutes in Kent in the 5th century, some 700 years before. Accordingly, this was a highly significant moment in the history of medieval England, as it re-established links with the pre-Saxon past. These links were exploited for political purposes to unite the peoples of the kingdom, including the Anglo-Normans, by popularising Welsh legends. The Welsh language, derived from the British language, continued to be spoken by the majority of the population of Wales for at least another 500 years, and is still a majority language in parts of the country.
Edward III was the first English king to have a claim to the throne of France. His pursuit of the claim resulted in the Hundred Years' War (1337–1453), which pitted five kings of England of the House of Plantagenet against five kings of France of the Capetian House of Valois. Extensive naval raiding was carried out by all sides during the war, often involving privateers such as John Hawley of Dartmouth or the Castilian Pero Niño. Though the English won numerous victories, they were unable to overcome the numerical superiority of the French and their strategic use of gunpowder weapons. England was defeated at the Battle of Formigny in 1450 and finally at the Battle of Castillon in 1453, retaining only a single town in France, Calais. During the Hundred Years' War an English identity began to develop in place of the previous division between the Norman lords and their Anglo-Saxon subjects. This was a consequence of sustained hostility to the increasingly nationalist French, whose kings and other leaders (notably the charismatic Joan of Arc) used a developing sense of French identity to help draw people to their cause. The Anglo-Normans became separate from their cousins who held lands mainly in France and mocked the former for their archaic and bastardised spoken French. English also became the language of the law courts during this period.
The kingdom had little time to recover before entering the Wars of the Roses (1455–1487), a series of civil wars over possession of the throne between the House of Lancaster (whose heraldic symbol was the red rose) and the House of York (whose symbol was the white rose), each led by different branches of the descendants of Edward III.
The end of these wars found the throne held by the descendant of an initially illegitimate member of the House of Lancaster, married to the eldest daughter of the House of York: Henry VII and Elizabeth of York. They were the founders of the Tudor dynasty, which ruled the kingdom from 1485 to 1603.
Wales retained a separate legal and administrative system, which had been established by Edward I in the late 13th century. The country was divided between the Marcher Lords, who gave feudal allegiance to the crown, and the Principality of Wales. Under the Tudor monarchy, Henry VIII replaced the laws of Wales with those of England (under the Laws in Wales Acts 1535–1542). Wales was incorporated into the Kingdom of England, and henceforth was represented in the Parliament of England.
During the 1530s, Henry VIII overthrew the power of the Roman Catholic Church within the kingdom, replacing the pope as head of the English Church and seizing the Church's lands, thereby facilitating the creation of a variation of Catholicism that became more Protestant over time. This had the effect of aligning England with Scotland, which also gradually adopted a Protestant religion, whereas the most important continental powers, France and Spain, remained Roman Catholic. In 1541, during Henry VIII's reign, the Parliament of Ireland proclaimed him king of Ireland, thereby bringing the Kingdom of Ireland into personal union with the Kingdom of England. Calais, the last remaining continental possession of the Kingdom, was lost in 1558, during the reign of Philip and Mary I. Their successor, Elizabeth I, consolidated the new and increasingly Protestant Church of England. She also began to build up the kingdom's naval strength, on the foundations Henry VIII had laid down. By 1588, her new navy was strong enough to defeat the Spanish Armada, which had sought to invade England to put a Catholic monarch on the throne in her place. Over 8,000 English sailors died from diseases such as dysentery and typhus whilst the Spanish Armada was at sea.[10]
The House of Tudor ended with the death of Elizabeth I on 24 March 1603. James I ascended the throne of England and brought it into personal union with the Kingdom of Scotland. Despite the Union of the Crowns, the kingdoms remained separate and independent states: a state of affairs which lasted for more than a century. The Stuart kings overestimated the power of the English monarchy, and were cast down by Parliament in 1645 and 1688. In the first instance, Charles I's introduction of new forms of taxation in defiance of Parliament led to the English Civil War (1641–45), in which the king was defeated, and to the abolition of the monarchy under Oliver Cromwell during the interregnum of 1649–1660. Henceforth, the monarch could reign only at the will of Parliament.
After the trial and execution of Charles I in January 1649, the Rump Parliament passed an act declaring England to be a Commonwealth on 19 May 1649. The monarchy and the House of Lords were abolished, and so the House of Commons became a unitary legislative chamber with a new body, the Council of State, becoming the executive. However, the Army remained the dominant institution in the new republic, and the most prominent general was Oliver Cromwell. The Commonwealth fought wars in Ireland and Scotland, which were subdued and placed under Commonwealth military occupation.
In April 1653 Cromwell and the other Grandees of the New Model Army, frustrated with the members of the Rump Parliament who would not pass legislation to dissolve the Rump and to allow a new, more representative parliament to be elected, stopped the Rump's session by force of arms and declared the Rump dissolved. After an experiment with a Nominated Assembly (Barebone's Parliament), the Grandees in the Army, through the Council of State, imposed a new constitutional arrangement under a written constitution called the Instrument of Government. Under the Instrument of Government, executive power lay with a Lord Protector (an office to be held for the life of the incumbent) and there were to be triennial Parliaments, with each sitting for at least five months. Article 23 of the Instrument of Government stated that Oliver Cromwell was to be the first Lord Protector. The Instrument of Government was replaced by a second constitution (the Humble Petition and Advice) under which the Lord Protector could nominate his successor. Cromwell nominated his son Richard, who became Lord Protector on the death of Oliver on 3 September 1658. Richard proved to be ineffectual and was unable to maintain his rule. He resigned his title and retired into obscurity. The Rump Parliament was recalled and there was a second period where the executive power lay with the Council of State. But this restoration of Commonwealth rule, similar to that before the Protectorate, proved to be unstable, and the exiled claimant, Charles II, was restored to the throne in 1660.
Following the Restoration of the monarchy in 1660, an attempt by James II to reintroduce Roman Catholicism, a century after its suppression by the Tudors, led to the Glorious Revolution of 1688, in which he was deposed by Parliament. The Crown was then offered by Parliament to James II's Protestant daughter and son-in-law/nephew, William III and Mary II.
In the Scottish case, the attractions were partly financial and partly to do with removing English trade sanctions put in place through the Alien Act 1705. The English were more anxious about the royal succession. The death of William III in 1702 had led to the accession of his sister-in-law Anne to the thrones of England and Scotland, but her only surviving child had died in 1700, and the English Act of Settlement 1701 had given the succession to the English crown to the Protestant House of Hanover. Securing the same succession in Scotland became the primary object of English strategic thinking towards Scotland. By 1704, the Union of the Crowns was in crisis, with the Scottish Act of Security allowing for the Scottish Parliament to choose a different monarch, which could in turn lead to an independent foreign policy during a major European war. The English establishment did not wish to risk a Stuart on the Scottish throne, nor the possibility of a Scottish military alliance with another power. A Treaty of Union was agreed on 22 July 1706, and following the Acts of Union of 1707, which created the Kingdom of Great Britain, the independence of the kingdoms of England and Scotland came to an end on 1 May 1707. The Acts of Union created a customs union and monetary union and provided that any "laws and statutes" that were "contrary to or inconsistent with the terms" of the Acts would "cease and become void". The English and Scottish Parliaments were merged into the Parliament of Great Britain, located in Westminster, London.
At this point England ceased to exist as a separate political entity, and since then has had no national government. The laws of England were unaffected, with the legal jurisdiction continuing to be that of England and Wales, while Scotland continued to have its own laws and law courts. This continued after the 1801 union between the kingdoms of Great Britain and Ireland, forming the United Kingdom of Great Britain and Ireland. In 1922 the Irish Free State seceded from the United Kingdom, leading to the latter being renamed the United Kingdom of Great Britain and Northern Ireland.
The counties of England were established for administration by the Normans, in most cases based on earlier shires established by the Anglo-Saxons. They ceased to be used for administration only with the creation of the administrative counties in 1889.[11][12] Unlike the partly self-governing boroughs that covered urban areas, the counties of medieval England existed primarily as a means of enforcing central government power, enabling monarchs to exercise control over local areas through their chosen representatives, originally Sheriffs and later the Lord Lieutenants, and their subordinate Justices of the Peace.[13] Counties were used initially for the administration of justice, collection of taxes and organisation of the military, and later for local government and electing parliamentary representation.[14][15] Some outlying counties were from time to time accorded palatine status with some military and central government functions vested in a local noble or bishop. The last such, the County Palatine of Durham, did not lose this special status until the 19th century. Although all of England was divided into shires by the time of the Norman conquest, some counties were formed considerably later, up to the 16th century. Because of their differing origins the counties varied considerably in size. The county boundaries were fairly static between the 16th century Laws in Wales acts and the Local Government Act 1888.[16] Each shire was responsible for gathering taxes for the central government; for local defence; and for justice, through assize courts.[17]
The power of the feudal barons to control their landholding was considerably weakened in 1290 by the statute of Quia Emptores. Feudal baronies became perhaps obsolete (but not extinct) on the abolition of feudal tenure during the Civil War, as confirmed by the Tenures Abolition Act 1660 passed under the Restoration, which took away Knights service and other legal rights. Tenure by knight-service was abolished and discharged and the lands covered by such tenures, including once-feudal baronies, were henceforth held by socage (i.e. in exchange for monetary rents). The English Fitzwalter Case in 1670 ruled that barony by tenure had been discontinued for many years and any claims to a peerage on such basis, meaning a right to sit in the House of Lords, were not to be revived, nor any right of succession based on them.
The Statute of Rhuddlan in 1284 followed the conquest of Wales by Edward I of England. It assumed the lands held by the Princes of Gwynedd under the title "Prince of Wales" as legally part of the lands of England, and established shire counties on the English model over those areas. The Marcher Lords were progressively tied to the English kings by the grants of lands and lordships in England.
The Council of Wales and the Marches, administered from Ludlow Castle, was initially established in 1472 by Edward IV of England to govern the lands held under the Principality of Wales[18] and the bordering English counties. It was abolished in 1689. Under the Laws in Wales Acts 1535–1542 introduced under Henry VIII, the jurisdiction of the marcher lords was abolished in 1536. The Acts had the effect of annexing Wales to England and creating a single state and legal jurisdiction, commonly referred to as England and Wales. At the same time as the Council of Wales was created in 1472, a Council of the North was set up for the northern counties of England. After falling into disuse, it was re-established in 1537 and abolished in 1641. A very short-lived Council of the West also existed for the West Country between 1537 and 1540.
Which are the smallest bones in the body?
The stapes🚨The stapes /ˈsteɪpiːz/ or stirrup is a bone in the middle ear of humans and other mammals which is involved in the conduction of sound vibrations to the inner ear. The stirrup-shaped small bone receives vibrations from the incus and transmits these to the oval window, medially. The stapes is the smallest and lightest named bone in the human body, and is so-called because of its resemblance to a stirrup (Latin: stapes). The stapes is the third bone of the three ossicles in the middle ear.
The stapes is a stirrup-shaped bone, and the smallest in the human body. It rests on the oval window, to which it is connected by an annular ligament. The stapes is described as having a base, resting on the oval window, as well as a head that articulates with the incus. These are connected by anterior and posterior limbs (Latin: crura).[1]:862 The stapes articulates with the incus through the incudostapedial joint.[2] The stapes is the smallest bone in the human body, and measures roughly 3 × 2.5 mm, greater along the head-base span.[3] The stapes develops from the second pharyngeal arch during the sixth to eighth week of embryological life. The central cavity of the stapes, the obturator foramen, is due to the presence embryologically of the stapedial artery, which usually regresses.[2][4]
The stapes is one of three ossicles in mammals. In non-mammalian four-legged animals, the bone homologous to the stapes is usually called the columella; however, in reptiles, either term may be used. In fish, the homologous bone is called the hyomandibular, and is part of the gill arch supporting either the spiracle or the jaw, depending on the species. The equivalent term in amphibians is the pars media plectra.[2][5]:481–482
The stapes appears to be relatively constant in size in different ethnic groups.[6] In 0.01–0.02% of people, the stapedial artery does not regress, and persists in the central foramen.[7] In this case, a pulsatile sound may be heard in the affected ear, or there may be no symptoms at all.[8] Rarely, the stapes may be completely absent.[9][10]:262
Situated between the incus and the inner ear, the stapes transmits sound vibrations from the incus to the oval window, a membrane-covered opening to the inner ear. The stapes is also stabilized by the stapedius muscle, which is innervated by the facial nerve.[1]:861–863
Otosclerosis is a congenital or spontaneous-onset disease characterized by abnormal bone remodeling in the inner ear. Often this causes the stapes to adhere to the oval window, which impedes its ability to conduct sound, and is a cause of conductive hearing loss. Clinical otosclerosis is found in about 1% of people, although it is more common in forms that do not cause noticeable hearing loss.
Otosclerosis is more likely in young age groups, and females.[11] Two common treatments are stapedectomy, the surgical removal of the stapes and replacement with an artificial prosthesis, and stapedotomy, the creation of a small hole in the base of the stapes followed by the insertion of an artificial prosthesis into that hole.[12]:661 Surgery may be complicated by a persistent stapedial artery, fibrosis-related damage to the base of the bone, or obliterative otosclerosis, resulting in obliteration of the base.[7][10]:254–262
The stapes is commonly described as having been discovered by Professor Giovanni Filippo Ingrassia in 1546 at the University of Naples,[13] although this remains a matter of some controversy, as Ingrassia's description was published posthumously in his 1603 anatomical commentary In Galeni librum de ossibus doctissima et expectatissima commentaria. Spanish anatomist Pedro Jimeno was the first to be credited with a published description, in Dialogus de re medica (1549).[14] The bone is so named because of its resemblance to a stirrup (Latin: stapes), an example of a late Latin word, probably created in mediaeval times from "to stand" (Latin: stapia), as stirrups did not exist in the early Latin-speaking world.[15]
What happened to the last king of gondor?
disappeared🚨This is a list of the ruling kings of Gondor, one of the realms in Middle-earth in the fantasy works of J. R. R. Tolkien. The kings of Gondor claimed descent through Amandil from the Lords of Andúnië, and from there to Silmariën and the Kings of Númenor. The line of Kings began with Elendil, who fled the downfall of Númenor with his sons Isildur and Anárion and established the twin realms-in-exile of Arnor and Gondor.[1] For several hundred years after its foundation, Gondor was ruled by the High-King of both Arnor and Gondor, but after Isildur's death early in the Third Age, the connection between the two kingdoms was severed and Gondor was ruled independently of Arnor. The Line of Kings in Gondor continued through the descendants of Anárion for over two thousand years. Several calamities befell the house, such as the civil war of the Kin-Strife from TA 1432 to 1447 and the death of the King and his close family in the Great Plague of TA 1636.[2] In addition, through inter-marriage over several generations the Númenorean blood of the Kings of Gondor was mingled with that of lesser men of Middle-earth.
After King Ondoher and his two sons were slain in battle with the Wainriders, Arvedui (heir of the North Kingdom) claimed the throne of Gondor.[3] Arvedui's claim rested on his descent from Isildur and his marriage to Fíriel, the only surviving child of Ondoher. His claim was rejected by Gondor, who elected instead Eärnil II, a male descendant of Telumehtar and victor over the Wainriders. The line of Kings finally came to an end in TA 2050 when the last King of Gondor, Eärnur son of Eärnil II, disappeared after riding to answer the challenges of the Witch-King in Minas Morgul.[4] Thereafter Gondor was ruled by the Line of the Stewards until the return of King Aragorn II. Each king was a son of the previous king, unless otherwise indicated; there were interregna in 1944–1945 and 2050–3019.
In Peter Jackson's The Lord of the Rings: The Fellowship of the Ring, both Elendil and his son Isildur are shown in the opening prologue depicting the War of the Last Alliance. Aragorn II is also portrayed by Viggo Mortensen in all three of the trilogy's films. There is, however, an inconsistency in the film adaptation when it is mentioned that Isildur was the last King of Gondor, while in Tolkien's official works the line ended with Eärnur when he disappeared in TA 2050.
In The Lord of the Rings Online, there are several appearances made by a few of the Kings of Gondor on this list as Non-Player Characters. Aragorn appears at many in-game locations as either a leader of the Dúnedain rangers or as a member of the fellowship of the ring before his crowning as King, and can be interacted with by the player for many different story line quests. Other Kings of Gondor that make appearances include Isildur in a session instance where the player temporarily controls a soldier of Gondor near the end of the Second Age, as well as Eärnur, who appears during the Epic Story-line and group instance play as a wraith under the influence of the Witch-King called Mordirith. Games Workshop produces miniatures of Aragorn, Elendil, and Isildur for The Lord of the Rings: Strategy Battle Game.[8] Aragorn appears in both of The Lord of the Rings: The Battle for Middle-Earth real-time strategy games, as well as King Eärnur, who makes a brief campaign appearance during The Lord of the Rings: The Battle for Middle Earth II: The Rise of the Witch-King expansion pack.
What was Australia called when the First Fleet arrived?
Botany Bay🚨The First Fleet was the 11 ships that departed from Portsmouth, England, on 13 May 1787 to found the penal colony that became the first European settlement in Australia. The Fleet consisted of two Royal Navy vessels, three store ships and six convict transports, carrying between 1,000 and 1,500 convicts, marines, seamen, civil officers and free people (accounts differ on the numbers), and a large quantity of stores. From England, the Fleet sailed southwest to Rio de Janeiro, then east to Cape Town and via the Great Southern Ocean to Botany Bay, arriving over the period of 18 to 20 January 1788, taking 250 to 252 days from departure to final arrival. Convicts were originally transported to the Thirteen Colonies in North America, but after the American War of Independence ended in 1783, the newly formed United States refused to accept further convicts.[1] On 6 December 1785, Orders in Council were issued in London for the establishment of a penal colony in New South Wales, on land claimed for Britain by explorer James Cook in his first voyage to the Pacific in 1770.[2][3] The First Fleet was commanded by Commodore Arthur Phillip, who was given instructions authorising him to make regulations and land grants in the colony.[4] The ships arrived at Botany Bay between 18 January and 20 January 1788:[5] HMS Supply arrived on 18 January, Alexander, Scarborough and Friendship arrived on 19 January, and the remaining ships on 20 January.[6][7] The cost to Britain of outfitting and despatching the Fleet was £84,000[8] (about £9.6 million as of 2015).[9] The First Fleet included two Royal Navy escort ships, the ten-gun sixth-rate vessel HMS Sirius under the command of Captain John Hunter, and the armed tender HMS Supply commanded by Lieutenant Henry Lidgbird Ball. Ropes, crockery, agricultural equipment and a miscellany of other stores were needed. Items transported included tools, agricultural implements, seeds, spirits, medical supplies, bandages, surgical instruments, handcuffs, leg irons and a prefabricated wooden frame for the colony's first Government House.[12] The party had to rely on its own provisions to survive until it could make use of local materials, assuming suitable supplies existed, and grow its own food and raise livestock. Scale models of all the ships are on display at the Museum of Sydney. The models were built by ship makers Lynne and Laurie Hadley, after researching the original plans, drawings and British archives. The replicas of the Supply, Charlotte, Scarborough, Friendship, Prince of Wales, Lady Penrhyn, Borrowdale, Alexander, Sirius (1786), Fishburn and Golden Grove are made from Western Red or Syrian Cedar.[13] Nine Sydney harbour ferries built in the mid-1980s are named after First Fleet vessels. The unused names are Lady Penrhyn and Prince of Wales. The people of the fleet included seamen, marines and their families, government officials, and a large number of convicts, including women and children. The majority were British, but there were also African, American and French convicts on board.[14][15] The convicts had committed a variety of crimes, including theft, perjury, fraud, assault, and robbery, for which they had variously been sentenced to penal transportation for 7 years, 14 years, or the term of their natural life.[16][17] The six convict transports each had a detachment of marines on board.
Most of the families of the marines traveled aboard the Prince of Wales.[18] A number of people on the First Fleet kept diaries and journals of their experiences, including the surgeons. There are twelve known journals in existence as well as some letters.[19] The exact number of people directly associated with the First Fleet will likely never be established, as accounts of the event vary slightly. A total of 1,420 people have been identified as embarking on the First Fleet in 1787, and 1,373 are believed to have landed at Sydney Cove in January 1788. In her biographical dictionary of the First Fleet, Mollie Gillen gives the following statistics:[20] While the names of all crew members of Sirius and Supply are known, the six transports and three storeships may have carried as many as 110 more seamen than have been identified – no complete musters have survived for these ships. The total number of persons embarking on the First Fleet would, therefore, be approximately 1,530 with about 1,483 reaching Sydney Cove. Other sources indicate that the passengers consisted of 10 civil officers, 212 marines, including officers, 28 wives and 17 children of the marines, 81 free people, 504 male convicts and 192 female convicts; making the total number of free people 348 and the total number of prisoners 696, coming to a grand total of 1,044 people. According to the first census of 1788 as reported by Governor Phillip to Lord Sydney, the white population of the colony was 1,030 and the colony also consisted of 7 horses, 29 sheep, 74 swine, 6 rabbits, and 7 cattle.[21] The following statistics were provided by Governor Phillip:[22] David Collins' book An Account of the English Colony in New South Wales gives the following details:[23] The Alexander, of 453 tons, had on board 192 male convicts; 2 lieutenants, 2 sergeants, 2 corporals, 1 drummer, and 29 privates, with 1 assistant surgeon to the colony. The Scarborough, of 418 tons, had on board 205 male convicts; 1 captain, 2 lieutenants, 2 sergeants, 2 corporals, 1 drummer, and 26 privates, with 1 assistant surgeon to the colony. The Charlotte, of 346 tons, had on board 89 male and 20 female convicts; 1 captain, 2 lieutenants, 2 sergeants, 3 corporals, 1 drummer, and 35 privates, with the principal surgeon of the colony. The Lady Penrhyn, of 338 tons, had on board 101 female convicts; 1 captain, 2 lieutenants, and 3 privates, with a person acting as a surgeon's mate. The Prince of Wales, of 334 tons, had on board 2 male and 50 female convicts; 2 lieutenants, 3 sergeants, 2 corporals, 1 drummer, and 24 privates, with the surveyor-general of the colony. The Friendship, of 228 tons, had on board 76 male and 21 female convicts; 1 captain, 2 lieutenants, 2 sergeants, 3 corporals, 1 drummer, and 36 privates, with 1 assistant surgeon to the colony. There were on board, beside these, 28 women, 8 male and 6 female children, belonging to the soldiers of the detachment, together with 6 male and 7 female children belonging to the convicts. The Fishburn store-ship was of 378 tons; the Borrowdale of 272 tons; and the Golden Grove of 331 tons. Golden Grove carried the chaplain for the colony, with his wife and a servant. Not only these store-ships, but the men of war and transports were laden with provisions, implements of agriculture, camp equipage, clothing for the convicts, baggage, etc.
The Sirius carried as supernumeraries, the major commandant of the corps of marines embarked in the transports* [*This officer was also lieutenant-governor of the colony], the adjutant and quarter-master, the judge-advocate of the settlement, and the commissary; with one sergeant, three drummers, seven privates, four women, and a few artificers. The chief surgeon for the First Fleet, John White, reported a total of 48 deaths and 28 births during the voyage. The deaths during the voyage included one marine, one marine's wife, one marine's child, 36 male convicts, four female convicts, and five children of convicts.[24] The First Fleet left Portsmouth, England on 13 May 1787.[25] The journey began with fine weather, and thus the convicts were allowed on deck.[26] The Fleet was accompanied by the armed frigate Hyena until it left English waters.[27] On 20 May 1787, one convict on the Scarborough reported a planned mutiny; those allegedly involved were flogged and two were transferred to Prince of Wales.[27] In general, however, most accounts of the voyage agree that the convicts were well behaved.[27] On 3 June 1787, the fleet anchored at Santa Cruz at Tenerife.[25] Here, fresh water, vegetables and meat were brought on board. Phillip and the chief officers were entertained by the local governor, while one convict tried unsuccessfully to escape.[28] On 10 June they set sail to cross the Atlantic to Rio de Janeiro,[25] taking advantage of favourable trade winds and ocean currents. The weather became increasingly hot and humid as the Fleet sailed through the tropics. Vermin, such as rats, and parasites such as bedbugs, lice, cockroaches and fleas, tormented the convicts, officers and marines. Bilges became foul and the smell, especially below the closed hatches, was over-powering.[29] While Phillip gave orders that the bilge-water was to be pumped out daily and the bilges cleaned, these orders were not followed on the Alexander and a number of convicts fell sick and died.[29] Tropical rainstorms meant that the convicts could not exercise on deck as they had no change of clothes and no method of drying wet clothing.[29] Consequently, they were kept below in the foul, cramped holds. On the female transports, promiscuity between the convicts, the crew and marines was rampant, despite punishments for some of the men involved.[29] In the doldrums, Phillip was forced to ration the water to three pints a day.[29] The Fleet reached Rio de Janeiro on 5 August and stayed for a month.[25] The ships were cleaned and water taken on board, repairs were made, and Phillip ordered large quantities of food.[26] The women convicts' clothing had become infested with lice and was burnt. As additional clothing for the female convicts had not arrived before the Fleet left England,[26] the women were issued with new clothes made from rice sacks. While the convicts remained below deck, the officers explored the city and were entertained by its inhabitants.[30] A convict and a marine were punished for passing forged quarter-dollars made from old buckles and pewter spoons. 
The Fleet left Rio de Janeiro on 4 September to run before the westerlies to the Cape of Good Hope in southern Africa, which it reached on 13 October.[31] This was the last port of call, so the main task was to stock up on plants, seeds and livestock for their arrival in Australia.[32] The livestock taken on board from the Cape of Good Hope destined for the new colony included two bulls, seven cows, one stallion, three mares, 44 sheep, 32 pigs, four goats and "a very large quantity of poultry of every kind".[33] Women convicts on the Friendship were moved to other transports to make room for livestock purchased there. The convicts were provided with fresh beef and mutton, bread and vegetables, to build up their strength for the journey and maintain their health.[32] The Dutch colony of Cape Town was the last outpost of European settlement which the fleet members would see for years, perhaps for the rest of their lives. "Before them stretched the awesome, lonely void of the Indian and Southern Oceans, and beyond that lay nothing they could imagine."[34] Assisted by the gales in the "Roaring Forties" latitudes below the 40th parallel, the heavily laden transports surged through the violent seas. In the last two months of the voyage, the Fleet faced challenging conditions, spending some days becalmed and on others covering significant distances; the Friendship travelled 166 miles one day, while a seaman was blown from the Prince of Wales at night and drowned.[35] Water was rationed as supplies ran low, and the supply of other goods including wine ran out altogether on some vessels.[35] Van Diemen's Land was sighted from the Friendship on 4 January 1788.[35] A freak storm struck as they began to head north around the island, damaging the sails and masts of some of the ships. On 25 November, Phillip had transferred to the Supply. With Alexander, Friendship and Scarborough, the fastest ships in the Fleet, which were carrying most of the male convicts, the Supply hastened ahead to prepare for the arrival of the rest. Phillip intended to select a suitable location, find good water, clear the ground, and perhaps even have some huts and other structures built before the others arrived. This was a planned move, discussed by the Home Office and the Admiralty prior to the Fleet's departure.[36] However, this "flying squadron" reached Botany Bay only hours before the rest of the Fleet, so no preparatory work was possible.[37] Supply reached Botany Bay on 18 January 1788; the three fastest transports in the advance group arrived on 19 January; slower ships, including Sirius, arrived on 20 January.[38] This was one of the world's greatest sea voyages – eleven vessels carrying about 1,487 people and stores[33] had travelled for 252 days for more than 15,000 miles (24,000 km) without losing a ship. Forty-eight people died on the journey, a death rate of just over three per cent. It was soon realised that Botany Bay did not live up to the glowing account that the explorer Captain James Cook had provided.[39] The bay was open and unprotected, the water was too shallow to allow the ships to anchor close to the shore, fresh water was scarce, and the soil was poor.[40] First contact was made with the local indigenous people, the Eora, who seemed curious but suspicious of the newcomers. The area was studded with enormously strong trees. When the convicts tried to cut them down, their tools broke and the tree trunks had to be blasted out of the ground with gunpowder.
The primitive huts built for the officers and officials quickly collapsed in rainstorms. The marines had a habit of getting drunk and not guarding the convicts properly, whilst their commander, Major Robert Ross, drove Phillip to despair with his arrogant and lazy attitude. Crucially, Phillip worried that his fledgling colony was exposed to attack from Aborigines or foreign powers. Although his initial instructions were to establish the colony at Botany Bay, he was authorised to establish the colony elsewhere if necessary.[41] On 21 January, Phillip and a party that included John Hunter departed the Bay in three small boats to explore other bays to the north.[43] Phillip discovered that Port Jackson, about 12 kilometres to the north, was an excellent site for a colony with sheltered anchorages, fresh water and fertile soil.[43] Cook had seen and named the harbour, but had not entered it.[43] Phillip's impressions of the harbour were recorded in a letter he sent to England later: "the finest harbour in the world, in which a thousand sail of the line may ride in the most perfect security ...". The party returned to Botany Bay on 23 January.[43] On the morning of 24 January, the party was startled when two French ships were seen just outside Botany Bay. This was a scientific expedition led by Jean-François de La Pérouse. The French had expected to find a thriving colony where they could repair ships and restock supplies, not a newly arrived fleet of convicts considerably more poorly provisioned than themselves.[44] There was some cordial contact between the French and British officers, but Phillip and La Pérouse never met. The French ships remained until 10 March before setting sail on their return voyage. They were not seen again and were later discovered to have been shipwrecked off the coast of Vanikoro in the present-day Solomon Islands.[45] On 26 January 1788, the Fleet weighed anchor and sailed to Port Jackson.[25] The site selected for the anchorage had deep water close to the shore, was sheltered, and had a small stream flowing into it. Phillip named it Sydney Cove, after Lord Sydney, the British Home Secretary.[43] This date is celebrated as Australia Day, marking the beginning of British settlement.[46] The British flag was planted and formal possession taken. This was done by Phillip and some officers and marines from the Supply, with the remainder of Supply's crew and the convicts observing from on board ship. The remaining ships of the Fleet did not arrive at Sydney Cove until later that day.[47] The First Fleet encountered indigenous Australians when they landed at Botany Bay. The Cadigal people of the Botany Bay area witnessed the Fleet arrive and six days later the two ships of French explorer La Pérouse sailed into the bay.[48] When the Fleet moved to Sydney Cove seeking better conditions for establishing the colony, they encountered the Eora people, including the Bidjigal clan. A number of the First Fleet journals record encounters with Aboriginal people.[49] Although the official policy of the British Government was to establish friendly relations with Aboriginal people,[41] and Arthur Phillip ordered that the Aboriginal people should be well treated, it was not long before conflict began. The colonists did not sign treaties with the original inhabitants of the land.[50] Between 1790 and 1810, Pemulwuy of the Bidjigal clan led the local people in a series of attacks against the British colonisers.[51] The ships of the First Fleet mostly did not remain in the colony.
Some returned to England, while others left for other ports. Some remained at the service of the Governor of the colony for some months: some of these were sent to Norfolk Island where a second penal colony was established. On 26 January 1842, the Colonial Government in Sydney awarded a life pension of 1 shilling a day to three surviving members of the First Fleet. The Sydney Gazette and New South Wales Advertiser reported, on Saturday 29 January 1842: "The Government have ordered a pension of one shilling per diem to be paid to the survivors of those who came by the first vessel into the Colony. The number of these really 'old hands' is now reduced to three, of whom, two are now in the Benevolent Asylum, and the other is a fine hale old fellow, who can do a day's work with more spirit than many of the young fellows lately arrived in the Colony."[71] The names of the three recipients are not given. William Hubbard: Hubbard was convicted in the Kingston Assizes in Surrey, England, on 24 March 1784 for theft.[72] He was transported to Australia on the Scarborough in the First Fleet. He married Mary Goulding on 19 December 1790 in Rose Hill. In 1803 he received a land grant of 70 acres at Mulgrave Place. He died on 18 May 1843 at the Sydney Benevolent Asylum. His age was given as 76 when he was buried at Christ Church St. Lawrence, Sydney on 22 May 1843. John McCarthy: McCarthy was a Marine who sailed on the Friendship.[73] McCarthy was born in Killarney, County Kerry, Ireland, circa Christmas 1745. He first served in the colony of New South Wales, then at Norfolk Island where he took up a land grant of 60 acres (Lot 110). He married the First Fleet convict Ann Beardsley on Norfolk Island in November 1791 after his discharge a month earlier. In 1808, on the close of Norfolk Island settlement, he resettled in Van Diemen's Land and later took a land grant (80 acres at Melville) in lieu of the one forfeited on Norfolk Island. The last few years of his life were spent at the home of his granddaughter and her husband, Mr. and Mrs. William H. Budd, at a place called Kinlochewe Inn near Donnybrook, Victoria. McCarthy died on 24 July 1846,[74] six months past his 100th birthday. John Limeburner: The South Australian Register reported, in an article dated Wednesday 3 November 1847: "John Limeburner, the oldest colonist in Sydney, died in September last, at the advanced age of 104 years. He helped to pitch the first tent in Sydney, and remembered the first display of the British flag there, which was hoisted on a swamp oak-tree, then growing on a spot now occupied as the Water-Police Court. He was the last of those called the 'first-fleeters' (arrivals by the first convict ships) and, notwithstanding his great age, retained his faculties to the last."[75] John Limeburner was a convict on the Charlotte. He was convicted on 9 July 1785 at New Sarum, Wiltshire, of theft of a waistcoat, a shirt and stockings.[76] He married Elizabeth Ireland in 1790 at Rosehill and together they established a 50-acre farm at Prospect.[77] He died at Ashfield in September 1847 and is buried at St John's, Ashfield. John Jones: Jones was a Marine on the First Fleet and sailed on the Alexander. He is listed in the N.S.W. 1828 Census as aged 82 and living at the Sydney Benevolent Asylum.[78] He is said to have died at the Benevolent Asylum in 1848.[79] Samuel King: King was a scribbler (a worker in a scribbling mill[80]) before he became a Marine.
He was a Marine with the First Fleet on board the flagship Sirius (1786).[81] He shipped to Norfolk Island on Golden Grove in September 1788, where he lived with Mary Rolt, a convict who arrived with the First Fleet on the Prince of Wales. He received a grant of 60 acres (Lot No. 13) at Cascade Stream in 1791. Mary Rolt returned to England on the Britannia in October 1796. King was resettled in Van Diemen's Land, boarding the City of Edinburgh on 3 September 1808, and landed in Hobart on 3 October.[82] He married Elizabeth Thackery on 28 January 1810. He died on 21 October 1849 at 86 years of age and was buried in the Wesleyan cemetery at Lawitta Road, Back River. John Small: Convicted 14 March 1785 at the Devon Lent Assizes held at Exeter for robbery on the King's Highway. Sentenced to hang, reprieved to 7 years transportation. Arrived on the Charlotte in First Fleet 1788. Certificate of freedom 1792. Land grant 1794, 30-acre "Small's Farm" at Eastern Farms (Ryde). Married in October 1788 Mary Parker, also a First Fleet convict, who arrived on Lady Penrhyn. John Small died on 2 October 1850 at the age of 90 years.[83][84] Elizabeth Thackery: Elizabeth "Betty" King (née Thackery) was tried and convicted of theft on 4 May 1786 at Manchester Quarter Sessions, and sentenced to seven years transportation. She sailed on the Friendship, but was transferred to the Charlotte at the Cape of Good Hope. She was shipped to Norfolk Island on the Sirius (1786) in 1790 and lived there with James Dodding. In August 1800 she bought 10 acres of land from Samuel King at Cascade Stream. Elizabeth and James were relocated to Van Diemen's Land in December 1807[85] but parted company sometime afterwards. On 28 January 1810 Elizabeth married "First Fleeter" Private Samuel King (above) and lived with him until his death in 1849. Betty King died in New Norfolk, Tasmania on 7 August 1856, aged 89 years. She is buried in the churchyard of the Methodist Chapel, Lawitta Road, Back River, next to her husband, and the marked grave bears a First Fleet plaque. She was one of the first British women to land in Australia and was the last "First Fleeter" to die. Historians have disagreed over whether those aboard the First Fleet were responsible for introducing smallpox to Australia's indigenous population, and if so, whether this was the consequence of deliberate action. In 1914, J. H. L.
Cumpston, director of the Australian Quarantine Service, put forward the hypothesis that smallpox arrived with British settlers.[86] Some researchers have argued that any such release may have been a deliberate attempt to decimate the indigenous population.[87][88] Others have suggested that live smallpox virus may have been introduced accidentally, when Aboriginal people came into contact with variolous matter brought by the First Fleet for use in anti-smallpox inoculations.[89][90][91] Hypothetical scenarios for such an action might have included: an act of revenge by an aggrieved individual, a response to attacks by indigenous people,[92] or part of an orchestrated assault by the New South Wales Marine Corps, intended to clear the path for colonial expansion.[93][94] Other historians have disputed the idea that there was a deliberate release of smallpox virus and/or suggest that it arrived with visitors to Australia other than the First Fleet.[95][96][97][98][99] In 2002, historian Judy Campbell suggested that smallpox had arrived in Australia through contact with fishermen from Makassar in Indonesia, where smallpox was endemic.[97][100] In 2011, Macknight stated: The overwhelming probability must be that it [smallpox] was introduced, like the later epidemics, by [Indonesian] trepangers ... and spread across the continent to arrive in Sydney quite independently of the new settlement there.[101] There is a third theory, that the 1789 epidemic was not smallpox but chickenpox, to which indigenous Australians also had no inherited resistance, that happened to be affecting, or was carried by, members of the First Fleet.[102][103] This theory has also been disputed.[104][105] After Ray Collins, a stonemason, completed years of research into the First Fleet, he sought approval from about nine councils to construct a commemorative garden in recognition of these immigrants. Liverpool Plains Shire Council was ultimately the only council to accept his offer to supply the materials and construct the garden free of charge. The site chosen was a disused caravan park on the banks of Quirindi Creek at Wallabadah, New South Wales. In September 2002 Collins commenced work on the project. Additional support was later provided by Neil McGarry in the form of some signs and the council contributed $28,000 for pathways and fencing. Collins hand-chiseled the names of all those who came to Australia on the eleven ships in 1788 on stone tablets along the garden pathways. The stories of those who arrived on the ships, their life, and first encounters with the Australian country are presented throughout the garden.[106] On 26 January 2005, the First Fleet Garden was opened as the major memorial to the First Fleet immigrants. Previously the only other specific memorial to the First Fleeters was an obelisk at Brighton-Le-Sands, New South Wales.[107] The surrounding area has a barbecue, tables, and amenities.
How long did it take for Felix to fall from space?
4 minutes and 19 seconds🚨 Felix Baumgartner (German: [ˈfeːlɪks ˈbaʊmˌɡaʁtnɐ]; born 20 April 1969) is an Austrian skydiver, daredevil, and BASE jumper.[1] He is best known for jumping to Earth from a helium balloon in the stratosphere on 14 October 2012. Doing so, he set world records for skydiving an estimated 39 km (24 mi), reaching an estimated top speed of 1,357.64 km/h (843.6 mph), or Mach 1.25.[a][b] He became the first person to break the sound barrier without vehicular power relative to the surface on his descent.[12][13] He broke skydiving records for exit altitude, vertical freefall distance without drogue, and vertical speed without drogue. Though he still holds the two latter records, the first was broken two years later, when on 24 October 2014, Alan Eustace jumped from 135,890 feet, or 41.42 km (25.74 mi), with a drogue.[14][15][16] Baumgartner is also renowned for the particularly dangerous nature of the stunts he has performed during his career. Baumgartner spent time in the Austrian military where he practiced parachute jumping, including training to land on small target zones. Felix Baumgartner was born first of two boys on 20 April 1969 (his brother is Gerard), in Salzburg, Austria.[17] As a child, he dreamed about flying and skydiving.[18] In 1999, he claimed the world record for the highest parachute jump from a building when he jumped from the Petronas Towers in Kuala Lumpur, Malaysia.[19] On 20 July 2003, Baumgartner became the first person to skydive across the English Channel using a specially made carbon fiber wing.[1][20] Alban Geissler, who developed the SKYRAY carbon fiber wing with Christoph Aarns, suggested after Baumgartner's jump that the wing he used was a copy of two prototype SKYRAY wings sold to Red Bull (Baumgartner's sponsor) two years earlier.[21] Baumgartner also set the world record for the lowest BASE jump ever, when he jumped 29 metres (95 ft) from the hand of the Christ the Redeemer statue in Rio de Janeiro.[22] This jump also stirred controversy among BASE jumpers who pointed out that Baumgartner cited the height of the statue as the height of the jump even though he landed on a slope below the statue's feet, and that other BASE jumpers had previously jumped from the statue but avoided publicity.[23] He became the first person to BASE jump from the completed Millau Viaduct in France on 27 June 2004[24] and the first person to skydive onto, then BASE jump from, the Turning Torso building in Malmö, Sweden, on 18 August 2006.[25] On 12 December 2007, he became the first person to jump from the 91st floor observation deck of the then-tallest completed building in the world, Taipei 101 in Taipei, Taiwan.[26] In January 2010, it was reported that Baumgartner was working with a team of scientists and sponsor Red Bull to attempt the highest sky-dive on record.[27] On 15 March 2012, Baumgartner completed the first of two test jumps from 21,818 metres (71,581 ft). During the jump, he spent approximately 3 minutes and 43 seconds in free fall, reaching speeds of more than 580 km/h (360 mph),[28] before opening his parachute. In total, the jump lasted approximately eight minutes and eight seconds and Baumgartner became the third person to safely parachute from a height of over 21.7 km (13.5 mi).[29][30] On 25 July 2012, Baumgartner completed the second of two planned test jumps from 29,460 metres (96,640 ft).
It took Baumgartner about 90 minutes to reach the target altitude and his free fall was estimated to have lasted three minutes and 48 seconds before his parachutes were deployed.[31] The launch was originally scheduled for 9 October 2012 but was aborted due to adverse weather conditions. Launch was rescheduled and the mission instead took place on 14 October 2012 when Baumgartner landed in eastern New Mexico after jumping from a then world-record 38,969.3 metres (127,852 feet)[12][32][33] and falling a record distance of 36,402.6 metres (119,431 feet); the altitude record was broken by Alan Eustace in 2014.[34] Baumgartner also set the record for fastest speed of free fall at 1,357.64 km/h (843.6 mph),[2][12][5] making him the first human to break the sound barrier outside a vehicle.[35][36] Baumgartner was in free fall for 4 minutes and 19 seconds, 17 seconds short of mentor Joseph Kittinger's 1960 jump.[35] Baumgartner initially struggled with claustrophobia after spending time in the pressurized suit required for the jump, but overcame it with help from a sports psychologist and other specialists.[37][38][39] In 2014, Baumgartner decided to join Audi Motorsport to drive an Audi R8 LMS for the 2014 24 Hours of Nürburgring after racing Volkswagen Polos in 2013. He underwent another intense physical and driver training session to prepare him for the race.[40] He helped the team to a 9th place overall finish.[41] In October 2012, when Baumgartner was asked in an interview with the Austrian newspaper Kleine Zeitung whether a political career was an option for his future life, he stated that the "example of Arnold Schwarzenegger" showed that "you can't move anything in a democracy" and that he would opt for a "moderate dictatorship [...] led by experienced personalities coming from the private (sector of the) economy". He finally stated he "didn't want to get involved in politics."[42][43][44] On 6 November 2012, Baumgartner was convicted of battery and was fined €1,500 after slapping the face of a Greek truck driver, following a petty argument between the two men.[45][46] In January 2016, Baumgartner provoked a stir of critical news coverage in his home country after posting several critical remarks against refugees and recommending the Hungarian Prime Minister Viktor Orbán for the Nobel Peace Prize.[47] Later on, Baumgartner endorsed the presidential candidate of the right-wing populist Freedom Party of Austria, Norbert Hofer.[48] On 13 July 2016, Facebook deleted his fan page of 1.5 million fans. Baumgartner subsequently claimed that he must have become "too uncomfortable" for "political elites".[49] After Austrian authorities refused to grant sports tax breaks to Baumgartner, he moved to Arbon, Switzerland, whereupon his house in Salzburg and his helicopter were seized.[50] Baumgartner dated German Playboy playmate of the century Gitta Saxx. Later he was engaged to Nicole Öttl, a model and former beauty queen (Miss Lower Austria 2006). They broke up in 2013.[50] His mother is named Eva, and he has one brother, Gerald Baumgartner.[44][51][c][52][53] Since 2014, he has been in a relationship with Romanian television presenter Mihaela Rădulescu.[54]
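For readers curious how a top speed of 1,357.64 km/h maps to roughly Mach 1.25, the Mach number is the ratio of the jump speed to the local speed of sound, which falls with air temperature at altitude. The short Python sketch below illustrates the arithmetic under assumed values (standard dry-air constants and an ambient temperature of about 226 K near the altitude of peak speed); these inputs are illustrative assumptions, not figures taken from the source.

```python
import math

# Assumed constants for dry air (illustrative, not from the source article)
GAMMA = 1.4      # heat capacity ratio
R_AIR = 287.05   # specific gas constant, J/(kg*K)

def speed_of_sound(temp_kelvin: float) -> float:
    """Local speed of sound in m/s for an ideal gas at the given temperature."""
    return math.sqrt(GAMMA * R_AIR * temp_kelvin)

peak_speed_kmh = 1357.64
peak_speed_ms = peak_speed_kmh / 3.6        # about 377 m/s
a_local = speed_of_sound(226.0)             # about 301 m/s at the assumed 226 K

print(f"Mach number ~ {peak_speed_ms / a_local:.2f}")  # prints roughly 1.25
```

With a sea-level sound speed (about 340 m/s at 288 K) the same ground speed would correspond to a lower Mach number, which is why the stated Mach 1.25 depends on the colder stratospheric air rather than on the raw km/h figure alone.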
Who established Mission San Francisco de la Espada?
Spain🚨Mission San Francisco de la Espada (also Mission Espada) is a Roman Rite Catholic mission established in 1690 by Spain in present-day San Antonio, Texas, in what was then known as northern New Spain. The mission was built in order to convert local Native Americans to Christianity and solidify Spanish territorial claims in the New World against encroachment from France.[1] Today, the structure is one of four missions that comprise San Antonio Missions National Historical Park. Founded in 1690 as San Francisco de los Tejas near Weches, Texas, and southwest of present-day Alto, Texas, Mission San Francisco de la Espada was the second mission established. Three priests, three soldiers, and supplies were left among the Nabedache Indians. The new mission was dedicated on June 1, 1690. A smallpox epidemic in the winter of 1690–1691 killed an estimated 3,300 people in the area. The Nabedache believed the Spaniards brought the disease and hostilities developed between the two groups. Drought besieged the mission in the summers of 1691 and 1692, and the Nabedache wished to get rid of the mission. Under threat of personal attack, the priests began packing their belongings in the fall of 1693. On October 25, 1693, the padres burned the mission and retreated toward Monclova. The party lost its way and did not reach Monclova until February 17, 1694.[2] The mission was re-established in the same area on July 5, 1716 as Nuestro Padre San Francisco de los Tejas. The new mission had to be abandoned in 1719 because of conflict between Spain and France. The mission was attempted once more on August 5, 1721, as San Francisco de los Neches. As the Nabedache were no longer interested in the mission, and France had abandoned efforts to lay claim in the area, the mission was temporarily relocated along the Colorado River in July 1730. Mission Tejas State Park encompasses the original site of the mission. The mission relocated to its current location in the San Antonio River area (coordinates 29.3177, -98.4498) in March 1731 and was renamed San Francisco de la Espada. A friary was built in 1745, and the church was completed in 1756. The relocation was in part inspired by fears of French encroachment and the need for more missionaries to tend to San Antonio de Bexar's Indian population.[3] The Mission encountered great difficulties in presiding over the Indian population and frequently experienced rebellious activity.[4] Several modern churches have been architecturally based on the design of this mission, including St. Stephen's Episcopal Church in Wimberley, Texas, north of San Antonio. Rancho de las Cabras was established between 1750 and 1760, 30 miles southeast of San Antonio de Bexar under the jurisdiction of Mission Espada, so as to provide land for cultivation of crops and livestock for the Mission's population without intruding on private lands.[5] The ranch was primarily made up of low fences and thatched buildings known as jacales for the native workforce to inhabit.[5] According to ethnohistorian T. N. Campbell, the ranch was likely constructed by Indians not native to Texas.[5] Mission San Francisco de la Espada's acequia and aqueduct can still be seen today. The main ditch continues to carry water to the mission and its former farm lands. This water is still used by residents living on these neighboring lands. The use of acequias was originally brought to the arid regions of Spain and Portugal by the Romans and the Moors.
When Franciscan missionaries arrived in the desert Southwest, they found the system worked well in the hot, dry environment. In order to distribute water to the missions along the San Antonio River, Franciscan missionaries oversaw the construction of seven gravity-flow ditches, dams, and at least one aqueduct, a 15-mile (24 km) network that irrigated approximately 3,500 acres (14 km²) of land. [Image captions: the Espada aqueduct as it crosses Piedras Creek; interior of the church; Nativity scenes, 2009 and 2011; Mission San Francisco de Espada, San Antonio, Texas (postcard, 1901–1907)]
When was the Pledge of Allegiance first used?
October 12, 1892🚨The Pledge of Allegiance of the United States is an expression of allegiance to the Flag of the United States and the republic of the United States of America. It was originally composed by Captain George Thatcher Balch, a Union Army officer during the Civil War and later a teacher of patriotism in New York City schools.[3][4] The form of the pledge used today was largely devised by Francis Bellamy in 1892, and formally adopted by Congress as the pledge in 1942.[5] The official name of The Pledge of Allegiance was adopted in 1945. The most recent alteration of its wording came on Flag Day in 1954, when the words "under God" were added.[6] Congressional sessions open with the recital of the Pledge, as do many government meetings at local levels, and meetings held by many private organizations. All states except Hawaii, Iowa, Vermont and Wyoming require a regularly scheduled recitation of the pledge in the public schools, although the Supreme Court has ruled in West Virginia State Board of Education v. Barnette that students cannot be compelled to recite the Pledge, nor can they be punished for not doing so.[7] In a number of states, state flag pledges of allegiance are required to be recited after this.[8] The United States Flag Code says: The Pledge of Allegiance to the Flag, "I pledge allegiance to the Flag of the United States of America, and to the Republic for which it stands, one Nation under God, indivisible, with liberty and justice for all,"[9] should be rendered by standing at attention facing the flag with the right hand over the heart. When not in uniform men should remove any non-religious headdress with their right hand and hold it at the left shoulder, the hand being over the heart. Persons in uniform should remain silent, face the flag, and render the military salute. Members of the Armed Forces not in uniform and veterans may render the military salute in the manner provided for persons in uniform.[2] The Pledge of Allegiance, as it exists in its current form, was composed in August 1892 by Francis Bellamy (1855–1931), who was a Baptist minister, a Christian socialist,[10][11] and the cousin of socialist utopian novelist Edward Bellamy (1850–1898). There did exist a previous version created by Rear Admiral George Balch, a veteran of the Civil War, who later became auditor of the New York Board of Education. Balch's pledge, which existed contemporaneously with the Bellamy version until the 1923 National Flag Conference, read: We give our heads and hearts to God and our country; one country, one language, one flag! Balch was a proponent of teaching children, especially those of immigrants, loyalty to the United States, even going so far as to write a book on the subject and work with both the government and private organizations to distribute flags to every classroom and school.[12] Balch's pledge, which predates Bellamy's by 5 years
and was embraced by many schools, by the Daughters of the American Revolution until the 1910s, and by the Grand Army of the Republic until the 1923 National Flag Conference, is often overlooked when discussing the history of the Pledge.[13] Bellamy, however, did not approve of the pledge as Balch had written it, referring to the text as "too juvenile and lacking in dignity."[14] The Bellamy "Pledge of Allegiance" was first published in the September 8 issue of the popular children's magazine The Youth's Companion as part of the National Public-School Celebration of Columbus Day, a celebration of the 400th anniversary of Christopher Columbus's arrival in the Americas. The event was conceived and promoted by James B. Upham, a marketer for the magazine, as a campaign to instill the idea of American nationalism in students and to sell flags to public schools.[15] According to author Margarette S. Miller, this campaign was in line both with Upham's patriotic vision as well as with his commercial interest. According to Miller, Upham "would often say to his wife: 'Mary, if I can instill into the minds of our American youth a love for their country and the principles on which it was founded, and create in them an ambition to carry on with the ideals which the early founders wrote into The Constitution, I shall not have lived in vain.'"[16] Bellamy's original Pledge read: I pledge allegiance to my Flag and the Republic for which it stands, one nation, indivisible, with liberty and justice for all.[1][17] The Pledge was supposed to be quick and to the point. Bellamy designed it to be recited in 15 seconds. As a socialist, he had initially also considered using the words equality and fraternity[15] but decided against it, knowing that the state superintendents of education on his committee were against equality for women and African Americans.[18] Francis Bellamy and Upham had lined up the National Education Association to support the Youth's Companion as a sponsor of the Columbus Day observance and the use in that observance of the American flag. By June 29, 1892, Bellamy and Upham had arranged for Congress and President Benjamin Harrison to announce a proclamation making the public school flag ceremony the center of the Columbus Day celebrations. This arrangement was formalized when Harrison issued Presidential Proclamation 335. Subsequently, the Pledge was first used in public schools on October 12, 1892, during Columbus Day observances organized to coincide with the opening of the World's Columbian Exposition (the Chicago World's Fair) in Illinois.[19] In his recollection of the creation of the Pledge, Francis Bellamy said, "At the beginning of the nineties patriotism and national feeling was (sic) at a low ebb. The patriotic ardor of the Civil War was an old story ... The time was ripe for a reawakening of simple Americanism and the leaders in the new movement rightly felt that patriotic education should begin in the public schools."[14] James Upham "felt that a flag should be on every schoolhouse,"[14] so his publication "fostered a plan of selling flags to schools through the children themselves at cost, which was so successful that 25,000 schools acquired flags in the first year (1892–93)."[14] As the World's Columbian Exposition was set to celebrate the 400th anniversary of the arrival of Christopher Columbus in the Americas, Upham sought to link the publication's flag drive to the event, "so that every school in the land ...
would have a flag raising, under the most impressive conditions."[14] Bellamy was placed in charge of this operation and was soon lobbying "not only the superintendents of education in all the States, but [he] also worked with governors, Congressmen, and even the President of the United States."[14] The publication's efforts paid off when Benjamin Harrison declared Wednesday October 12, 1892, to be Columbus Day, for which The Youth's Companion made "an official program for universal use in all the schools."[14] Bellamy recalled that the event "had to be more than a list of exercises. The ritual must be prepared with simplicity and dignity."[14] Edna Dean Proctor wrote an ode for the event, and "There was also an oration suitable for declamation."[14] Bellamy held that "Of course, the nub of the program was to be the raising of the flag, with a salute to the flag recited by the pupils in unison."[14] He found "There was not a satisfactory enough form for this salute. The Balch salute, which ran, 'I give my heart and my hand to my country, one country, one language, one flag,' seemed to him too juvenile and lacking in dignity."[14] After working on the idea with Upham, Bellamy concluded, "It was my thought that a vow of loyalty or allegiance to the flag should be the dominant idea. I especially stressed the word 'allegiance'. ... Beginning with the new word allegiance, I first decided that 'pledge' was a better school word than 'vow' or 'swear'; and that the first person singular should be used, and that 'my' flag was preferable to 'the.'"[14] Bellamy considered the words "country, nation, or Republic," choosing the last as "it distinguished the form of government chosen by the founding fathers and established by the Revolution. The true reason for allegiance to the flag is the Republic for which it stands."[14] Bellamy then reflected on the sayings of Revolutionary and Civil War figures, and concluded "all that pictured struggle reduced itself to three words, one Nation indivisible."[14] Bellamy considered the slogan of the French Revolution, Liberté, égalité, fraternité ("liberty, equality, fraternity"), but held that "fraternity was too remote of realization, and [that] equality was a dubious word."[14] He concluded, "Liberty and justice were surely basic, were undebatable, and were all that any one Nation could handle. If they were exercised for all, they involved the spirit of equality and fraternity."[14] After being reviewed by Upham and other members of The Youth's Companion, the Pledge was approved and put in the official Columbus Day program. Bellamy noted that, "In later years the words 'to my flag' were changed to 'to the flag of the United States of America' because of the large number of foreign children in the schools."[14] Bellamy disliked the change, as "it did injure the rhythmic balance of the original composition."[14] In 1906, The Daughters of the American Revolution's magazine, The American Monthly, listed the "formula of allegiance" as being the Balch Pledge of Allegiance, which reads:[13] I pledge allegiance to my flag, and the republic for which it stands. I pledge my head and my heart to God and my country. One country, one language and one flag.
In subsequent publications of the Daughters of the American Revolution, such as in 1915's "Proceedings of the Twenty-Fourth Continental Congress of the Daughters of the American Revolution" and 1916's annual "National Report," the Balch Pledge, listed as official in 1906, is now categorized as "Old Pledge" with Bellamy's version under the heading "New Pledge."[20][21] However, the "Old Pledge" continued to be used by other organizations until the National Flag Conference established uniform flag procedures in 1923. In 1923, the National Flag Conference called for the words "my Flag" to be changed to "the Flag of the United States," so that new immigrants would not confuse loyalties between their birth countries and the US. The words "of America" were added a year later. Congress officially recognized the Pledge for the first time, in the following form, on June 22, 1942:[22] I pledge allegiance to the flag of the United States of America, and to the Republic for which it stands, one Nation indivisible, with liberty and justice for all. Louis Albert Bowman, an attorney from Illinois, was the first to suggest the addition of "under God" to the pledge. The National Society of the Daughters of the American Revolution gave him an Award of Merit as the originator of this idea.[23][24] He spent his adult life in the Chicago area and was chaplain of the Illinois Society of the Sons of the American Revolution. At a meeting on February 12, 1948,[23] he led the society in reciting the pledge with the two words "under God" added. He said that the words came from Lincoln's Gettysburg Address. Although not all manuscript versions of the Gettysburg Address contain the words "under God", all the reporters' transcripts of the speech as delivered do, as perhaps Lincoln may have deviated from his prepared text and inserted the phrase when he said "that the nation shall, under God, have a new birth of freedom." Bowman repeated his revised version of the Pledge at other meetings.[23] In 1951, the Knights of Columbus, the world's largest Catholic fraternal service organization, also began including the words "under God" in the Pledge of Allegiance.[25] In New York City, on April 30, 1951, the board of directors of the Knights of Columbus adopted a resolution to amend the text of their Pledge of Allegiance at the opening of each of the meetings of the 800 Fourth Degree Assemblies of the Knights of Columbus by addition of the words "under God" after the words "one nation." Over the next two years, the idea spread throughout Knights of Columbus organizations nationwide. On August 21, 1952, the Supreme Council of the Knights of Columbus at its annual meeting adopted a resolution urging that the change be made universal, and copies of this resolution were sent to the President, the Vice President (as Presiding Officer of the Senate), and the Speaker of the House of Representatives. The National Fraternal Congress meeting in Boston on September 24, 1952, adopted a similar resolution upon the recommendation of its president, Supreme Knight Luke E. Hart. Several State Fraternal Congresses acted likewise almost immediately thereafter. This campaign led to several official attempts to prompt Congress to adopt the Knights of Columbus policy for the entire nation. These attempts were eventually a success.[26] At the suggestion of a correspondent, Representative Louis C. 
Rabaut (D-Mich.) sponsored a resolution to add the words "under God" to the Pledge in 1953.[27] Before February 1954, no endeavor to get the pledge officially amended had succeeded. The final successful push came from George MacPherson Docherty. Some American presidents honored Lincoln's birthday by attending services at the church Lincoln attended, New York Avenue Presbyterian Church, sitting in Lincoln's pew on the Sunday nearest February 12. On February 7, 1954, with President Eisenhower sitting in Lincoln's pew, the church's pastor, George MacPherson Docherty, delivered a sermon based on the Gettysburg Address entitled "A New Birth of Freedom." He argued that the nation's might lay not in arms but rather in its spirit and higher purpose. He noted that the Pledge's sentiments could be those of any nation: "There was something missing in the pledge, and that which was missing was the characteristic and definitive factor in the American way of life." He cited Lincoln's words "under God" as defining words that set the US apart from other nations. President Eisenhower had been baptized a Presbyterian very recently, just a year before. He responded enthusiastically to Docherty in a conversation following the service. Eisenhower acted on his suggestion the next day and on February 8, 1954, Rep. Charles Oakman (R-Mich.) introduced a bill to that effect. Congress passed the necessary legislation and Eisenhower signed the bill into law on Flag Day, June 14, 1954.[28] Eisenhower said: From this day forward, the millions of our school children will daily proclaim in every city and town, every village and rural school house, the dedication of our nation and our people to the Almighty.... In this way we are reaffirming the transcendence of religious faith in America's heritage and future; in this way we shall constantly strengthen those spiritual weapons which forever will be our country's most powerful resource, in peace or in war.[29] The phrase "under God" was incorporated into the Pledge of Allegiance on June 14, 1954, by a Joint Resolution of Congress amending § 4 of the Flag Code enacted in 1942.[28] On October 6, 1954, the National Executive Committee of the American Legion adopted a resolution, first approved by the Illinois American Legion Convention in August 1954, which formally recognized the Knights of Columbus for having initiated and brought forward the amendment to the Pledge of Allegiance.[26] Even though the movement behind inserting "under God" into the pledge might have been initiated by a private religious fraternity and even though references to God appear in previous versions of the pledge, author Kevin M. Kruse asserts that this movement was an effort by corporate America to instill in the minds of the people that capitalism and free enterprise were heavenly blessed. Kruse acknowledges the insertion of the phrase was influenced by the push-back against atheistic communism during the Cold War, but argues the longer arc of history shows the conflation of Christianity and capitalism as a challenge to the New Deal played the larger role.[30] Swearing of the Pledge is accompanied by a salute. An early version of the salute, adopted in 1887, known as the Balch Salute, which accompanied the Balch pledge, instructed students to stand with their right hand outstretched toward the flag, the fingers of which are then brought to the forehead, followed by being placed flat over the heart, and finally falling to the side.
In 1892, Francis Bellamy created what was known as the Bellamy salute. It started with the hand outstretched toward the flag, palm down, and ended with the palm up. Because of the similarity between the Bellamy salute and the Nazi salute, which was adopted in Germany later, the US Congress stipulated that the hand-over-the-heart gesture would be the salute rendered by civilians during the Pledge of Allegiance and the national anthem in the US, replacing the Bellamy salute. Removal of the Bellamy salute occurred on December 22, 1942, when Congress amended the Flag Code language first passed into law on June 22, 1942.[31] Attached to bills passed in Congress in 2008 and then in 2009 (Section 301(b)(1) of title 36, United States Code), language was included which authorized all active duty military personnel and all veterans in civilian clothes to render a proper hand salute during the raising and lowering of the flag, when the colors are presented, and during the National Anthem.[32] A musical setting for "The Pledge of Allegiance to the Flag" was created by Irving Caesar, at the suggestion of Congressman Louis C. Rabaut, whose House Resolution 243 to add the phrase "under God" was signed into law on Flag Day, June 14, 1954.[33] The composer, Irving Caesar, wrote and published over 700 songs in his lifetime. Dedicated to social issues, he donated all rights of the musical setting to the U.S. government, so that anyone can perform the piece without owing royalties.[34][35] It was sung for the first time on the floor of the House of Representatives on Flag Day, June 14, 1955 by the official Air Force choral group the "Singing Sergeants". A July 29, 1955 House and Senate resolution authorized the U.S. Government Printing Office to print and distribute the song sheet together with a history of the pledge.[36] Other musical versions of the Pledge have since been copyrighted, including by Beck (2003), Lovrekovich (2002 and 2001), Roton (1991), Fijol (1986), and Girardet (1983).[37] In 1940, the Supreme Court, in Minersville School District v. Gobitis, ruled that students in public schools, including the respondents in that case, Jehovah's Witnesses who considered the flag salute to be idolatry, could be compelled to swear the Pledge. In 1943, in West Virginia State Board of Education v. Barnette, the Supreme Court reversed its decision. Justice Robert H. Jackson, writing for the 6 to 3 majority, went beyond simply ruling on narrow grounds in the precise matter presented by the case to say that public school students are not required to say the Pledge, and asserted that such ideological dogmata are antithetical to the principles of the country, concluding with: If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein. If there are any circumstances which permit an exception, they do not now occur to us.[38] In a later case, the 11th Circuit Court of Appeals held that students are also not required to stand for the Pledge.[39] Requiring or promoting of the Pledge on the part of the government has continued to draw criticism and legal challenges on several grounds.
One objection is that a democratic republic built on freedom of dissent should not require its citizens to pledge allegiance to it, and that the First Amendment to the United States Constitution protects the right to refrain from speaking or standing, which itself is also a form of speech in the context of the ritual of pledging allegiance.[40][39] Another objection is that the people who are most likely to recite the Pledge every day, small children in schools, cannot really give their consent or even completely understand the Pledge they are making.[41] Another criticism is that a government requiring or promoting the phrase "under God" violates protections against the establishment of religion guaranteed in the Establishment Clause of the First Amendment.[42] In 2004, linguist Geoffrey Nunberg said the original supporters of the addition thought that they were simply quoting Lincoln's Gettysburg Address, but to Lincoln and his contemporaries, "under God" meant "God willing", so they would have found its use in the Pledge of Allegiance grammatically incorrect and semantically odd.[43][44] Prominent legal challenges were brought in the 1930s and 1940s by Jehovah's Witnesses, a denomination whose beliefs preclude swearing loyalty to any power other than God, and who objected to policies in public schools requiring students to swear an oath to the flag.[45] They said requiring the pledge violated their freedom of religion guaranteed by the Free Exercise Clause of the First Amendment. The first case was in 1935, when two children, Lillian and William Gobitas, ages ten and twelve, were expelled from the Minersville, Pennsylvania, public schools for failing to salute the flag and recite the Pledge of Allegiance.[46][47] In a 2002 case brought by atheist Michael Newdow, whose daughter was being taught the Pledge in school, the Ninth Circuit Court of Appeals ruled the phrase "under God" an unconstitutional endorsement of monotheism when the Pledge was promoted in public school. In 2004, the Supreme Court heard Elk Grove Unified School District v. Newdow, an appeal of the ruling, and rejected Newdow's claim on the grounds that he was not the custodial parent, and therefore lacked standing, thus avoiding ruling on the merits of whether the phrase was constitutional in a school-sponsored recitation. On January 3, 2005, a new suit was filed in the U.S. District Court for the Eastern District of California on behalf of three unnamed families. On September 14, 2005, District Court Judge Lawrence Karlton ruled in their favor. Citing the precedent of the 2002 ruling by the Ninth Circuit Court of Appeals, Judge Karlton issued an order stating that, upon proper motion, he would enjoin the school district defendants from continuing their practices of leading children in pledging allegiance to "one Nation under God."[48] A 2005 bill, H.R. 2389, to prohibit the Supreme Court and most federal courts from considering any legal challenges to the government's requiring or promoting of the Pledge of Allegiance, died in the Senate after having passed in the House. This action is viewed in general as court stripping by Congress of the constitutional power of the judiciary.
Even if a similar bill is enacted, its practical effect may not be clear: proponents of the bill have said that it is a valid exercise of Congress's power to regulate the jurisdiction of the federal courts under Article III, Section 2 of the Constitution, but opponents say Congress does not have the authority to prevent the Supreme Court from hearing claims based on the Bill of Rights, since this can only be done through Constitutional Amendment. Judges and legal analysts have said that if Congress can remove from the judicial branch the ability to determine if legislation is constitutional, the US separation of powers would be disturbed, or rendered non-functional.[49] Mark J. Pelavin, former Associate Director of the Religious Action Center of Reform Judaism, said of court stripping in regard to the Pledge of Allegiance that:[50][51] Today's House adoption of the so-called "Pledge Protection Act" is a shameful effort to strip our federal courts of their ability to uphold the rights of all Americans. By removing the jurisdiction of federal courts, including the Supreme Court, from cases involving the Pledge, this legislation sets a dangerous precedent: threatening religious liberty, compromising the vital system of checks and balances upon which our government was founded, and granting Congress the authority to strip the courts' jurisdiction on any issue it wishes. Today, the issue was the Pledge of Allegiance, but tomorrow it could be reproductive rights, civil rights, or any other fundamental concern. In 2006, in the Florida case Frazier v. Alexandre, a federal district court in Florida ruled that a 1942 state law requiring students to stand and recite the Pledge of Allegiance violates the First and Fourteenth Amendments of the U.S. Constitution.[52] As a result of that decision, a Florida school district was ordered to pay $32,500 to a student who chose not to say the pledge and was ridiculed and called "unpatriotic" by a teacher.[53] In 2009, a Montgomery County, Maryland, teacher berated and had school police remove a 13-year-old girl who refused to say the Pledge of Allegiance in the classroom. The student's mother, assisted by the American Civil Liberties Union of Maryland, sought and received an apology from the teacher, as state law and the school's student handbook both prohibit students from being forced to recite the Pledge.[54] On March 11, 2010, the Ninth Circuit Court of Appeals upheld the words "under God" in the Pledge of Allegiance in the case of Newdow v. Rio Linda Union School District.[55][56] In a 2–1 decision, the appellate court ruled that the words were of a "ceremonial and patriotic nature" and did not constitute an establishment of religion.[55] Judge Stephen Reinhardt dissented, writing that "the state-directed, teacher-led daily recitation in public schools of the amended 'under God' version of the Pledge of Allegiance... 
violates the Establishment Clause of the Constitution."[57] On November 12, 2010, in a unanimous decision, the United States Court of Appeals for the First Circuit in Boston affirmed a ruling by a New Hampshire lower federal court which found that the pledge's reference to God does not violate non-pledging students' rights if student participation in the pledge is voluntary.[58][59] A United States Supreme Court appeal of this decision was denied on June 13, 2011.[60][61] In September 2013, a case was brought before the Massachusetts Supreme Judicial Court, arguing that the pledge violates the Equal Rights Amendment of the Constitution of Massachusetts.[62] In May 2014, Massachusetts' highest court ruled that the pledge does not discriminate against atheists, saying that the words "under God" represent a patriotic, not a religious, exercise.[63] In February 2015 New Jersey Superior Court Judge David F. Bauman dismissed a lawsuit, ruling that " the Pledge of Allegiance does not violate the rights of those who don't believe in God and does not have to be removed from the patriotic message."[64] The case against the Matawan-Aberdeen Regional School District had been brought by a student of the district and the American Humanist Association that argued that the phrase "under God" in the pledge created a climate of discrimination because it promoted religion, making non-believers "second-class citizens." In a twenty-one page decision, Bauman wrote, "Under [the association members'] reasoning, the very constitution under which [the members] seek redress for perceived atheistic marginalization could itself be deemed unconstitutional, an absurd proposition which [association members] do not and cannot advance here."[64] Bauman said the student could skip the pledge, but upheld a New Jersey law that says pupils must recite the pledge unless they have "conscientious scruples" that do not allow it.[65][66] He noted, "As a matter of historical tradition, the words 'under God' can no more be expunged from the national consciousness than the words 'In God We Trust' from every coin in the land, than the words 'so help me God' from every presidential oath since 1789, or than the prayer that has opened every congressional session of legislative business since 1787.
Where was the first world classical tamil conference held?
Tamil Nadu🚨The World Tamil Conference is a series of occasional conferences held to discuss the social growth of the Tamil language. Each conference is attended by thousands of Tamil enthusiasts around the world. Conferences are hosted in various cities in India, as well as world cities with a significant Tamil population. The conference aims to promote the heritage of the Tamil language. A similar conference called the World Classical Tamil Conference 2010, unapproved by the International Association for Tamil Research, was held in Tamil Nadu, conducted by the Dravida Munnetra Kazhagam under the leadership of M. Karunanidhi. Not all agreed with the academic and intellectual rigour of the latter event. Despite these criticisms, the staging of such a large event showcasing the value of Tamil language and culture was widely appreciated in the state of Tamil Nadu and credited to the DMK leader.[1][2][3] The next conference will be held on July 3–4, 2019, in Chicago. The 10th conference will be jointly hosted by the Federation of Tamil Sangams in North America (FeTNA) and the Chicago Tamil Sangam (CTS).[4]
What was the population of india in 1950?
361,088,090🚨The 1951 Census of India was the 9th in a series of censuses held in India every decade since 1871.[1] It was also the first census after independence and the Partition of India,[2] and the first to be conducted under the Census of India Act, 1948. The population of India was counted as 361,088,090 (1:0.946 male:female).[3] The total population had increased by 42,427,510, 13.31% more than the 318,660,580 people counted during the 1941 census.[4] No census was done for Jammu and Kashmir in 1951 and its figures were interpolated from the 1941 and 1961 state censuses.[5] The National Register of Citizens of India (NRC) was prepared soon after the census.[6][7] In 1951, at the time of this first post-independence population census, just 18% of Indians were literate while life expectancy was 32 years.[8] Based on the 1951 census of displaced persons, 7,226,000 Muslims went to Pakistan (both West and East) from India while 7,249,000 Hindus and Sikhs moved to India from Pakistan (both West and East).[9] Separate figures for Hindi, Urdu, and Punjabi were not issued, because the returns were intentionally recorded incorrectly in states such as East Punjab, Himachal Pradesh, Delhi, PEPSU, and Bilaspur.[10] Hindus numbered 306 million (84.1%)[11] and Muslims 34 million (9.8%)[12] in the 1951 census.[13][14][15][16][17][18] The 1951 Indian census also showed that there were 8.3 million Christians.[13] Hindus had comprised about 66 per cent of the population of India before partition.
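As a quick arithmetic check, using nothing beyond the two census totals quoted above, the reported increase and growth rate follow directly from the counts: $361{,}088{,}090 - 318{,}660{,}580 = 42{,}427{,}510$, and $42{,}427{,}510 / 318{,}660{,}580 \approx 0.1331$, i.e. about 13.31%.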
Who began the first modern industrial research laboratory?
General Electric🚨General Electric Research Laboratory was the first industrial research facility in the United States. Established in 1900, the lab was home to the early technological breakthroughs of General Electric and created a research and development environment that set the standard for industrial innovation for years to come.[3] It developed into GE Global Research that now covers an array of technological research, ranging from healthcare to transportation systems, at multiple locations throughout the world. Its campus in Schenectady, New York was designated a National Historic Landmark in 1975.[2][4] Founded in 1900 by Thomas Edison, Willis R. Whitney, and Charles Steinmetz, this lab defined industrial research for years to come. Elihu Thomson, one of the founding members of the laboratory, summed up the goal of the lab saying, "It does seem to me therefore that a Company as large as the General Electric Company, should not fail to continue investing and developing in new fields: there should, in fact, be a research laboratory for commercial applications of new principles, and even for the discovery of those principles."[3] Furthermore, Edwin W. Rice, founding vice president, said they wanted to "establish a laboratory to be devoted exclusively to original research. It is hoped by this means that many profitable fields may be discovered."[5] Whitney and the founders of the research lab took many of their lab ideals from a German university model. German universities allowed professors to research and experiment with their own interests to seek further knowledge without having commercial or economic interests in mind. Other German scientists also researched exclusively with business in mind. But, these two views contributed to a successful relationship between science and industry. It was this success that influenced Whitney in his vision for the GE Research Lab.[6] The laboratory began at a time when the American electrification process was in its infant stage. General Electric became the leader of this move toward electrifying the United States and developing new technologies for many other science and technology fields. Willis Whitney and his assistant, Thomas Dempster, were the key researchers in developing the electrical technology that allowed the laboratory to continue to grow.[3] The lab grew from 8 people to 102 people by 1906, which included scientifically trained researchers that made up 40% of the staff. Whitney believed in exploratory scientific research, with the goal of creating new commercial products. These two goals appealed to General Electric. For researchers, the lab provided time and money for experimentation, research, and personal interests without putting a high demand on developing theories or teaching.[7] Nearly 30 years after its founding, the laboratory had expanded the staff to more than 400 chemists, physicists, and electrical engineers, plus their assistants.[5] It took several years for the lab to follow through with the vision to create all original innovations, instead of improving on the inventions already in place. GE's earliest project was perfecting the incandescent light bulb. In 1908, engineer and new head researcher William Coolidge invented the ductile tungsten light bulb filament, providing a more durable and long-lasting light filament than the existing technology. 
"The invention secured GE's technological leadership in the market and epitomized the role of the GE research lab bringing innovation to the marketplace."[3] But, that work was still an improvement on existing technology and nothing entirely new. In the coming years, GE scientists earned two Nobel Prizes in chemistry and physics. In 1932, Irving Langmuir won the Nobel Prize in chemistry for his work on surface chemical reactions which helped him develop the gas-filled light bulb in 1916. After patenting many inventions, Langmuir developed his new light bulb which reinvented lights altogether. By 1928, due to Langmuir's innovation, GE held 96% of incandescent light sales in America. That entirely new invention set GE on a path to follow through with Whitney and Rice's vision for the lab.[7] Starting with the success of the incandescent and gas-filled light bulbs, General Electric expanded its research to a range of technological and scientific fields. It strove for commercial goals in any innovation they achieved. Throughout its history, the General Electric Research Laboratory has earned thousands of patents for innovative technology, redefining industries and commercial products.[3] In 1999, the laboratory became GE Global Research after opening a research center in Bangalore, India. GE now has research laboratories in New York, Oklahoma, India, China, Germany, and Brazil, all of which make up GE's push for global innovation. GE has expanded its research beyond lighting to appliances, aviation, electrical distribution, energy, healthcare, media & entertainment, oil & gas, transportation, and water, along with numerous other fields.[8] They employ 3,000 employees and continue to bring innovation and technology to the world, the same goal of General Electric that was first proposed by Whitney and Steinmetz.[3]
What is the biggest festival in the uk?
Glastonbury🚨There are a large number of music festivals in the United Kingdom, covering a wide variety of genres. Some of the U.K.'s music festivals are world-renowned and have been held for many years including the world's largest greenfield festival Glastonbury, which has been held since the 1970s. Large-scale modern music festivals began in the 1960s with festivals such as the Isle of Wight Festival and following the success of Woodstock in the United States[citation needed] and free festivals. Some began as jazz festivals - Reading Festival began as the National Jazz and Blues Festival in the 1960s and the first Glastonbury Festival was the 1970 Pilton Pop, Blues & Folk Festival. In the 21st century the number of festivals has grown significantly,[1] particularly with the emergence of smaller-scale "boutique" festivals. However, in 2011 and 2012, several festivals were cancelled at short notice - some due to weather conditions and some due to poor sales - prompting fears that the festival market is saturated.[2] Some festivals do not appear in the same location every year, but in different places around the country, or even in other countries. Notable festivals to appear in the UK include:
Where is phi phi island located in thailand?
between the large island of Phuket and the west Strait of Malacca coast of the mainland🚨The Phi Phi Islands (RTGS: Mu Ko Phiphi) are an island group in Thailand, between the large island of Phuket and the west Strait of Malacca coast of the mainland. The islands are administratively part of Krabi province. Ko Phi Phi Don ("ko" meaning "island" in the Thai language) is the largest and most populated island of the group, although the beaches of the second largest island, Ko Phi Phi Lee (or "Ko Phi Phi Leh"), are visited by many people as well. The rest of the islands in the group, including Bida Nok, Bida Noi, and Bamboo Island (Ko Mai Phai), are not much more than large limestone rocks jutting out of the sea. The islands are reachable by speedboat or long-tail boat, most often from Krabi Town or from various piers in Phuket Province. Phi Phi Don was initially populated by Muslim fishermen during the late 1940s, and later became a coconut plantation. The Thai population of Phi Phi Don remains more than 80% Muslim, although the actual population, if laborers (especially from the north-east) are counted, is now much more Buddhist. The population is between 2,000 and 3,000 people (2013). The islands came to worldwide prominence when Ko Phi Phi Leh was used as a location for the 2000 British-American film The Beach. This attracted criticism, with claims that the film company had damaged the island's environment, since the producers bulldozed beach areas and planted palm trees to make it resemble the description in the book,[1] an accusation the film's makers contest. An increase in tourism was attributed to the film's release, which resulted in more waste on the islands and more development in and around Phi Phi Don Village. Phi Phi Lee also houses the "Viking Cave", where there is a thriving industry harvesting edible birds' nests. Ko Phi Phi was devastated by the Indian Ocean tsunami of December 2004, when nearly all of the island's infrastructure was destroyed. As of 2010, most, but not all, of this had been restored.[citation needed] From archaeological discoveries, it is believed that the area was one of the oldest communities in Thailand, dating back to the prehistoric period. It is believed that Krabi province may have taken its name from the word krabi, which means "sword"; this may come from a legend that an ancient sword was unearthed prior to the city's founding. The name "Phi Phi" (pronounced "pee-pee") originates from Malay. The original name for the islands was Pulau Api-Api ("the fiery isle"). The name refers to the Pokok Api-Api, or "fiery tree" (grey mangrove), which is found throughout the island. There are six islands in the group known as Phi Phi. They lie 40 kilometres (25 miles) south-east of Phuket and are part of Hat Nopparat Thara-Ko Phi Phi National Park,[2] which is home to an abundance of corals and marine life. There are limestone mountains with cliffs, caves, and long white sandy beaches.[3] The national park covers a total area of 242,437 rai (38,790 ha). Phi Phi Don and Phi Phi Lee are the largest and best-known islands. Phi Phi Don covers 9.73 square kilometres (3.76 square miles): 8 kilometres (5.0 miles) in length and 3.5 kilometres (2.2 miles) wide. Phi Phi Lee is 2 kilometres (1.2 miles). In total, the islands occupy 12.25 square kilometres (4.73 square miles). 
There are two administrative villages on Ko Phi Phi under the administration of Ao Nang sub-district, Mueang district, Krabi Province. There are nine settlements under these two villages. Hat Noppharat Thara - Mu Ko Phi Phi National Park is influenced by tropical monsoon winds. There are two seasons: the rainy season from May till December and the hot season from January till April. Average temperatures range between 17 and 37 °C (63 and 99 °F). Average rainfall per year is about 2,231 millimetres (87.8 inches), with the wettest month being July and the driest February.[2] Since the re-building of Ko Phi Phi after the 2004 tsunami, paved roads now cover the vast majority of Ton Sai Bay and Loh Dalum Bay. All roads are for pedestrian use only, with push carts used to transport goods and bags. The only permitted motor vehicles are reserved for emergency services.[citation needed] Bicycling is the most popular form of transport in Ton Sai. Bicycles have been banned on the island except for children.[citation needed] The nearest airports are at Krabi, Trang, and Phuket. All three have direct road and boat connections.[citation needed] There are frequent ferry boats to Ko Phi Phi from Phuket, Ko Lanta, and Krabi town starting at 08:30. Last boats from Krabi and Phuket depart at 14:30. In the "green season" (Jun-Oct), travel to and from Ko Lanta is via Krabi town only.[citation needed] There is a large modern deep-water government pier on Tonsai Bay, Phi Phi Don Village, completed in late 2009. It takes in the main ferry boats from Phuket, Krabi, and Ko Lanta. Visitors to Phi Phi Island must pay 20 baht on arrival at the pier. Dive boats, longtail boats, and supply boats have their own drop-off points along the piers, making the pier highly efficient in the peak season. The islands feature beaches and clear water that have had their natural beauty protected by national park status. Tourism on Ko Phi Phi, as in the rest of Krabi Province, has exploded since the filming of the movie The Beach. In the early 1990s, only adventurous travelers visited the island. Today, Ko Phi Phi is one of Thailand's most famous destinations for scuba diving and snorkeling, kayaking and other marine recreational activities. The number of tourists visiting the island every year is so high that Ko Phi Phi's coral reefs and marine fauna have suffered major damage as a result. There are no hotels or other types of accommodation on the smaller island, Ko Phi Phi Lee. The only way to spend the night on this island is to take a guided tour to Maya Bay and sleep in a tent.[4] There is a small hospital on Phi Phi Island for emergencies. Its main purpose is to stabilize emergencies and evacuate patients to a Phuket hospital. It is between the Phi Phi Cabana Hotel and the Ton Sai Towers, about a 5–7 minute walk from the main pier. On 26 December 2004, much of the inhabited part of Phi Phi Don was devastated by the Indian Ocean tsunami. The island's main village, Ton Sai ("Banyan Tree"), is built on a sandy isthmus between the island's two long, tall limestone ridges. On both sides of Ton Sai are semicircular bays lined with beaches. The isthmus rises less than two metres (6.6 feet) above sea level. Shortly after 10:00 on 26 December, the water from both bays receded. When the tsunami hit, at 10:37, it did so from both bays, and met in the middle of the isthmus. The wave that came into Ton Sai Bay was three metres (9.8 feet) high. The wave that came into Loh Dalum Bay was 6.5 metres (21 feet) high. 
The force of the larger wave from Loh Dalum Bay pushed the tsunami and also breached low-lying areas in the limestone karsts, passing from Laa Naa Bay to Bakhao Bay, and at Laem Thong (Sea Gypsy Village), where 11 people died. Apart from these breaches, the east side of the island experienced only flooding and strong currents. A tsunami memorial was built to honor the deceased but has since been removed for the building of a new hotel in 2015. At the time of the tsunami, the island had an estimated 10,000 occupants, including tourists. After the tsunami, approximately 70% of the buildings on the island had been destroyed. By the end of July 2005, an estimated 850 bodies had been recovered, and an estimated 1,200 people were still missing. The total number of fatalities is unlikely to be known. Local tour guides cite the figure 4,000. Of Phi Phi Don residents, 104 surviving children had lost one or both parents. In the immediate aftermath of the disaster, the island was evacuated. The Thai government declared the island temporarily closed while a new zoning policy was drawn up. Many transient Thai workers returned to their home towns, and former permanent residents were housed in a refugee camp at Nong Kok in Krabi Province. On 6 January 2005, a former Dutch resident of Phi Phi, Emiel Kok, set up a voluntary organization, Help International Phi Phi ("HI Phi Phi"). HI Phi Phi recruited 68 Thai staff from the refugee camp, as well as transient backpacker volunteers (of whom more than 3,500 offered their assistance), and returned to the island to undertake clearing and rebuilding work. On 18 February 2005, a second organization, Phi Phi Dive Camp,[5] was set up to remove the debris from the bays and coral reef, most of which was in Ton Sai Bay. By the end of July 2005, 23,000 tonnes of debris had been removed from the island, of which 7,000 tonnes had been cleared by hand. "We try and do as much as possible by hand," said Kok, "that way we can search for passports and identification." The majority of buildings that were deemed fit for repair by government surveyors had been repaired, and 300 businesses had been restored. HI Phi Phi was nominated for a Time Magazine Heroes of Asia award.[6] As of 6 December 2005, nearly 1,500 hotel rooms were open, and a tsunami early-warning alarm system had been installed by the Thai government with the help of volunteers. Since the tsunami, Phi Phi has come under greater threat from mass tourism. Dr Thon Thamrongnawasawat, an environmental activist and member of Thailand's National Reform Council, is campaigning to have Phi Phi tourist numbers capped before its natural beauty is completely destroyed. With southern Thailand attracting thousands more tourists every day, Dr Thon makes the point that the ecosystem is under threat and is fast disappearing. "Economically, a few people may be enriched, but their selfishness will come at great cost to Thailand", says Dr Thon, a marine biology lecturer at Kasetsart University and an established environmental writer.[7] More than one thousand tourists arrive on Phi Phi daily. This figure does not include those who arrive by chartered speedboat or yacht. Phi Phi produces about 25 tonnes of solid waste a day, rising to 40 tonnes during the high season. All tourists arriving on the island pay a 20-baht fee at Ton Sai Pier to assist in "keeping Koh Phi Phi clean". "We collect up to 20,000 baht a day from tourists at the pier. 
The money is then used to pay a private company to haul the rubbish from the island to the mainland in Krabi to be disposed of", Mr Pankum Kittithonkun, Ao Nang Administration Organization (OrBorTor) President, said. The boat takes about 25 tonnes of trash from the island daily, weather permitting. Ao Nang OrBorTor pays 600,000 baht per month for the service. During the high season, an Ao Nang OrBorTor boat is used to help transport the overflow of rubbish. Further aggravating Phi Phi's waste issues is sewage. "We have no wastewater management plant there. Our only hope is that hotels, restaurants and other businesses act responsibly – but I have no faith in them," Mr Pankum said. "They of course have to treat their own wastewater before releasing it into the sea, but they very well could just be turning the devices on before officers arrive to check them." The fundamental issue is that the budget allocated for Ao Nang and Phi Phi is based on its registered population, not on the number of people it plays host to every year, Mr Pankum said.[8]
When did rocko's modern life first air?
September 18, 1993🚨 Rocko's Modern Life is an American animated sitcom created by Joe Murray for Nickelodeon. The series centers on the surreal life of an anthropomorphic Australian-immigrant wallaby named Rocko as well as his friends: the gluttonous steer Heffer, the neurotic turtle Filburt, and Rocko's faithful dog Spunky. It is set in the fictional town of O-Town. Throughout its run, the show was controversial for its adult humor, including double entendre, innuendo, and satirical social commentary, similar to The Ren & Stimpy Show. Murray created the title character for an unpublished comic book series in the late 1980s, and later reluctantly pitched the series to Nickelodeon, who were looking for edgier cartoonists for their new Nicktoons. The network gave the staff a large amount of creative freedom, with the writers targeting both children and adults. The show premiered on September 18, 1993 and ended on November 24, 1996, totaling four seasons and 52 episodes. A TV special was announced in August 2016, and is set to premiere in 2018.[1] The show is notable for launching the careers of voice actors, including Tom Kenny and Carlos Alazraqui. After the show's cancellation, much of the staff regrouped to work on SpongeBob SquarePants, created by Rocko's creative director Stephen Hillenburg. Rocko's Modern Life follows the life of a timid Australian immigrant wallaby named Rocko (voiced by Carlos Alazraqui), who encounters various dilemmas and situations regarding otherwise mundane aspects of life. His best friends are: Heffer Wolfe (Tom Kenny), a fat and enthusiastic steer; Filburt (Mr. Lawrence), a neurotic turtle who often feels uncomfortable or disturbed; and his faithful dog Spunky (Alazraqui). Living next door to Rocko are a middle-aged couple, Ed Bighead (Charlie Adler), a cynical and crass toad who despises Rocko; and his compassionate and more friendly wife Bev (Adler). All of the characters in Rocko's Modern Life are anthropomorphic animals of varying species, the vast majority of whom are mentally unstable. Murray said that he matched personalities of his characters to the various animals in the series to form a social caricature.[2] Rocko's Modern Life is set in a generic fictional American town called O-Town, located near the Great Lakes. Places in the town include: Chokey Chicken (later renamed "Chewy Chicken" due to the former name referring to masturbation), a parody of KFC and a favorite restaurant/hang-out place for Rocko, Heffer, and Filburt; Conglom-O Corporation, a megacorporation with the slogan "We own you" that owns everything in town; Heck, a place of eternal torment run by Peaches where "bad people" go when they die; Holl-o-Wood, a town that resembles Hollywood; and Kind of a Lot O' Comics, a comic book store owned by a cruel toad named Mr. Smitty, where Rocko works. Many of the locations in Rocko's Modern Life have the letter "O" in them; for example, O-Town and Conglom-O Corporation. When asked about the use of "O" in his show, Murray said: I always got a big kick out of the businesses that were 'House-O-Paint', or 'Ton-O-Noodles', because their names seemed to homogenize what they sold, and strip the products of true individuality and stress volume ... and we all know, the American dream is volume! So what better company to create volume than 'Conglom-O', and since a majority of the town worked at Conglom-O, it should be called 'O' Town. I also wanted the town to be 'anytown' USA, and I used to love sports players with a big ZERO on their back. 
It was funny to me.[3] Originally, the character appeared in an unpublished comic book titled Travis. Murray tried selling the comic book in the late 1980s, between illustrating jobs, and did not find success in getting it into production. Many other characters appeared in various sketchbooks. He described the early 1990s animation atmosphere as "ripe for this kind of project. We took some chances that would be hard to do in these current times (the 1990s)".[4] Murray wanted funding for his independent film My Dog Zero, so he wanted Nickelodeon to pre-buy television rights for the series. He presented a pencil test to Nickelodeon, which afterward became interested in buying and financing the show. Murray had never worked in television before.[5] The industry was coming out of a "rough period" and Murray wanted to "shake things up a bit".[6] Linda Simensky, then in charge of animation development in Nickelodeon, described the Nicktoons lineup and concept to Murray. He originally felt skepticism towards the concept of creating a Nicktoon as he disliked television cartoons. Simensky told him that Nicktoons differed from other cartoons. He then told her that he believed that My Dog Zero would not work as a cartoon. He then researched Nickelodeon at the library and found that Nickelodeon's "attitude was different than regular TV".[3] The cable network providers were "making their own rules": for example, Murray stated that he "didn't write for children", which the executives were fine with.[7] Murray was unsure at first, but was inspired by independent animation around him, such as Animation Celebration and MTV's Liquid Television, and gave the network a shot.[7] At the time, Nickelodeon was selling itself as a network based as much around edge as around kids' entertainment. It aimed to appeal to college students and parents as much as children.[8] Murray developed the Rocko character after visiting a zoo in the Bay Area and coming across a wallaby that seemed to be oblivious to the chaos around him.[6] Murray combed through his sketchbooks, developed the Rocko's Modern Life concept, and submitted it to Nickelodeon, believing that the concept would likely be rejected. Murray felt they would not like the pilot, and he would just collect his sum and begin funding his next independent film.[7] According to Murray, around three or four months later he had "forgotten about" the concept and was working on My Dog Zero when Simensky informed him that Nickelodeon wanted a pilot episode. Murray said that he was glad that he would get funding for My Dog Zero.[3] On his website he describes My Dog Zero as "that film that Linda Simensky saw which led me to Rocko."[9] "Sucker for the Suck-O-Matic" was originally written as the pilot; the executives decided that Heffer Wolfe, one of the characters, would be "a little too weird for test audiences". Murray, instead of removing Heffer from "Sucker for the Suck-O-Matic", decided to write "Trash-O-Madness" as the pilot episode.[3] In the original series pilot, Rocko was colored yellow. His color was changed when a toy merchandising company informed Nick they were interested in marketing toys but did not want to market Rocko because "they already had a yellow character". Murray changed Rocko's color to beige, but after the pilot aired, the company opted out of producing toys for the series, so the color change was pointless. 
When the series was in development prior to the release of the first episode, it had the title The Rocko Show.[10] In November 1992, two months prior to the production of season 1 of Rocko's Modern Life, Murray's first wife committed suicide.[11] Murray had often blamed his wife's suicide on the show being picked up. He said "It was always an awful connection because I look at Rocko as such a positive in my life."[12] Murray felt that he had emotional and physical "unresolved issues" when he moved to Los Angeles. He describes the experience as like participating in a "marathon with my pants around my ankles". Murray initially believed that he would create one season, move back to the San Francisco Bay Area, and "clean up the loose ends I had left hanging". Murray said that he felt surprised when Nickelodeon approved new seasons;[3] Nickelodeon renewed the series for its second season in December 1993.[13] After season 3, he decided to hand the project to Stephen Hillenburg, who performed most of the work for season 4; Murray continued to manage the cartoon.[3] He said that he would completely leave the production after season 4. He also said that he encouraged the network to continue production, but Nickelodeon eventually decided to cancel the series. He described all fifty-two episodes as "top notch", and in his view the quality of a television show may decline as production continues "when you are dealing with volume".[3] On his website he said that, "In some ways it succeeded and in some ways failed. All I know it developed its own flavor and an equally original legion of fans."[4] In a 1997 interview Murray said that he at times wondered if he could restart the series; he felt the task would be difficult.[3] The show was jointly produced between Games Animation and Joe Murray Productions. Since Nickelodeon did not have an animation studio, it had to contract out to other studios. After incidents with The Ren & Stimpy Show creator John Kricfalusi, Nickelodeon began to trust its creators less and formed its own studio, Games Animation.[7] However, Murray recalls that they were still able to get a lot done independently. Murray has likened the independence to that of "Termite Terrace" (Warner Bros. Cartoons) from the 1930s. As Nickelodeon began to have more and more success with its animated cartoons, Murray said the "Termite Terrace" mentality was not working as much.[7] Producer Mary Harrington made the move from New York City to Los Angeles to set up Games Animation, in order to produce Rocko's Modern Life. The crew first began production on the show in January 1993.[5] Rocko's Modern Life was Nickelodeon's first in-house animated production.[5] Murray's Joe Murray Productions and Games Animation rented office space on Ventura Boulevard in the Studio City neighborhood of the San Fernando Valley region of Los Angeles, California.[14] The production moved to a different office building on Vineland Avenue in Studio City. Executives did not share space with the creative team.[15][16] Murray rented a floor in the Writers Guild of America, West building, although the team of Rocko was not a part of the union, which the staff found ironic.[7] Sunwoo Entertainment, and later Rough Draft Studios, assembled the animation.[17] According to Murray, as Rocko's Modern Life was his first television series, he did not know about the atmosphere of typical animation studios. 
Murray said that he opted to operate his studio in a manner similar to his Saratoga, California studio, which he describes as "very relaxed".[3] His cadre included many veterans who, according to him, described the experience as "the most fun they had ever had!" Saying that the atmosphere was "not my doing", he credited his team members for collectively contributing.[3] Murray described the daily atmosphere at the studio as "very loose", adding that the rules permitted all staff members to use the paging system to make announcements. He stated that one visitor compared the environment of the production studio to "preschool without supervision".[15][16] Murray stated that 70 people in the United States and over 200 people in South Korea and Japan animated the series.[3] Rick Bentley of the Ventura County Star said that it was unusual for a cartoon creator to select a wallaby as a main character. Bentley also stated that the Rocko universe was influenced by "everything from Looney Tunes to underground comics".[18] The staff of the show were fans of outrageous comedy, both animated and not animated. Tom Kenny cited Looney Tunes and SCTV as influences for the show, and also stated "I'm sure if you asked Joe Murray or Mr. Lawrence or any of those guys, especially in terms of animation, the weirdest cartoons would of course be our favorites: those weird '30s Fleischer brothers Betty Boop cartoons and stuff like that."[19] Murray produced the pilot episode, "Trash-O-Madness", at his studio in Saratoga; he animated half of the episode, and the production occurred entirely in the United States, with animation in Saratoga and processing in San Francisco.[20] While directing during recording sessions, Murray preferred to be on the stage with the actors instead of "behind glass" in a control room, which he describes as "the norm" while making animated series.[21] He believes that, due to his lack of experience with children, Rocko's Modern Life "skewed kind of older".[2] Murray noted, "There's a lot of big kids out there. People went to see 'Roger Rabbit' and saw all these characters they'd grown up with and said, 'Yeah, why don't they have something like that anymore?'"[22] When he began producing Rocko, he says that his experience in independent films initially led him to attempt to micromanage many details in the production. He said that the approach, when used for production of television shows, was "driving me crazy". This led him to allow other team members to manage aspects of the Rocko's Modern Life production.[2] Director and later creative director Stephen Hillenburg met Murray at an animation film festival where he was showing his three short films. Murray hired Hillenburg as a director on the series, making it Hillenburg's first job in the animation business.[23] Murray designed the logo of the series. He said that, after his design drifted from the original design, Nickelodeon informed him of how it intended the logo to look. Murray also designed the covers of the comic book, the VHS releases, and the DVD releases.[24] The writers aimed to create stories that they describe as "strong" and "funny". The writers, including George Maestri and Martin Olson, often presented ideas to Murray while eating hamburgers at Rocky's, a restaurant formerly located on Lankershim in the North Hollywood section of the San Fernando Valley. He took his team members on "writing trips" to places such as Rocky's, the La Brea Tar Pits, and the wilderness. 
If he liked the story premises, the writers produced full outlines from the premises. Outlines approved by both him and Nickelodeon became Rocko's Modern Life episodes. Maestri describes some stories as originating from "real life" and some originating from "thin air".[25][26] Murray stated that each episode of Rocko's Modern Life stemmed from the personal experiences of himself and/or one or more of the directors or writers.[3] He said that he did not intend to use formulaic writing seen in other cartoons; he desired content that "broke new ground" and "did things that rode the edge", and that could be described as "unexpected". He did not hire writers who had previous experience with writing cartoons, instead hiring writers who worked outside of animation, including improv actors and comic artists. He said that story concepts that "ever smacked close to some formula idea that we had all seen before" received immediate rejection.[27] Jeff "Swampy" Marsh, a storyboard writer, says that writers of Rocko's Modern Life targeted children and adults. He cites Rocky and Bullwinkle as an example of another series that contains references indecipherable by children and understood by adults. Aiming for a similar goal, Marsh described the process as "a hard job". According to him, when censors questioned proposed material, sometimes the team disagreed with the opinions of the censors and sometimes the team agreed with the rationale of the censors. He says that "many people" told him that the team "succeeded in this endeavor" and that "many parents I know really enjoyed watching the show with their kids for just this reason".[28] John Pacenti said the series "seems very much aimed at adults" "for a children's cartoon".[29] Marsh believes that the material written by Doug Lawrence stands as an example of a "unique sense of humor". For instance, Marsh credits Lawrence with the "pineapple references" adding that Lawrence believed that pineapples seemed humorous.[28] The staff drew upon Looney Tunes and the Fleischer cartoons to appeal to a wide demographic: having a certain adult sensibility but also enjoyed by kids.[19] Rocko's Modern Life has been described as similar to that of the output of Warner Bros Cartoons in the Golden Age: a visually driven show heavy on humor, sight gags, and good animation. Instead of a finished script, the animators usually received a three-page outline, requiring them to come up with a majority of the gags and dialogue. The animation team appreciated this approach, with storyboard artist Jeff Myers, formerly of The Simpsons, quoted as saying "The script [at The Simpsons] was carved in stone. Here it's [...] more of a challenge and a lot more fun when we're given a rough outline."[30] Murray's animation lacked parallel lines and featured crooked architecture similar to various Chuck Jones cartoons. In an interview he stated that his design style contributed to the show's "wonky bent feel".[3] Jean Prescott of the Sun Herald described the series as "squash-and-stretch".[31] A 1993 Houston Chronicle article described the series' setting as having a "reality that is 'squashed and stretched' into a twisted version of real life".[32] The background staff hand-painted backgrounds with Dr. 
Martin Dyes,[21] while each episode title card consisted of an original painting.[21] Linda Simensky said that she asked the creators of Rocko's Modern Life about why the women in the series were drawn to be "top-heavy", the creators told her that they believed that drawing women "the traditional way" was easier. Simensky described the creators as "talented guys" who formed "a boy's club" and added that "we pushed them to be funny, but a lot of their women are stereotypical".[33] There are three versions of the Rocko's Modern Life theme song. The first and original version can be heard playing throughout every episode in Season 1 except for episode 8. The second version of the theme song was a slightly remixed version of the first and was used for episode 8. Version 2 had high pitched, distorted voices in the chorus. The third version of the theme song was performed by Kate Pierson and Fred Schneider from The B-52's. They performed the Rocko's Modern Life theme song from the rest of the series. At first Murray wanted Paul Sumares to perform the theme song since Sumares created most of the music found in My Dog Zero. Murray wanted the same style in My Dog Zero exhibited in Rocko's Modern Life. Nickelodeon wanted a person with more experience.[10] According to Sumares, believing for the request to be a long shot, Murray asked for Danny Elfman and felt stunned when Nickelodeon decided to honor his request by asking Elfman to perform.[10] According to Murray, Elfman, his first choice, was booked. Therefore, he chose the B-52's, his second choice.[10] According to Sumares Murray decided to use the B-52's instead of Elfman. Murray states that the difference between the stories "could just be a recollection conflict, because Paul is a brilliant amazing guy."[10] Murray also sought Alan Silvestri. According to Sumares Viacom did not want to use Silvestri as the organization wanted a band "slightly older kids could identify with."[10] Pat Irwin, a veteran of many bands, including the New York-based instrumental group the Raybeats, and, a side gig, the B-52s, spent five years as a music director on the series. Leading a six-piece combo, Irwin brought together musicians such as trombonist Art Baron and drummer Kevin Norton.[34] Rocko's Modern Life has been noted for its racy humor.[35] Adults made up more than one-fifth of the audience for the show during its run.[36] The series contained numerous adult innuendos, such as Rocko's brief stint as a telephone operator in the episode "Canned": the instructions on the wall behind him helpfully remind all employees to "Be Hot, Be Naughty, and Be Courteous" while he flatly repeats "Oh baby" into the receiver.[37] Joe Murray noted that the season one segment "Leap Frogs" received "some complaints from some parents" due to its sexual humor, leading to Nickelodeon removing the episode from air for the remainder of the show's run, although it later aired on the cable channel Nicktoons, and was made available on DVD and video streaming sites such as Netflix.[38] In "The Good, the Bad and the Wallaby", Heffer encounters a milking machine and finds pleasure, although only his reactions are shown onscreen.[39] According to writer/director Jeff "Swampy" Marsh, the scene was originally supposed to have hearts appearing in Heffer's eyes at the climactic moment. 
Although it clearly wasn't going to be included, they described the scene to Nickelodeon censors anyway: "We described the scene, and then waited for the axe to fall, but all they said was 'can you change the hearts to stars?', we said sure, and it went in." The scene, as well as a scene showing Heffer's break-up with the machine, were later removed.[40] They are intact in the Canadian broadcasts of the episode, however. In addition, the uncut version can still be found on the VHS "Rocko's Modern Life: With Friends Like These". There were at least two occurrences of immediate censorship of the series. The original broadcast of the segment "Road Rash" featured a scene in which Rocko and Heffer stop at what is suggested to be a love hotel (the "No-Tell Motel") advertising "hourly rates" and ask the horse desk clerk for a room, who infers the two will be engaging in intercourse: "All night? [whistles] Wheeeooo! Okay."[39][40] The first airing of "Hut Sut Raw" included a scene in which Rocko is picking berries; upon picking one lower on the bush, a bear rushes out whimpering and grasping his crotch.[37] This scene is untouched in Canada. Both scenes were edited by Nickelodeon after their first broadcasts and are the only instances of censorship on the season two DVD, released in 2012. On the season three DVD, the "Wacky Delly" segment was shortened by approximately ten seconds to remove footage of Sal Ami repeatedly whacking Betty Bologna over the head with a telephone receiver. In addition, the restaurant named "Chokey Chicken" (a term for masturbation) was renamed "Chewy Chicken" for the series' fourth season.[41] As the series entered reruns after cancellation, more scenes were cut. The entire episode "Leap Frogs", in which Bev Bighead attempts to seduce Rocko, was skipped.[40] When Shout! Factory announced a DVD retail release for the series, there were concerns on whether Nickelodeon would allow Shout! to release the series complete with some of the racier humor that the network eventually cut out for reruns.[42] In the end, Shout! Factory only received materials from sources that were edited for broadcast, so the episodes still remained censored on the DVDs.[35][43] The only uncut release of the show on DVD so far was published in Germany in October 2013, although this release is still missing the uncut version of "Road Rash".[44] Rocko's Modern Life first-ran on Nickelodeon from 1993 to 1996, and was briefly syndicated to local stations by Nick during 1995 and 1996.[45] In 2004, the show briefly returned to Nickelodeon as part of U-Pick Live's Old School Pick, with select episodes airing on June 1 and June 11. In the summer of 2006, the series once again returned to Nick as part of the Nick Rewind block, and in 2007, it was shown on Superstuffed Nicksgiving Weekend. Reruns of Rocko's Modern Life aired on Nicktoons in the United States from May 1, 2002 to September 5, 2011. In the UK the series premiered on Nickelodeon UK on November 6, 1993.[46] The series was also screened on Channel 4 from August 9, 1994 until 2000. From 2002 to 2016, it also aired on Nicktoons in the United Kingdom.[47] MTV picked up Rocko's Modern Life from Nickelodeon in early 1994. In Malaysia, Rocko's Modern Life was aired on MetroVision. The series was also shown in Ukraine on ICTV. 
Rocko's Modern Life aired again during Nickelodeon's The '90s Are All That revival block on TeenNick in the US from September 5 to September 23, 2011, and from February 11 to March 1, 2013.[48] On the night leading into April Fools' Day 2013, TeenNick aired a prank "lost episode" of the series consisting solely of a still picture of a mayonnaise jar.[49] This is a reference to the two-part episode "Wacky Delly", in which the characters attempt to sabotage the show-within-a-show, Wacky Delly. The show then returned to the block, renamed The Splat, on October 6, 2015. In Australia, it aired on the free-to-air ABC from 1995 to 1999, and was broadcast on Nickelodeon from 1995 to 2014 and returned in 2015 to January 2016 to celebrate Nickelodeon's 20th Anniversary in Australia. In the early 2000s, Nickelodeon Japan marketed the show along with The Ren & Stimpy Show.[50] Murray said that the cartoon "resonated" with people because the scenarios depicted in the cartoon involving "the neurosis, the daily chores of everyday life" were based on Murray's own experiences "breaking out into the world" after leaving school.[51] The show was debuted in a preview on September 18, 1993, and officially premiered the following morning, to join Nickelodeon's Sunday morning animation block.[52] On September 18, the series' first night of airing, Rocko's Modern Life received a 3.0 in ratings. By January 31, 1994 the series' audience grew by 65%.[13] Rocko's Modern Life was at the time the network's highest-rated cartoon launch ever.[53] There was a brief period in 1993 when the network received numerous complaints from members of a religious group that Ren & Stimpy and Rocko's Modern Life were too adult-oriented to be shown to kids on Sunday mornings. They wanted the shows moved to a different time slot. The network was polite but did not make the programming change.[54] Initial reviews of Rocko's Modern Life were positive. The Miami Herald ran an article about series that were "rais[ing] the standards for children's programming", singling out Rocko's Modern Life as "definitely worth a look".[55] Jennifer Mangan of the Chicago Tribune likened the series to The Simpsons, noting the show as another example of adult animation that is "not for kids".[56] Newsday highlighted the show's twisted sight gags.[52] Ted Drozdowski of The Boston Phoenix stated in the "Eye pleasers" article that he enjoyed Rocko's Modern Life because of "jovial excitement", "good-hearted outrage", "humanity", and "pushy animated characterizations".[57] However, not all reviews were positive. Ken Tucker of Entertainment Weekly described the series as "a witless rip-off of Ren & Stimpy: mucus jokes without the redeeming surrealism or contempt for authority."[58] Charles Solomon of the Los Angeles Times called the series "rock bottom" and a "tasteless attempt to capture the Ren & Stimpy audience", mostly expressing displeasure at the crass humor.[59] Common Sense Media reviewer Andrea Graham, whose review is posted on Go.com, describes Rocko's Modern Life as "somewhat edgy" and gave the series four out of five stars. Graham also warned parents to watch for "innuendos".[60] The show has seen renewed acclaim. Brahna Siegelberg of Slate said that the aspect that was most compelling was that the show had "a really poignant critique of the materialist demands of American life". 
She added that she "realized that Rocko was really a show about how to navigate the adult world; one that could be appreciated by kids for its slapstick humor and absurdity, but had even more to say to young adults, like me".[61] IGN called the show a prime example of the "sophisticated, intelligent brand of children's programming" during Nickelodeon's golden age.[62] The A.V. Club also called the show "one of the best series" from that era, praising the show's "impressive commitment to expressive character acting, well-drawn sight gags, and cartoony jokes that play with the form's slapstick strengths."[8] New York compared the series' humor, in retrospect, to that of Office Space (1999) and praised the subversive, anti-corporate stories.[63] Timothy J. Borquez, Patrick Foley, Michael Giesler, Michael A. Gollorn, William B. Griggs, Tom Jeager, Gregory LaPlante, Timothy Mertens, and Kenneth Young of Rocko's Modern Life received a 1993 Daytime Emmy Award for Outstanding Film Sound Editing.[64] George Maestri was nominated for a CableACE Award for his Rocko's Modern Life writing.[65][66] The series won an Environmental Media Award in 1996 for the episode "Zanzibar!", a musical episode focusing on environmentalism, pollution, and deforestation.[67] The award was accepted by the episode's writers, Dan Povenmire and Jeff "Swampy" Marsh, future creators of the hit Disney animated series, Phineas and Ferb.[68] The fourth Nicktoon to debut, Rocko's Modern Life boasts a sizable cult fan base to this day.[8] Tom Kenny cited Rocko's Modern Life as vital in his learning how to do voiceover for animation. He recalled that seeing Charlie Adler have a two-way conversation with himself as the Bigheads without any edits was "dazzling".[19] Kenny described the show's impact in an interview, saying, "Rocko's Modern Life was just one of those shows that were the first break for a lot of people who went on to do other stuff in the business."[69] Some members of the Rocko's Modern Life staff created other successful ventures. Stephen Hillenburg pitched SpongeBob SquarePants to Nickelodeon in 1997. Murray said of the pitch, "If it goes well, it'll be a blessing to us all."[3] The network bought the show, which premiered in 1999, and it became a popular, critical and financial success, and one of the biggest shows on Nick. Hillenburg stated that he "learned a great deal about writing and producing animation for TV" from his time on Rocko's Modern Life.[70] Two writers for the series, Dan Povenmire and Jeff "Swampy" Marsh, went on to create Phineas and Ferb for the Disney Channel; the show became a ratings success and received numerous award nominations.[71] When Murray returned with a new animated series, Camp Lazlo, in 2005, much of the former staff of Rocko's Modern Life joined him.[2] Murray stated that "We always kept in touch and they told me to look them up if I ever did another project", adding that the crew already knew his sensibilities and had an extra decade's worth of experience. 
Carlos Alazraqui, who played Rocko, also ended up playing the main character of Lazlo.[2] Derek Drymon and Nick Jennings, both part of the staff, went on to be responsible for the tone and visual looks of a lot of very successful animated series that came later.[19] By January 31, 1994 Nickelodeon received ten "licensing partners" for merchandise for the series.[13] Hardee's distributed Rocko toys.[72] Viacom New Media released one game based on the show, Rocko's Modern Life: Spunky's Dangerous Day, in the United States for the Super Nintendo Entertainment System. In addition, Microsoft's Nickelodeon 3-D Movie Maker features various characters from the show. Rocko also appeared in the game Nicktoons: Attack of the Toybots. Rocko and Heffer also make a cameo appearance in Nicktoons MLB. Nick.com created two free online games featuring Rocko, using Shockwave Flash (which requires the Shockwave plugin).[73][74] Hot Topic sells Rocko's Modern Life merchandise such as T-shirts, wrist bands, key chains and other items as part of their Nick Classic line. In 1997, plushes of Rocko, Spunky, and Heffer were released. They are hard to find in the present day and age, and in 2016, a different Rocko plush was released. During Tom DeFalco's Editor-in-Chief career, Marvel Comics produced a seven-issue Rocko's Modern Life comic book series.[75] Marvel published the series from June 1994 to December 1994 with monthly releases. Nickelodeon approached Marvel, asking the company to produce comic book series for Rocko's Modern Life and Ren and Stimpy. Marvel purchased the license for Rocko from Nickelodeon. The staff created the comics, and Susan Luposniak, a Nickelodeon employee,[76] examined the comics before they were released.[77] Joe Murray said in a December 2, 2008 blog entry that he drew some of the pages in the comic book series.[78] The comics contain stories not seen in the television show. In addition, the comic book series omits some television show characters and places, while some original places and characters appear in the comics. John "Lewie" Lewandowski wrote all of the stories except for one; Joey Cavalieri wrote "Beaten by a Club", the second story of Issue #4. Troy Little, a resident of Monroe, Oregon, wrote to Marvel requesting that the title for the comic's letters column should be "That's Life". In Issue 3, published in August 1994, the editors decided to use the title for the comic's "Letters to the Editor" section.[76][77] In Issue 5, published in October 1994, the editors stated that they were still receiving suggestions for the title of the comic even though they had decided on using "That's Life" by Issue 3.[79] On December 6, 2017, Boom! Studios began publishing a new Rocko's Modern Life comic book series.[80] Fans have requested that Nickelodeon produce a DVD collection of the series for years. Murray has often got e-mails from fans, and his top question was "When will Rocko be on DVD?"[7] Prior to the official DVD releases, Murray stated that he had not heard of any plans for a DVD release and that there are several illegal DVD releases of the series sold on eBay. He commented, "But at least someone is trying to give Rocko fans what they want. 
Because Nickelodeon sure isn't doing it."[81] Murray worked with his legal team to regain the rights, and an official DVD was released.[82] The first home video release of the series in the United States was in 1995, when selected episodes were released on VHS by Sony Wonder.[83] Sony Wonder used Rocko's Modern Life as one of its "leading brands" in order to break into the market.[84] In addition, the "How to Tell if Your Dog is Brainless" short can only be found on the Sony Wonder version of the VHS "Rocko's Modern Life: Machine Madness". Paramount Home Entertainment re-released the tapes in 1997.[85][86] In July 2008, Rocko's Modern Life was added to the iTunes Store as a part of the "Nick Rewind" collection, in four best-of volumes.[87] Eventually, in August 2008, Nickelodeon joined forces with CreateSpace, part of the Amazon.com Inc. group of companies, to make a number of animated and live-action shows available on DVD, many for the first time. The DVDs were published via CreateSpace DVD on Demand, a service that manufactures discs as soon as customers order them on Amazon.com. Rocko's Modern Life was made available in two best-of collections released in 2008,[88][89] and a third best-of collection followed in 2009. All four seasons were available in streaming format on Netflix until May 31, 2013.[90] In March 2011, Shout! Factory announced that they would release Season 1 in an official box set on June 21, 2011. The two-disc set received relatively positive reviews, drawing criticism only for its video quality and the lack of bonus features.[43] According to Joe Murray's website, he struck a deal with Shout! Factory to create the artwork for the Season 2 set; the special features were yet to be announced when he wrote the entry.[91] Season 2 was released on February 7, 2012,[92] with Season 3 following on July 3, 2012.[93] On December 3, 2012, creator Joe Murray announced that, due to strong DVD sales of the first three seasons, Shout! Factory would release Rocko's Modern Life: The Complete Series on DVD on February 26, 2013, along with bonus material from the October 2012 Rocko's Live event; Murray also mentioned that Season 4 would be released soon after the complete series set.[94] On February 26, 2013, the entire fifty-two-episode series was made available in the United States and Canada.[95] The fourth and final season was released on October 15, 2013.[96] In Australia, the first three seasons are available on DVD. Season 1 and Season 2 were released on April 3, 2013.[97][98] Season 3 was released on June 5, 2013.[99] On August 1, 2016, a Collector's Edition box set containing all four seasons was released. It is not known whether season four has been released individually. Limited Edition versions with 3D artwork were also released for Season One[100] and Season Two.[101] Exclusive DVDs can still be bought at JB Hi-Fi or rented at Video Ezy and Blockbuster Video. Extras listed across these releases include: the pilot ("Trash-O-Madness"); 'Behind the characters with series creator Joe Murray: Rocko, Heffer, Filburt and The Bigheads'; 'Selected scene commentary by creator Joe Murray'; '"Wacky Delly" Live 2012'; and all special features (except season one). The complete series was released in Germany on October 4, 2013.
The limited edition eight-disc set includes a 3D card, sticker set, postcards, episode guide and poster, as well as bonus features included on the discs.[102] Because the show aired uncensored on Nickelodeon Germany in the mid-1990s, the German publishers were able to reconstruct a nearly uncensored release of the show, although it is still missing the uncut version of "Road Rash". So far, it is the only official DVD box set available that is almost completely uncut. "The Best of Rocko's Modern Life" was released in the United Kingdom in 2012 as four one-disc volumes, sold exclusively in Poundland stores. Plans for an official release or complete series set in the UK have not been announced. In September 2015, Nickelodeon stated that some of its old properties were being considered for revival, Rocko's Modern Life among them.[103] On August 11, 2016, Nickelodeon announced that it had greenlit a one-hour TV special, with Joe Murray as executive producer.[104] Murray revealed to Motherboard that in the special, Rocko would come back to O-Town after being in space for 20 years, and that it would focus on people's reliance on modern technology.[105] On June 22, 2017, it was announced that the title of the special would be Rocko's Modern Life: Static Cling and that it would air in 2018. Nickelodeon also reconfirmed that the entire main and recurring cast would reprise their roles, alongside new voice actors Steve Little and co-director Cosmo Segurson.[106] A special sneak peek was released to coincide with the Rocko panel at San Diego Comic-Con 2017.[107]
When did the US institute an income tax?
during the Civil War🚨The history of taxation in the United States begins with the colonial protest against British taxation policy in the 1760s, leading to the American Revolution. The independent nation collected taxes on imports ("tariffs"), whiskey, and (for a while) on glass windows. States and localities collected poll taxes on voters and property taxes on land and commercial buildings. There are also state and federal excise taxes. State and federal inheritance taxes began after 1900, while the states (but not the federal government) began collecting sales taxes in the 1930s. The United States imposed income taxes briefly during the Civil War and the 1890s. In 1913, the 16th Amendment was ratified, permanently legalizing an income tax. Taxes were low at the local, colonial, and imperial levels throughout the colonial era.[1] The issue that led to the Revolution was whether Parliament had the right to impose taxes on the Americans when they were not represented in Parliament. The Stamp Act of 1765 was the fourth Stamp Act to be passed by the Parliament of Great Britain and required all legal documents, permits, commercial contracts, newspapers, wills, pamphlets, and playing cards in the American colonies to carry a tax stamp. The Act took effect on November 1, 1765, and was enacted in order to defray the cost of maintaining the military presence protecting the colonies. Americans rose up in strong protest, arguing in terms of "No Taxation without Representation". Boycotts forced Britain to repeal the stamp tax, while convincing many British leaders it was essential to tax the colonists on something in order to demonstrate the sovereignty of Parliament. The Townshend Revenue Act comprised two tax laws passed by Parliament in 1767; they were proposed by Charles Townshend, Chancellor of the Exchequer. They placed a tax on common products imported into the American Colonies, such as lead, paper, paint, glass, and tea. In contrast to the Stamp Act of 1765, the laws were not a direct tax that people paid daily, but a tax on imports that was collected from the ship's captain when he unloaded the cargo. The Townshend Acts also created three new admiralty courts to try Americans who ignored the laws.[2] Taxes were also placed on sugar, cloth, and coffee, which were non-British exports. The Tea Act of 1773 received the royal assent on May 10, 1773. This act was a "drawback on duties and tariffs" on tea, designed to undercut tea smugglers to the benefit of the East India Company. The Boston Tea Party was an act of protest by the American colonists against Great Britain for the Tea Act in which they dumped many chests of tea into Boston Harbor. The cuts to taxation on tea undermined American smugglers, who destroyed the tea in retaliation for its exemption from taxes. Britain reacted harshly, and the conflict escalated to war in 1775. A poll tax is an assessment levied by the government upon a person at a fixed rate regardless of income or worth. Tariffs have played different roles in trade policy and the economic history of the United States. Tariffs were the largest source of federal revenue from the 1790s to the eve of World War I, until they were surpassed by income taxes. Since the revenue from the tariff was considered essential and easy to collect at the major ports, it was agreed the nation should have a tariff for revenue purposes.[3][4] Another role the tariff played was the protection of local industry; this was the political dimension of the tariff.
From the 1790s to the present day, the tariff (and closely related issues such as import quotas and trade treaties) generated enormous political stresses. These stresses led to the Nullification Crisis during the 19th century and to the creation of the World Trade Organization. When Alexander Hamilton was the United States Secretary of the Treasury, he issued the Report on Manufactures, which reasoned that applying tariffs in moderation, in addition to raising revenue to fund the federal government, would also encourage domestic manufacturing and growth of the economy by applying the funds raised in part towards subsidies (called bounties in his time) to manufacturers. The main purposes sought by Hamilton through the tariff were to: (1) protect American infant industry for a short term until it could compete; (2) raise revenue to pay the expenses of government; (3) raise revenue to directly support manufacturing through bounties (subsidies).[5] This resulted in the passage of three tariffs by Congress, the Tariff of 1789, the Tariff of 1790, and the Tariff of 1792, which progressively increased tariffs. Tariffs contributed to sectionalism between the North and the South. The Tariff of 1824 increased tariffs in order to protect American industry in the face of cheaper imported commodities such as iron products, wool and cotton textiles, and agricultural goods from England. This tariff was the first in which the sectional interests of the North and the South truly came into conflict, because the South advocated lower tariffs in order to take advantage of tariff reciprocity from England and other countries that purchased raw agricultural materials from the South.[citation needed] The Tariff of 1828, also known as the Tariff of Abominations, and the Tariff of 1832 accelerated sectionalism between the North and the South. For a brief moment in 1832, South Carolina made vague threats to leave the Union over the tariff issue.[6] In 1833, to ease North-South relations, Congress lowered the tariffs.[6] In the 1850s, the South gained greater influence over tariff policy and made subsequent reductions.[7] In 1861, just prior to the Civil War, Congress enacted the Morrill Tariff, which applied high rates and inaugurated a period of relatively continuous trade protection in the United States that lasted until the Underwood Tariff of 1913. The schedule of the Morrill Tariff and its two successor bills were retained long after the end of the Civil War.[8] In 1921, Congress sought to protect local agriculture, as opposed to industry, by passing the Emergency Tariff, which increased rates on wheat, sugar, meat, wool, and other agricultural products brought into the United States from foreign nations, providing protection for domestic producers of those items. However, one year later Congress passed another tariff, the Fordney-McCumber Tariff, which applied the scientific tariff and the American Selling Price. The purpose of the scientific tariff was to equalize production costs among countries so that no country could undercut the prices charged by American companies.[9] The difference in production costs was calculated by the Tariff Commission. A second novelty was the American Selling Price.
This allowed the president to calculate the duty based on the American selling price of a good rather than the price of the imported good.[9] During the outbreak of the Great Depression in 1930, Congress raised tariffs via the Smoot-Hawley Tariff Act on over 20,000 imported goods to record levels, and, in the opinion of most economists, worsened the Great Depression by causing other countries to retaliate, thereby cutting American imports and exports by more than half.[citation needed] In 1948, the US signed the General Agreement on Tariffs and Trade (GATT), which reduced tariff barriers and other quantitative restrictions and subsidies on trade through a series of agreements. In 1993, the GATT was updated (GATT 1994) to include new obligations upon its signatories. One of the most significant changes was the creation of the World Trade Organization (WTO). Whereas GATT was a set of rules agreed upon by nations, the WTO is an institutional body. The WTO expanded its scope from traded goods to trade within the service sector and intellectual property rights. Although it was designed to serve multilateral agreements, during several rounds of GATT negotiations (particularly the Tokyo Round) plurilateral agreements created selective trading and caused fragmentation among members. WTO arrangements are generally a multilateral agreement settlement mechanism of GATT.[10] Federal excise taxes are applied to specific items such as motor fuels, tires, telephone usage, tobacco products, and alcoholic beverages. Excise taxes are often, but not always, allocated to special funds related to the object or activity taxed. During the presidency of George Washington, Alexander Hamilton proposed a tax on distilled spirits to fund his policy of assuming the war debt of the American Revolution for those states which had failed to pay. After a vigorous debate, the House decided by a vote of 35-21 to approve legislation imposing a seven-cent-per-gallon excise tax on whiskey. This marks the first time in American history that Congress voted to tax an American product; this led to the Whiskey Rebellion. The history of income taxation in the United States began in the 19th century with the imposition of income taxes to fund war efforts. However, the constitutionality of income taxation was widely held in doubt (see Pollock v. Farmers' Loan & Trust Co., 157 U.S. 429 (1895))[11] until 1913, with the ratification of the 16th Amendment. Article I, Section 8, Clause 1 of the United States Constitution assigns Congress the power to impose "Taxes, Duties, Imposts and Excises," but Article I, Section 8 requires that "Duties, Imposts and Excises shall be uniform throughout the United States."[12] In addition, the Constitution specifically limited Congress' ability to impose direct taxes, by requiring it to distribute direct taxes in proportion to each state's census population. It was thought that head taxes and property taxes (slaves could be taxed as either or both) were likely to be abused, and that they bore no relation to the activities in which the federal government had a legitimate interest. The fourth clause of section 9 therefore specifies that "No Capitation, or other direct, Tax shall be laid, unless in Proportion to the Census or enumeration herein before directed to be taken." Taxation was also the subject of Federalist No. 33, penned secretly by the Federalist Alexander Hamilton under the pseudonym Publius.
In it, he explains that the wording of the "Necessary and Proper" clause should serve as a guideline for legislation regarding taxation. The legislative branch is to be the judge, but any abuse of those powers of judging can be overturned by the people, whether as states or as a larger group. What seemed to be a straightforward limitation on the power of the legislature based on the subject of the tax proved inexact and unclear when applied to an income tax, which can arguably be viewed either as a direct or an indirect tax. The courts have generally held that direct taxes are limited to taxes on people (variously called "capitation", "poll tax" or "head tax") and property.[13] All other taxes are commonly referred to as "indirect taxes".[14] In order to help pay for its war effort in the American Civil War, Congress imposed its first personal income tax in 1861.[15] It was part of the Revenue Act of 1861 (3% of all incomes over US$800; rescinded in 1872). Congress also enacted the Revenue Act of 1862, which levied a 3% tax on incomes above $600, rising to 5% for incomes above $10,000. Rates were raised in 1864. This income tax was repealed in 1872. A new income tax statute was enacted as part of the 1894 Tariff Act.[16][17] At that time, the United States Constitution specified that Congress could impose a "direct" tax only if the law apportioned that tax among the states according to each state's census population.[18] In 1895, the United States Supreme Court ruled, in Pollock v. Farmers' Loan & Trust Co., that taxes on rents from real estate, on interest income from personal property, and on other income from personal property (which includes dividend income) were direct taxes on property and therefore had to be apportioned. Since apportionment of income taxes is impractical, the Pollock rulings had the effect of prohibiting a federal tax on income from property. Due to the political difficulties of taxing individual wages without taxing income from property, a federal income tax was impractical from the time of the Pollock decision until the time of ratification of the Sixteenth Amendment (below). In response to the Supreme Court decision in the Pollock case, Congress proposed the Sixteenth Amendment, which was ratified in 1913,[19] and which states: The Congress shall have power to lay and collect taxes on incomes, from whatever source derived, without apportionment among the several States, and without regard to any census or enumeration. The Supreme Court in Brushaber v. Union Pacific Railroad, 240 U.S. 1 (1916), indicated that the Sixteenth Amendment did not expand the federal government's existing power to tax income (meaning profit or gain from any source) but rather removed the possibility of classifying an income tax as a direct tax on the basis of the source of the income. The Amendment removed the need for the income tax on interest, dividends, and rents to be apportioned among the states on the basis of population. Income taxes are required, however, to abide by the law of geographical uniformity. Congress enacted an income tax in October 1913 as part of the Revenue Act of 1913, levying a 1% tax on net personal incomes above $3,000, with a 6% surtax on incomes above $500,000. By 1918, the top rate of the income tax was increased to 77% (on income over $1,000,000, equivalent to $15,300,000 in 2012 dollars[20]) to finance World War I.
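As a rough, purely illustrative sketch of how the 1913 structure described above worked, the following Python snippet applies the 1% normal tax to net income above the $3,000 exemption and treats the surtax as a flat 6% on income above $500,000; the actual act used several intermediate graduated surtax brackets, which are ignored here, and the income figure is hypothetical.

def tax_1913(net_income):
    """Simplified sketch of the 1913 income tax described above:
    a 1% normal tax on net income above the $3,000 exemption, plus a
    surtax treated here as a flat 6% on income above $500,000 (the
    act's intermediate surtax brackets are ignored)."""
    normal_tax = 0.01 * max(net_income - 3_000, 0)
    surtax = 0.06 * max(net_income - 500_000, 0)
    return normal_tax + surtax

# Hypothetical example: a $600,000 net income under the 1913 schedule.
income = 600_000
owed = tax_1913(income)
print(f"tax owed: ${owed:,.0f} ({owed / income:.1%} of income)")

Even this very large hypothetical income owes only about $11,970, roughly 2% of income, which is consistent with the low effective rates of the early income tax discussed in the surrounding text.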
The average rate paid by the rich, however, was only 15%.[21] The top marginal tax rate was reduced to 58% in 1922, to 25% in 1925, and finally to 24% in 1929. In 1932 the top marginal tax rate was increased to 63% during the Great Depression and steadily increased, reaching 94% (on all income over $200,000, equivalent to $2,500,000 in 2012 dollars[22]) in 1945. During World War II, Congress introduced payroll withholding and quarterly tax payments.[23] Following World War II tax increases, top marginal individual tax rates stayed near or above 90%, and the effective tax rate was around 70% for the highest incomes (few paid the top rate), until 1964, when the top marginal tax rate was lowered to 70%. Kennedy explicitly called for a top rate of 65 percent, but added that it should be set at 70 percent if certain deductions weren't phased out at the top of the income scale.[24][25][26] The top marginal tax rate was lowered to 50% in 1982 and eventually to 28% in 1988. It slowly increased to 39.6% in 2000, then was reduced to 35% for the period 2003 through 2012.[23] Corporate tax rates were lowered from 48% to 46% in 1981 (PL 97-34), then to 34% in 1986 (PL 99-514), and increased to 35% in 1993. Timothy Noah, senior editor of the New Republic, argues that while Ronald Reagan made massive reductions in the nominal marginal income tax rates with his Tax Reform Act of 1986, this reform did not make a similarly massive reduction in the effective tax rate on the higher marginal incomes. Noah writes in his ten-part series entitled "The Great Divergence" that in 1979 the effective tax rate on the top 0.01 percent of taxpayers was 42.9 percent, according to the Congressional Budget Office, but that by Reagan's last year in office it was 32.2%. This effective rate on high incomes held steady until the first few years of the Clinton presidency, when it increased to a peak of 41%. However, it fell back to the low 30s by his second term in the White House. This percentage reduction in the effective marginal income tax rate for the wealthiest Americans, 9%, is not a very large decrease in their tax burden, according to Noah, especially in comparison to the 20% drop in nominal rates from 1980 to 1981 and the 15% drop in nominal rates from 1986 to 1987. In addition to this small reduction in the income taxes of the wealthiest taxpayers in America, Noah discovered that the effective income tax burden for the bottom 20% of wage earners was 8% in 1979 and dropped to 6.4% under the Clinton Administration. This effective rate further dropped under the George W. Bush Administration, decreasing from 6.4% to 4.3%. Looking at the simple math, reductions in the effective income tax burden on the poor coinciding with modest reductions in the effective income tax rate on the wealthiest 0.01% of taxpayers could not alone have been the direct cause of the increased income inequality that began in the 1980s.[27] These figures also correspond to an analysis of effective tax rates from 1979–2005 by the Congressional Budget Office.[28] Congress re-adopted the income tax in 1913, levying a 1% tax on net personal incomes above $3,000, with a 6% surtax on incomes above $500,000. By 1918, the top rate of the income tax was increased to 77% (on income over $1,000,000) to finance World War I. The top marginal tax rate was reduced to 58% in 1922, to 25% in 1925, and finally to 24% in 1929. In 1932 the top marginal tax rate was increased to 63% during the Great Depression and steadily increased.
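The distinction underlying Noah's figures, between the top marginal rate and the effective rate actually paid, can be illustrated with a small Python sketch; the brackets below are purely hypothetical and do not correspond to any historical schedule.

def progressive_tax(income, brackets):
    """Tax an income under a progressive schedule given as
    (lower_threshold, marginal_rate) pairs sorted by threshold.
    Each rate applies only to the slice of income between its
    threshold and the next, which is why the effective rate stays
    well below the top marginal rate."""
    tax = 0.0
    for i, (threshold, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > threshold:
            tax += rate * (min(income, upper) - threshold)
    return tax

# Purely illustrative brackets -- not any historical schedule.
brackets = [(0, 0.10), (50_000, 0.25), (200_000, 0.70)]
income = 300_000
tax = progressive_tax(income, brackets)
print(f"top marginal rate: 70%, effective rate: {tax / income:.1%}")

For this hypothetical taxpayer the output shows an effective rate of 37.5% even though the top marginal rate reached is 70%, which is the same gap the effective-rate figures above describe.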
During World War II, Congress introduced payroll withholding and quarterly tax payments. In pursuit of equality (rather than revenue), President Franklin D. Roosevelt proposed a 100% tax on all incomes over $25,000.[30][31] When Congress did not enact that proposal, Roosevelt issued an executive order attempting to achieve a similar result through a salary cap on certain salaries in connection with contracts between the private sector and the federal government.[32][33][34] For tax years 1944 through 1951, the highest marginal tax rate for individuals was 91%, increasing to 92% for 1952 and 1953, and reverting to 91% for tax years 1954 through 1963.[35] For the 1964 tax year, the top marginal tax rate for individuals was lowered to 77%, and then to 70% for tax years 1965 through 1981. In 1978 income brackets were adjusted for inflation, so fewer people were taxed at high rates.[36] The top marginal tax rate was lowered to 50% for tax years 1982 through 1986.[37] Reagan undid 40% of his 1981 tax cut: in 1983 he raised gas and payroll taxes, and in 1984 he raised tax revenue by closing loopholes for businesses.[38] According to historian and domestic policy adviser Bruce Bartlett, Reagan's 12 tax increases over the course of his presidency took back half of the 1981 tax cut.[39] For tax year 1987, the highest marginal tax rate was 38.5% for individuals.[40] It was lowered to 28% in revenue-neutral fashion, eliminating many loopholes and shelters, along with changes in corporate taxes (with a 33% "bubble rate"), for tax years 1988 through 1990.[41][42] Ultimately, the combination of base broadening and rate reduction raised revenue equal to about 4% of existing tax revenue.[43] For the 1991 and 1992 tax years, the top marginal rate was increased to 31% in a budget deal President George H. W. Bush made with the Congress.[44] In 1993 the Clinton administration proposed, and the Congress accepted (with no Republican support), an increase in the top marginal rate to 39.6% for the 1993 tax year, where it remained through tax year 2000.[45] In 2001, President George W. Bush proposed, and the Congress accepted, an eventual lowering of the top marginal rate to 35%. However, this was done in stages: with a highest marginal rate of 39.1% for 2001, then 38.6% for 2002, and finally 35% for years 2003 through 2010.[46] This measure had a sunset provision and was scheduled to expire for the 2011 tax year, when rates would have returned to those adopted during the Clinton years unless Congress changed the law;[47] Congress did so by passing the Tax Relief, Unemployment Insurance Reauthorization and Job Creation Act of 2010, signed by President Barack Obama on December 17, 2010. At first, the income tax was incrementally expanded by the Congress of the United States; then inflation automatically raised most persons into tax brackets formerly reserved for the wealthy, until income tax brackets were adjusted for inflation. Income tax now applies to almost two-thirds of the population.[48] The lowest-earning workers, especially those with dependents, pay no income taxes as a group and actually get a small subsidy from the federal government because of child credits and the Earned Income Tax Credit.[citation needed] While the government was originally funded via tariffs upon imported goods, tariffs now represent only a minor portion of federal revenues.
Non-tax fees are generated to recompense agencies for services or to fill specific trust funds such as the fee placed upon airline tickets for airport expansion and air traffic control. Often the receipts intended to be placed in "trust" funds are used for other purposes, with the government posting an IOU ('I owe you') in the form of a federal bond or other accounting instrument, then spending the money on unrelated current expenditures. Net long-term capital gains as well as certain types of qualified dividend income are taxed preferentially. The federal government collects several specific taxes in addition to the general income tax. Social Security and Medicare are large social support programs which are funded by taxes on personal earned income (see below). Tax statutes passed after the ratification of the Sixteenth Amendment in 1913 are sometimes referred to as the "modern" tax statutes. Hundreds of Congressional acts have been passed since 1913, as well as several codifications (i.e., topical reorganizations) of the statutes (see Codification). The modern interpretation of the Sixteenth Amendment taxation power can be found in Commissioner v. Glenshaw Glass Co. 348 U.S. 426 (1955). In that case, a taxpayer had received an award of punitive damages from a competitor, and sought to avoid paying taxes on that award. The U.S. Supreme Court observed that Congress, in imposing the income tax, had defined income to include: gains, profits, and income derived from salaries, wages, or compensation for personal service . . . of whatever kind and in whatever form paid, or from professions, vocations, trades, businesses, commerce, or sales, or dealings in property, whether real or personal, growing out of the ownership or use of or interest in such property; also from interest, rent, dividends, securities, or the transaction of any business carried on for gain or profit, or gains or profits and income derived from any source whatever.[49] The Court held that "this language was used by Congress to exert in this field the full measure of its taxing power", id., and that "the Court has given a liberal construction to this broad phraseology in recognition of the intention of Congress to tax all gains except those specifically exempted."[50] The Court then enunciated what is now understood by Congress and the Courts to be the definition of taxable income, "instances of undeniable accessions to wealth, clearly realized, and over which the taxpayers have complete dominion." Id. at 431. The defendant in that case suggested that a 1954 rewording of the tax code had limited the income that could be taxed, a position which the Court rejected, stating: The definition of gross income has been simplified, but no effect upon its present broad scope was intended. Certainly punitive damages cannot reasonably be classified as gifts, nor do they come under any other exemption provision in the Code. We would do violence to the plain meaning of the statute and restrict a clear legislative attempt to bring the taxing power to bear upon all receipts constitutionally taxable were we to say that the payments in question here are not gross income.[51] In Conner v. United States,[52] a couple had lost their home to a fire, and had received compensation for their loss from the insurance company, partly in the form of hotel costs reimbursed. The U.S. 
District Court acknowledged the authority of the IRS to assess taxes on all forms of payment, but did not permit taxation on the compensation provided by the insurance company, because unlike a wage or a sale of goods at a profit, this was not a gain. As the court noted, "Congress has taxed income, not compensation".[53] By contrast, at least two federal courts of appeals have indicated that Congress may constitutionally tax an item as "income," regardless of whether that item is in fact income. See Penn Mutual Indemnity Co. v. Commissioner[54] and Murphy v. Internal Revenue Serv.[55] The estate and gift tax originated during the rise of the state inheritance tax in the late 19th century and the Progressive Era. In the 1880s and 1890s many states passed inheritance taxes, which taxed the donees on the receipt of their inheritance. While many objected to the application of an inheritance tax, some, including Andrew Carnegie and John D. Rockefeller, supported increases in the taxation of inheritance.[56] At the beginning of the 20th century, President Theodore Roosevelt advocated the application of a progressive inheritance tax at the federal level.[57] In 1916, Congress adopted the present federal estate tax, which, instead of taxing the wealth that a donee inherited (as occurred in the state inheritance taxes), taxed the wealth of a donor's estate upon transfer. Later, Congress passed the Revenue Act of 1924, which imposed the gift tax, a tax on gifts given by the donor. In 1948 Congress allowed marital deductions for the estate and the gift tax. In 1981, Congress expanded this deduction to an unlimited amount for gifts between spouses.[58] Today, the estate tax is a tax imposed on the transfer of the "taxable estate" of a deceased person, whether such property is transferred via a will or according to the state laws of intestacy. The estate tax is one part of the Unified Gift and Estate Tax system in the United States. The other part of the system, the gift tax, imposes a tax on transfers of property during a person's life; the gift tax prevents avoidance of the estate tax should a person want to give away his or her estate just before dying. In addition to the federal government, many states also impose an estate tax, with the state version called either an estate tax or an inheritance tax. Since the 1990s, the term "death tax" has been widely used by those who want to eliminate the estate tax, because the terminology used in discussing a political issue affects popular opinion.[59] If an asset is left to a spouse or a charitable organization, the tax usually does not apply. The tax is imposed on other transfers of property made as an incident of the death of the owner, such as a transfer of property from an intestate estate or trust, or the payment of certain life insurance benefits or financial account sums to beneficiaries. Prior to the Great Depression, the following economic problems were considered great hazards to working-class Americans: an insecure retirement, injury-induced disability, congenital disability, and the cost of health care in old age. In the 1930s, the New Deal introduced Social Security to rectify the first three problems (retirement, injury-induced disability, or congenital disability). It introduced the FICA tax as the means to pay for Social Security. In the 1960s, Medicare was introduced to rectify the fourth problem (health care for the elderly). The FICA tax was increased in order to pay for this expense. President Franklin D. Roosevelt introduced the Social Security (FICA) program.
FICA began with voluntary participation; participants would pay 1% of the first $1,400 of their annual incomes into the program; the money participants elected to put into the program would be deductible from their income for tax purposes each year; the money would go into an independent "Trust Fund" rather than into the general operating fund, and therefore would be used only to fund the Social Security retirement program and no other government program; and the annuity payments to retirees would never be taxed as income.[citation needed] During the Lyndon B. Johnson administration, Social Security moved from the trust fund to the general fund.[citation needed] Participants do not receive an income tax deduction for Social Security withholding.[citation needed] Immigrants became eligible for Social Security benefits during the Carter administration.[citation needed] During the Reagan administration, Social Security annuities became taxable.[60] The alternative minimum tax (AMT) was introduced by the Tax Reform Act of 1969,[61] and became operative in 1970. It was intended to target 155 high-income households that had been eligible for so many tax benefits that they owed little or no income tax under the tax code of the time.[62] In recent years, the AMT has come under increased attention. With the Tax Reform Act of 1986, the AMT was broadened and refocused on home owners in high-tax states. Because the AMT is not indexed to inflation, and because of recent tax cuts,[62][63] an increasing number of middle-income taxpayers have been finding themselves subject to this tax. In 2006, the IRS's National Taxpayer Advocate's report highlighted the AMT as the single most serious problem with the tax code. The advocate noted that the AMT punishes taxpayers for having children or living in a high-tax state, and that the complexity of the AMT leads most taxpayers who owe AMT not to realize it until preparing their returns or being notified by the IRS. Originally, the income tax did not distinguish capital gains from ordinary income. From 1913 to 1921, income from capital gains was taxed at ordinary rates, initially up to a maximum rate of 7 percent.[64] With the Revenue Act of 1921, Congress began to distinguish the taxation of capital gains from the taxation of ordinary income according to the holding period of the asset, allowing a tax rate of 12.5 percent on gains from assets held at least two years.[64] In addition to different tax rates depending on holding period, Congress began excluding certain percentages of capital gains depending on holding period. From 1934 to 1941, taxpayers could exclude percentages of gains that varied with the holding period: 20, 40, 60, and 70 percent of gains were excluded on assets held 1, 2, 5, and 10 years, respectively.[64] Beginning in 1942, taxpayers could exclude 50 percent of capital gains from income on assets held at least six months or elect a 25 percent alternative tax rate if their ordinary tax rate exceeded 50 percent.[64] Capital gains tax rates were significantly increased in the 1969 and 1976 Tax Reform Acts.[64] The 1970s and 1980s saw a period of oscillating capital gains tax rates. In 1978, Congress reduced capital gains tax rates by eliminating the minimum tax on excluded gains and increasing the exclusion to 60 percent, thereby reducing the maximum rate to 28 percent.[64] The 1981 tax rate reductions further reduced capital gains rates to a maximum of 20 percent.
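The 1934–1941 holding-period exclusions mentioned above amount to a simple lookup: the longer an asset was held, the larger the share of the gain excluded from taxable income. The Python sketch below encodes that schedule with hypothetical figures; it only illustrates the mechanism and is not tax guidance.

# Exclusion schedule described above for 1934-1941:
# share of the gain excluded from income, by minimum holding period (years).
EXCLUSIONS = [(10, 0.70), (5, 0.60), (2, 0.40), (1, 0.20), (0, 0.00)]

def taxable_gain(gain, years_held):
    """Return the portion of a capital gain included in income
    under the 1934-1941 holding-period exclusion schedule."""
    for min_years, excluded in EXCLUSIONS:
        if years_held >= min_years:
            return gain * (1.0 - excluded)
    return gain

# Hypothetical example: a $10,000 gain on an asset held 6 years.
print(taxable_gain(10_000, 6))  # 4000.0 -- 60% of the gain is excluded

As the text notes, this sliding schedule was replaced in 1942 by a single 50 percent exclusion for assets held at least six months.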
Later in the 1980s Congress began increasing the capital gains tax rate and repealing the exclusion of capital gains. The Tax Reform Act of 1986 repealed the exclusion from income that provided for tax-exemption of long-term capital gains, raising the maximum rate to 28 percent (33 percent for taxpayers subject to phaseouts).[64] When the top ordinary tax rates were increased by the 1990 and 1993 budget acts, an alternative tax rate of 28 percent was provided.[64] Effective tax rates exceeded 28 percent for many high-income taxpayers, however, because of interactions with other tax provisions.[64] The end of the 1990s and the beginning of the present century heralded major reductions in taxing the income from gains on capital assets. Lower rates for 18-month and five-year assets were adopted in 1997 with the Taxpayer Relief Act of 1997.[64] In 2001, President George W. Bush signed the Economic Growth and Tax Relief Reconciliation Act of 2001 into law as part of a $1.35 trillion tax cut program. The United States' corporate tax rate was at its highest, 52.8 percent, in 1968 and 1969. The top rate was last raised in 1993, to 35 percent.[65] Under the "Tax Cuts and Jobs Act" of 2017, the rate was reduced to 21 percent.
When did the giant kangaroo rat become endangered?
in the 1980s🚨The giant kangaroo rat (Dipodomys ingens) is an endangered species of heteromyid rodent endemic to California.[2] The giant kangaroo rat is the largest of over 20 species of kangaroo rats, which are small members of the rodent family, measuring about 15 cm (5.9 in) in length, including its long, tufted tail. It is tan or brown in color. Like other kangaroo rats it has a large head and large eyes, and long, strong hind legs which help it hop at high speeds. The giant kangaroo rat was added to the endangered species list because its habitat has been severely reduced. Data have been collected on its foraging behavior and social structure: in one study, traps baited with oats were set for four weeks in the summer, and the animals were captured, tagged with tracking devices, and set free. Results showed that significantly fewer males were captured, which could have been due to the time of year at which the experiment was conducted. Females were found to be more social. Studies also showed that the den is the area in which the animal spends the most time. The giant kangaroo rat lives on dry, sandy grasslands and digs burrows in loose soil. It lives in colonies, and the individuals communicate with each other by drumming their feet on the ground. These foot-thumping signals range from single, short thumps to long, drawn-out footrolls that can average over 100 drums at 18 drums per second. These audible signals serve as a warning of approaching danger, as a territorial communication, and as a way to communicate mating status. Kangaroo rats are primarily seed eaters, but also eat green plants and insects. Most giant kangaroo rats gather seeds when they are available and store them for consumption later. The seeds are put into small pits on the surface of the soil and scattered over the home range of the individual; each small pit holds only the contents of the two cheek pouches. In the spring and summer, individuals generally spend less than two hours of the night foraging above ground. They are very territorial and never leave their den for more than 15 minutes per day. The giant kangaroo rat also stores seeds in a larder for later eating. Females give birth to litters of 1 to 7 young, with an average of 3 per litter. The animal communicates with potential mates by sandbathing, in which it rubs its sides in the sand, leaving behind a scent to attract mates. Giant kangaroo rats live for only 2–4 years. This species was declared endangered on both the federal and California state levels in the 1980s. It inhabits less than 2% of its original range and can now be found only in isolated areas west of the San Joaquin Valley, including the Carrizo Plain, the Elkhorn Plain, and the Kettleman Hills. The giant kangaroo rat, like many other rodent species, lost much of its habitat as the Central Valley fell under agricultural use. Much information still needs to be obtained regarding its basic biology and compatibility with various land uses before clear directives can be made. Besides some projects currently underway in the Carrizo Plain National Monument, studies need to be conducted on populations whose range overlaps with private lands. Recovery of the giant kangaroo rat can be achieved when the three largest populations, in eastern Kern County, the Carrizo Plain Natural Area, and the Panoche Region, along with the populations in the Kettleman Hills, San Juan Creek Valley, and Cuyama Valley, are protected and managed appropriately.[3] The mating of the giant kangaroo rat is seasonal.
During the summer, male rats go out of their normal territories and mate with neighboring female rats. During the winter, the males stay in their original burrows. Endangered Dipodomys ingens populations have become more dispersed and less numerous over time. This can have major side effects on the genetic diversity of the species. D. ingens populations now cover only about 3% of the territory they historically occupied. Agricultural development has severely impacted the habitats of this rodent and restricted it to several small, isolated areas. Because of this, D. ingens is at risk of genetic drift and inbreeding within smaller populations. D. ingens lives in metapopulation structures because its habitats have been taken over by humans. The populations are divided into several small remnant groups that are unable to disperse over larger areas because of topographical limitations. This is a larger problem for northern subpopulations than for those in the south. D. ingens is believed to be polygynous (one male, multiple females), but a consistent ratio between male and female partners has not yet been found. One study showed that translocation was a successful method for increasing the diversity and population size of D. ingens.
How big is washington state in square miles?
71,362 square miles🚨
