🎉Q&A Life🥳
1 kilohertz is equal to how many hertz?
10³ Hz (1,000 Hz)🚨
The hertz (symbol: Hz) is the derived unit of frequency in the International System of Units (SI) and is defined as one cycle per second.[1] It is named for Heinrich Rudolf Hertz, the first person to provide conclusive proof of the existence of electromagnetic waves. Hertz are commonly expressed in multiples: kilohertz (10³ Hz, kHz), megahertz (10⁶ Hz, MHz), gigahertz (10⁹ Hz, GHz), terahertz (10¹² Hz, THz), petahertz (10¹⁵ Hz, PHz), and exahertz (10¹⁸ Hz, EHz).
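For readers who want to check the prefix arithmetic, here is a minimal Python sketch (not part of the original article; the function name and prefix table are illustrative) that converts a prefixed frequency value to plain hertz:

```python
# Illustrative sketch: converting SI-prefixed frequency values to plain
# hertz using the powers of ten listed above.
PREFIX_EXPONENT = {
    "Hz": 0, "kHz": 3, "MHz": 6, "GHz": 9,
    "THz": 12, "PHz": 15, "EHz": 18,
}

def to_hertz(value: float, unit: str) -> float:
    """Return the frequency in hertz for a value given in a prefixed unit."""
    return value * 10 ** PREFIX_EXPONENT[unit]

print(to_hertz(1, "kHz"))    # 1000.0  -> 1 kilohertz is 10^3 Hz
print(to_hertz(2.4, "GHz"))  # 2400000000.0
```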
Some of the unit's most common uses are in the description of sine waves and musical tones, particularly those used in radio- and audio-related applications. It is also used to describe the speeds at which computers and other electronics are driven.
The hertz is defined as one cycle per second. The International Committee for Weights and Measures defined the second as "the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom"[2][3] and then adds: "It follows that the hyperfine splitting in the ground state of the caesium 133 atom is exactly 9 192 631 770 hertz, ν(hfs Cs) = 9 192 631 770 Hz." The dimension of the unit hertz is 1/time (1/T). Expressed in base SI units it is 1/second (1/s).
In English, "hertz" is also used as the plural form.[4] As an SI unit, Hz can be prefixed; commonly used multiples are kHz (kilohertz, 103?Hz), MHz (megahertz, 106?Hz), GHz (gigahertz, 109?Hz) and THz (terahertz, 1012?Hz). One hertz simply means "one cycle per second" (typically that which is being counted is a complete cycle); 100?Hz means "one hundred cycles per second", and so on. The unit may be applied to any periodic eventfor example, a clock might be said to tick at 1?Hz, or a human heart might be said to beat at 1.2?Hz. The occurrence rate of aperiodic or stochastic events is expressed in reciprocal second or inverse second (1/s or s?1) in general or, in the specific case of radioactive decay, in becquerels.[5] Whereas 1?Hz is 1 cycle per second, 1 Bq is 1 aperiodic radionuclide event per second.
Even though angular velocity, angular frequency and the unit hertz all have the dimension 1/s, angular velocity and angular frequency are not expressed in hertz,[6] but rather in an appropriate angular unit such as radians per second. Thus a disc rotating at 60 revolutions per minute (rpm) is said to be rotating at either 2π rad/s or 1 Hz, where the former measures the angular velocity and the latter reflects the number of complete revolutions per second. The conversion between a frequency f measured in hertz and an angular velocity ω measured in radians per second is ω = 2πf, or equivalently f = ω/2π.
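As a worked illustration of that conversion (a sketch added here, not from the source; the helper names are arbitrary), the disc example above can be reproduced in a few lines of Python:

```python
import math

# Converting between rotational speed in rpm, frequency f in hertz,
# and angular velocity omega in radians per second, using omega = 2*pi*f.
def rpm_to_hz(rpm: float) -> float:
    return rpm / 60.0          # revolutions per second = hertz

def hz_to_rad_per_s(f: float) -> float:
    return 2 * math.pi * f     # omega = 2*pi*f

f = rpm_to_hz(60)              # 1.0 Hz, as in the rotating-disc example
omega = hz_to_rad_per_s(f)     # about 6.283 rad/s, i.e. 2*pi rad/s
print(f, omega)
```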
This SI unit is named after Heinrich Hertz. As with every International System of Units (SI) unit named for a person, the first letter of its symbol is upper case (Hz). However, when an SI unit is spelled out in English, it is treated as a common noun and should always begin with a lower case letter (hertz), except in a situation where any word in that position would be capitalized, such as at the beginning of a sentence or in material using title case.
The hertz is named after the German physicist Heinrich Hertz (1857–1894), who made important scientific contributions to the study of electromagnetism. The name was established by the International Electrotechnical Commission (IEC) in 1930.[7] It was adopted by the General Conference on Weights and Measures (CGPM) (Conférence générale des poids et mesures) in 1960, replacing the previous name for the unit, cycles per second (cps), along with its related multiples, primarily kilocycles per second (kc/s) and megacycles per second (Mc/s), and occasionally kilomegacycles per second (kMc/s). The term cycles per second was largely replaced by hertz by the 1970s. One hobby magazine, Electronics Illustrated, declared their intention to stick with the traditional kc., Mc., etc. units.[8]
Sound is a traveling longitudinal wave which is an oscillation of pressure. Humans perceive frequency of sound waves as pitch. Each musical note corresponds to a particular frequency which can be measured in hertz. An infant's ear is able to perceive frequencies ranging from 20 Hz to 20,000 Hz; the average adult human can hear sounds between 20 Hz and 16,000 Hz.[9] The range of ultrasound, infrasound and other physical vibrations such as molecular and atomic vibrations extends from a few femtohertz[10] into the terahertz range[11] and beyond.
Electromagnetic radiation is often described by its frequency, the number of oscillations of the perpendicular electric and magnetic fields per second, expressed in hertz.
Radio frequency radiation is usually measured in kilohertz (kHz), megahertz (MHz), or gigahertz (GHz). Light is electromagnetic radiation that is even higher in frequency, and has frequencies in the range of tens (infrared) to thousands (ultraviolet) of terahertz. Electromagnetic radiation with frequencies in the low terahertz range (intermediate between those of the highest normally usable radio frequencies and long-wave infrared light) is often called terahertz radiation. Even higher frequencies exist, such as that of gamma rays, which can be measured in exahertz (EHz). (For historical reasons, the frequencies of light and higher frequency electromagnetic radiation are more commonly specified in terms of their wavelengths or photon energies: for a more detailed treatment of this and the above frequency ranges, see electromagnetic spectrum.)
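To connect these frequency ranges with the wavelength and photon-energy descriptions mentioned in parentheses, a small illustrative Python sketch (not from the source; the constants are rounded and the example frequencies are assumptions) applies λ = c/f and E = hf:

```python
# Relating frequency to wavelength (lambda = c / f) and photon energy
# (E = h * f) for a few of the ranges mentioned above.
C = 2.998e8        # speed of light, m/s
H = 6.626e-34      # Planck's constant, J*s
EV = 1.602e-19     # joules per electronvolt

for label, f_hz in [("100 MHz radio", 100e6),
                    ("1 THz (terahertz radiation)", 1e12),
                    ("500 THz (visible light)", 500e12)]:
    wavelength = C / f_hz
    energy_ev = H * f_hz / EV
    print(f"{label}: wavelength {wavelength:.3e} m, photon energy {energy_ev:.3e} eV")
```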
In computers, most central processing units (CPU) are labeled in terms of their clock rate expressed in megahertz (10⁶ Hz) or gigahertz (10⁹ Hz). This specification refers to the frequency of the CPU's master clock signal. This signal is a square wave, which is an electrical voltage that switches between low and high logic values at regular intervals. As the hertz has become the primary unit of measurement accepted by the general populace to determine the performance of a CPU, many experts have criticized this approach, which they claim is an easily manipulable benchmark. Some processors use multiple clock periods to perform a single operation, while others can perform multiple operations in a single cycle.[12] For personal computers, CPU clock speeds have ranged from approximately 1 MHz in the late 1970s (Atari, Commodore, Apple computers) to up to 6 GHz in IBM POWER microprocessors.
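As an illustration of why clock rate alone is criticized as a benchmark, the following Python sketch (not from the source; the clock rates and instructions-per-cycle figures are hypothetical) shows that work per second depends on both the clock and how much each cycle accomplishes:

```python
# The clock period is 1/f, but throughput also depends on instructions
# executed per cycle (IPC). Figures below are hypothetical.
def clock_period_ns(freq_hz: float) -> float:
    return 1e9 / freq_hz

def instructions_per_second(freq_hz: float, ipc: float) -> float:
    return freq_hz * ipc

print(clock_period_ns(3.0e9))               # ~0.333 ns per cycle at 3 GHz
print(instructions_per_second(3.0e9, 0.5))  # a CPU needing 2 cycles per operation
print(instructions_per_second(2.5e9, 4.0))  # a slower clock, but 4 operations per cycle
```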
Various computer buses, such as the front-side bus connecting the CPU and northbridge, also operate at various frequencies in the megahertz range.
Higher frequencies than the International System of Units provides prefixes for are believed to occur naturally in the frequencies of the quantum-mechanical vibrations of high-energy, or, equivalently, massive particles, although these are not directly observable and must be inferred from their interactions with other phenomena. By convention, these are typically not expressed in hertz, but in terms of the equivalent quantum energy, which is proportional to the frequency by the factor of Planck's constant.
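As a rough illustration of that proportionality (an added sketch, not from the source), E = hf combined with E = mc² gives an equivalent frequency f = mc²/h; for an electron this already exceeds the largest SI prefix (exa-, 10¹⁸):

```python
# Equivalent quantum frequency of a massive particle, f = m*c^2 / h.
H = 6.626e-34           # Planck's constant, J*s
C = 2.998e8             # speed of light, m/s
M_ELECTRON = 9.109e-31  # electron rest mass, kg

f_electron = M_ELECTRON * C**2 / H
print(f"{f_electron:.3e} Hz")   # about 1.2e20 Hz, well above 1 EHz (1e18 Hz)
```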
How big is a 1:18 scale model?
1/18th the size🚨1:18 scale diecast replicas are 1/18th the size of the real vehicle. Most popular in this category are 1:18 scale automobile replicas, usually made out of zinc alloy (zamac) with plastic parts. "1:18 scale" is the colloquial reference to this class of toy or replica.
Virtually all 1:18 scale models produced in recent years have opening doors, hoods, and trunks along with functional steering wheels which turn the front wheels. Tires are often mounted on workable 'springy' suspension systems. Normally the hood / bonnet lifts to reveal a detailed and accurate engine bay (whether this is a separate cast piece or simply a portion of the cast and painted body located between the fenders).
Higher end models are equipped with genuine leather interiors, accurate engine detail, operational sunroofs, movable windshield wipers, adjustable seats, operational gear levers and other realistic accessories. Most models are approximately 11 inches (280 mm) long by 5 inches (130 mm) wide by 4 inches (100 mm) tall, depending on what vehicle is being represented. Such detail is common to 1:18 scales and larger. Typically, and according to local law, companies that produce model cars will have licensing arrangements with real car manufacturers to make replicas of their cars, both in current production or of discontinued models.
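As a quick check of those dimensions (an illustrative sketch, not from the source; the full-size car measurements are hypothetical), dividing a real vehicle's dimensions by 18 lands in the size range quoted above:

```python
# 1:18 means every dimension of the real vehicle is divided by 18.
SCALE = 18

def model_size_mm(real_size_m: float) -> float:
    return real_size_m * 1000 / SCALE

real_length_m = 4.8   # a hypothetical ~4.8 m full-size car
real_width_m = 1.9

print(round(model_size_mm(real_length_m)))  # ~267 mm, roughly 10.5 inches
print(round(model_size_mm(real_width_m)))   # ~106 mm, roughly 4.2 inches
```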
How 1:18 scale became a standard in diecast, especially during the 1990s, is somewhat of an open question, but some of the first 1:18 scale cars, made in tin, appeared in the United States and Japan after World War II. These, however, were not precise in detail or proportion, but became popular in the late 1950s and early 1960s. Before World War II, some vehicles had appeared in this size. Rather by chance, other manufacturers, like Marx in the 1960s and 1970s, also made large plastic toys in 1:18 scale. Plastic models in the United States, though, normally were produced in 1:25 scale.
The first zinc alloy metal cars in this scale (and also 1:24 scale) from European manufacturers appeared around 1970, made by the likes of the German Schuco Modell, Polistil, and Gama Toys. Pocher, the Italian kit maker, even manufactured kits in a large 1:8 scale. A review of models by Consumer Reports in 1979 discussed American plastic and European diecast metal models in 1:25 and 1:24 scales, but did not once mention 1:18 scale, showing that it had not yet come into marketing popularity (Consumer Reports 1979). European model makers like Schuco (which was later revived), Gama and Märklin went defunct, and the market for 1:18 scale grew steadily during the mid-1980s, mainly with the likes of Bburago, Polistil (both Italian companies mass-manufacturing models in Italy) and then, later, the Asian Maisto, which arguably became the principal manufacturer of this scale worldwide.
Throughout the 1990s, the number of different models in this scale increased exponentially as Chinese production cut manufacturing costs. Models could be sold for anywhere from $10.00 to $25.00 in the United States. By about 2000, it appeared that 1:18 scale had dominated other scales in marketing (except the diminutive and ever popular Hot Wheels), as nearly whole rows in Toys-R-Us could be seen packed with this scale of model. At this time, many new companies flooded the 1:18 scale market. Ertl and Revell sold a limited number of diecast cars, mostly older American models (although Revell Germany made a number of diecast German and American models). Ertl's main 1:18 scale line was called "American Muscle". Other manufacturers included Anson, Yat Ming, Sun Star, Mira, and UT Models. Often, cars featured in collectible car magazines (such as Collectible Automobile) were soon the subjects of 1:18 diecast articles.
During the early 2000s, the quality and accuracy of models improved dramatically, but prices went up and they were sold in more upscale stores, dealerships, and through on-line mail order. Around 2005, "premium" manufacturers including Automodelli, Highway 61, GMP, AUTOart, and Lane Exact Detail began to offer very high-quality, highly detailed models at higher prices. Today (2017), a trickle-down effect can be seen where many features now found in mainstream, low-priced diecasts were previously found only in models costing upwards of $100.00. Engine wiring and plumbing, carpeting in the interior or trunk, detailed instrument panels, seatbelts, and photo-etched details are now common even in models costing under $50.00.
Models are available from a number of retail merchants, such as Wal-Mart and KB Toys (in the United States), and a number of antique malls and centers offer models starting at around $20.00. Many, of course, are available through eBay, Etsy, or other bidding and diecast sales websites. This size remains popular with collectors. With the popularity of eBay and other hobby Web sites, fewer models of this size seem to be found in physical stores.
When did Christianity become the official religion of Rome?
380 AD🚨Nicene Christianity became the state church of the Roman Empire with the Edict of Thessalonica in 380 AD, when Emperor Theodosius I made it the Empire's sole authorized religion.[1][2] The Eastern Orthodox Church, Oriental Orthodoxy, and the Catholic Church each claim to be the historical continuation of this church in its original form, but do not identify with it in the caesaropapist form that it took later. Unlike Constantine I, who with the Edict of Milan of 313 AD had established tolerance for Christianity without placing it above other religions[3] and whose involvement in matters of the Christian faith extended to convoking councils of bishops who were to determine doctrine and to presiding at their meetings, but not to determining doctrine himself,[4] Theodosius established a single Christian doctrine (specified as that professed by Pope Damasus I of Rome and Pope Peter II of Alexandria) as the Empire's official religion.
Earlier in the 4th century, following the Diocletianic Persecution of 303–313 and the Donatist controversy that arose in consequence, Constantine had convened councils of Christian bishops to define the orthodoxy, or "correct teaching", of the Christian faith, expanding on earlier Christian councils. A series of ecumenical councils met during the 4th and 5th centuries, but Christianity continued to suffer rifts and schisms surrounding the issues of Arianism, Nestorianism, and Miaphysitism. In the 5th century the Western Empire decayed as a polity: invaders sacked Rome in 410 and in 455, and Odoacer, an Arian barbarian warlord, forced Romulus Augustus, the last nominal Western Emperor, to abdicate in 476. However, apart from the aforementioned schisms, the church as an institution persisted in communion, if not without tension, between the east and west. In the 6th century the Byzantine armies of the Eastern Roman Emperor Justinian I recovered Italy and other sections of the western Mediterranean shore. The Eastern Roman Empire soon lost most of these gains, but it held Rome, as part of the Exarchate of Ravenna, until 751, a period known in church history as the Byzantine Papacy. The Muslim conquests of the 7th century would begin a process of converting most of the then-Christian world in West Asia and North Africa to Islam, severely restricting the reach both of the Byzantine Empire and of its church. Missionary activity directed from Constantinople, the Byzantine capital, did not lead to a lasting expansion of the formal power of the Empire's state church, since areas outside the empire's political and military control set up their own distinct state churches, as in the case of Bulgaria in 919.
Justinian I, who became emperor in Constantinople in 527, established the bishops of Rome, Constantinople, Alexandria, Antioch, and Jerusalem, referred to as the Pentarchy, as the leadership of the Imperial church and gave each bishop the title of "Patriarch". However, Justinian saw these bishops as under his tutelage: according to his arrangement, "the Emperor was the head of the Church in the sense that he had the right and duty of regulating by his laws the minutest details of worship and discipline, and also of dictating the theological opinions to be held in the Church".[5][6] However, by Justinian's day, the churches that now form Oriental Orthodoxy had already seceded from the Imperial state church, while in the west Christianity was mostly subject to the laws and customs of nations that owed no allegiance to the Emperor in Constantinople.[7] While eastern-born popes who were appointed or at least confirmed by the Eastern Emperor continued to be loyal to him as their political lord, they refused to accept his authority in religious matters,[8] or the authority of such a council as the imperially convoked Council of Hieria of 754. Pope Gregory III (731–741) became the last Bishop of Rome to ask the Byzantine ruler to ratify his election.[9][10] By then, the Roman Empire's state church as originally conceived had ceased to exist.[11] In the East, only the largest fragment of the Christian church was under the Emperor's control, and with the crowning of Charlemagne on 25 December 800 AD as Imperator Romanorum by the latter's ally, Pope Leo III, the de facto political split between east and west became irrevocable. Spiritually, the Chalcedonian Church, as a communion broader than the imperial state church, persisted as a unified entity, at least in theory, until the Great Schism and its formal division with the mutual excommunication in 1054 of Rome and Constantinople. Where the Emperor's power remained, the state church developed in a caesaropapist form,[12] although as the Byzantine Empire lost most of its territory to Islam, increasingly the members of the church lived outside the Byzantine state. The Eastern Roman Empire finally collapsed with the Fall of Constantinople to the Islamic Ottoman Turks in 1453.
Western missionary activities created a communion of churches that extended beyond the empire, the beginnings of which predated the establishment of the state church. The obliteration of the Empire's boundaries by Germanic peoples and an outburst of missionary activity among these peoples, who had no direct links with the Eastern Roman Empire, and among Pictish and Celtic peoples who had never been part of the Roman Empire, fostered the idea of a universal church free from association with a particular state.[13] On the contrary, "in the East Roman or Byzantine view, when the Roman Empire became Christian, the perfect world order willed by God had been achieved: one universal empire was sovereign, and coterminous with it was the one universal church"; and the state church came, by the time of the demise of the Byzantine Empire in 1453, to merge psychologically with it to the extent that its bishops had difficulty in thinking of Christianity without an emperor.[14][15]
Modern authors refer to this state church in a variety of ways: as the catholic church, the orthodox church, the imperial church, the imperial Roman church, or the Byzantine church, although some of these terms are also used for wider communions extending outside the Roman Empire.[16]
The legacy of the idea of a universal church polity carries on, directly or indirectly, in today's Catholic Church and Eastern Orthodox Church, as well as in others, such as the Anglican Communion.
Before the end of the 1st century, the Roman authorities recognized Christianity as a separate religion from Judaism. The distinction, perhaps already made in practice at the time of the Great Fire of Rome in the year 64, was given official status by the emperor Nerva around the year 98 by granting Christians exemption from paying the Fiscus Iudaicus, the annual tax upon the Jews. Pliny the Younger, when propraetor in Bithynia in 103, assumes in his letters to Trajan that because Christians do not pay the tax, they are not Jews.[17][18][19]
Since paying taxes had been one of the ways that Jews demonstrated their goodwill and loyalty toward the Empire, Christians had to negotiate their own alternatives to participating in the imperial cult. Their refusal to worship the Roman gods or to pay homage to the emperor as divine resulted at times in persecution and martyrdom.[17][18][19] Church Father Tertullian, for instance, attempted to argue that Christianity was not inherently treasonous, and that Christians could offer their own form of prayer for the well-being of the emperor.[20]
Christianity spread especially in the eastern parts of the Empire and beyond its border; in the west it was at first relatively limited, but significant Christian communities emerged in Rome, Carthage, and other urban centers, becoming by the end of the 3rd century, the dominant faith in some of them. Christians accounted for approximately 10% of the Roman population by 300, according to some estimates.[21] According to Will Durant, the Christian Church prevailed over paganism because it offered a much more attractive doctrine and because the church leaders addressed human needs better than their rivals.[22]
In 301, the Kingdom of Armenia, which Rome considered de jure a client kingdom though it had de facto yielded to the Parthians (its ruling dynasty was of Parthian extraction),[23] became the first nation to adopt Christianity as its state church.
In 311, the dying Emperor Galerius ended the Diocletianic Persecution that he is reputed to have instigated, and in 313, Emperor Constantine issued the Edict of Milan, granting to Christians and others "the right of open and free observance of their worship".[28]
Constantine began to utilize Christian symbols such as the Chi-Rho early in his reign but still encouraged traditional Roman religious practices including sun worship. In 330, Constantine established the city of Constantinople as the new capital of the Roman Empire. The city would gradually come to be seen as the intellectual and cultural center of the Christian world.[29]
Over the course of the 4th century the Christian body became consumed by debates surrounding orthodoxy, i.e. which religious doctrines are the correct ones. In the early 4th century, a group in North Africa, later called Donatists, who believed in a very rigid interpretation of Christianity that excluded many who had abandoned the faith during the Diocletianic persecution, created a crisis in the western Empire.[30]
A church synod, or council, was held in Rome in 313, followed by another in Arles in 314, the latter presided over by Constantine while still a junior emperor (see Tetrarchy). These synods ruled that the Donatist faith was heresy and, when the Donatists refused to recant, Constantine launched the first campaign of persecution by Christians against Christians, and began imperial involvement in Christian theology. However, during the reign of Emperor Julian the Apostate, the Donatists, who formed the majority party in the Roman province of Africa for 30 years,[31] were given official approval.[32]
Christian scholars and populace within the Empire were increasingly embroiled in debates regarding christology (i.e., regarding the nature of the Christ). Opinions ranged from belief that Jesus was entirely human to belief that he was entirely divine. The most persistent debate was that between the homoousian, or Athanasian, view (the Father and the Son are one and the same, eternal), which was adopted at the council meeting that Constantine called at Nicaea in 325, and the homoiousian, or Arian, view (the Father and the Son are similar, but the Father is greater than the Son). Emperors thereby became ever more involved with the increasingly divided Church.[33]
Constantine was of divided opinions (even as to being Christian), but he largely backed the Athanasian side, though he was baptized on his deathbed by the Arian bishop Eusebius of Nicomedia. His successor Constantius II supported a Semi-Arian position. Emperor Julian returned to the traditional (pagan) Roman/Greek religion, a move quickly quashed by his successor Jovian, a supporter of the Athanasian side.
A Council of Rimini in 359 supported the Arian view. A Council of Constantinople in 360 supported a compromise (see Semi-Arianism). The Council of Constantinople in 381, called by Emperor Theodosius I reasserted the Nicene or Athanasian view and rejected the Arian view. This council further refined the definition of orthodoxy, issuing, according to tradition, the Niceno-Constantinopolitan Creed.
On 27 February of the previous year, Theodosius I established, with the Edict of Thessalonica, the Christianity of the First Council of Nicaea as the official state religion, reserving for its followers the title of Catholic Christians and declaring that those who did not follow the religion taught by Pope Damasus I of Rome and Pope Peter of Alexandria were to be called heretics:[34]
It is our desire that all the various nations which are subject to our Clemency and Moderation, should continue to profess that religion which was delivered to the Romans by the divine Apostle Peter, as it has been preserved by faithful tradition, and which is now professed by the Pontiff Damasus and by Peter, Bishop of Alexandria, a man of apostolic holiness. According to the apostolic teaching and the doctrine of the Gospel, let us believe in the one deity of the Father, the Son and the Holy Spirit, in equal majesty and in a holy Trinity. We authorize the followers of this law to assume the title of Catholic Christians; but as for the others, since, in our judgment they are foolish madmen, we decree that they shall be branded with the ignominious name of heretics, and shall not presume to give to their conventicles the name of churches. They will suffer in the first place the chastisement of the divine condemnation and in the second the punishment of our authority which in accordance with the will of Heaven we shall decide to inflict.
In 391, Theodosius closed all the "pagan" (non-Christian and non-Jewish) temples and formally forbade pagan worship.
At the end of the 4th century the Roman Empire had effectively split into two states although their economies (and the Church, which only then became a state church) were still strongly tied. The two halves of the Empire had always had cultural differences, exemplified in particular by the widespread use of the Greek language in the Eastern Empire and its more limited use in the West (Greek, as well as Latin, was used in the West, but Latin was the spoken vernacular).
By the time the state church of the Empire was established at the end of the 4th century, scholars in the West had largely abandoned Greek in favor of Latin. Even the Church in Rome, where Greek continued to be used in the liturgy longer than in the provinces, abandoned Greek.[35] Jerome's Vulgate had begun to replace the older Latin translations of the Bible.
The 5th century would see further fracturing of the state church of the Roman Empire. Emperor Theodosius II called two synods in Ephesus, one in 431 and one in 449, the first of which condemned the teachings of Patriarch of Constantinople Nestorius, while the second supported the teachings of Eutyches against Archbishop Flavian of Constantinople.[36]
Nestorius taught that Christ's divine and human nature were distinct persons, and hence Mary was the mother of Christ but not the mother of God. Eutyches taught on the contrary that there was in Christ only a single nature, different from that of human beings in general. The First Council of Ephesus rejected Nestorius' view, causing churches centered around the School of Edessa, a city at the edge of the empire, to break with the imperial church (see Nestorian schism).[36]
Persecuted within the Roman Empire, many Nestorians fled to Persia and joined the Sassanid Church (the future Church of the East). The Second Council of Ephesus upheld the view of Eutyches, but was overturned two years later by the Council of Chalcedon, called by Emperor Marcian. Rejection of the Council of Chalcedon led to the exodus from the state church of the majority of Christians in Egypt and many in the Levant, who preferred miaphysite theology.[36]
Thus, in addition to losing all the western empire, the state church suffered a significant diminishment even in the east within a century of its setting up. Those who upheld the Council of Chalcedon became known in Syriac as Melkites, the imperial church, followers of the emperor (in Syriac, malka).[37] This schism resulted in an independent communion of churches, including the Egyptian, Syrian, Ethiopian and Armenian churches, that is today known as Oriental Orthodoxy.[38] In spite of these schisms, however, the imperial church still represented the majority of Christians within the by now already diminished Roman Empire.[39]
In the 5th century, the Western Empire rapidly decayed and by the end of the century was no more. Within a few decades, Germanic tribes, particularly the Goths and Vandals, conquered the western provinces. Rome was sacked in 410 and 455, and was to be sacked again in the following century in 546.[27]
By 476 the Germanic chieftain Odoacer had conquered Italy and deposed the last western emperor, Romulus Augustus, though he nominally submitted to the authority of Constantinople. The Arian Germanic tribes established their own systems of churches and bishops in the western provinces but were generally tolerant of the population who chose to remain in communion with the imperial church.[27]
In 533 Roman Emperor Justinian in Constantinople launched a military campaign to reclaim the western provinces from the Arian Germans, starting with North Africa and proceeding to Italy. His success in recapturing much of the western Mediterranean was temporary. The empire soon lost most of these gains, but held Rome, as part of the Exarchate of Ravenna, until 751.
Justinian definitively established Caesaropapism,[11] believing "he had the right and duty of regulating by his laws the minutest details of worship and discipline, and also of dictating the theological opinions to be held in the Church".[5] According to the entry in Liddell & Scott, the term orthodox first occurs in the Codex Justinianus: "We direct that all Catholic churches, throughout the entire world, shall be placed under the control of the orthodox bishops who have embraced the Nicene Creed."[40]
By the end of the 6th century the Church within the Empire had become firmly tied with the imperial government,[41] while in the west Christianity was mostly subject to the laws and customs of nations that owed no allegiance to the emperor.[7]
Emperor Justinian I assigned to five sees, those of Rome, Constantinople, Alexandria, Antioch and Jerusalem, a superior ecclesial authority that covered the whole of his empire. The First Council of Nicaea in 325 reaffirmed that the bishop of a provincial capital, the metropolitan bishop, had a certain authority over the bishops of the province.[42] But it also recognized the existing supra-metropolitan authority of the sees of Rome, Alexandria and Antioch,[43] and granted special recognition to Jerusalem.[44][45][46]
Constantinople was added at the First Council of Constantinople (381)[47] and given authority initially only over Thrace. By a canon of contested validity,[48] the Council of Chalcedon (451) placed Asia and Pontus,[49] which together made up Anatolia, under Constantinople, although their autonomy had been recognized at the council of 381.[50][51]
Rome never recognized this pentarchy of five sees as constituting the leadership of the state church. It maintained that, in accordance with the First Council of Nicaea, only the three "Petrine" sees of Rome, Alexandria and Antioch had a real patriarchal function.[52] The canons of the Quinisext Council of 692, which gave ecclesiastical sanction to Justinian's decree, were also never fully accepted by the Western Church.[53]
Muslim conquests of the territories of the patriarchates of Alexandria, Antioch and Jerusalem, most of whose Christians were in any case lost to the imperial state church since the aftermath of the Council of Chalcedon, left in effect only two patriarchates, those of Rome and Constantinople.[54] Then in 740, Emperor Leo the Isaurian reacted to papal resistance to his iconoclast policy by transferring from the jurisdiction of Rome to that of Constantinople all but a minute portion of the then existing empire.[55]
The Patriarch of Constantinople had already adopted the title of "ecumenical patriarch", indicating what he saw as his position in the oikoumene, the Christian world ideally headed by the emperor and the patriarch of the emperor's capital.[56][57] Also under the influence of the imperial model of governance of the state church, in which "the emperor becomes the actual executive organ of the universal Church",[58] the pentarchy model of governance of the state church regressed to a monarchy of the Patriarch of Constantinople.[58][59]
The Rashidun conquests began to expand the sway of Islam beyond Arabia in the 7th century, first clashing with the Roman Empire in 634.
That empire and the Sassanid Persian Empire were at that time crippled by decades of war between them.
By the late 8th century the Umayyad caliphate had conquered all of Persia and much of the Byzantine territory including Egypt, Palestine, and Syria.
Suddenly much of the Christian world was under Muslim rule. Over the coming centuries the successive Muslim states became some of the most powerful in the Mediterranean world.
Though the state church of the Roman Empire claimed religious authority over Christians in Egypt and the Levant, in reality the majority of Christians in these regions were by then miaphysites and members of other sects that had long been persecuted by Constantinople. The new Muslim rulers, in contrast, offered religious tolerance to Christians of all sects. Additionally subjects of the Muslim Empire could be accepted as Muslims simply by declaring a belief in a single deity and reverence for Muhammad (see shahada). As a result, the peoples of Egypt, Palestine and Syria largely accepted their new rulers and many declared themselves Muslims within a few generations. Muslim incursions later found success in parts of Europe, particularly Spain (see Al-Andalus).[60]
During the 9th century, the Emperor in Constantinople encouraged missionary expeditions to nearby nations including the Muslim caliphate and the Turkic Khazars. In 862 he sent Saints Cyril and Methodius to Slavic Great Moravia. By then most of the Slavic population of Bulgaria was Christian and Tsar Boris I himself was baptized in 864. Serbia was accounted Christian by about 870.[61] In early 867 Patriarch Photios I of Constantinople wrote that Christianity was accepted by the Kievan Rus', which however was definitively Christianized only at the close of the following century.
Of these, the Church in Great Moravia chose immediately to link with Rome, not Constantinople: the missionaries sent there sided with the Pope during the Photian Schism (863–867).[62] After decisive victories over the Byzantines at Acheloos and Katasyrtai, Bulgaria declared its Church autocephalous and elevated it to the rank of Patriarchate, an autonomy recognized in 927 by Constantinople,[63][64] but abolished by Emperor Basil II Bulgaroktonos (the Bulgar-Slayer) after his 1018 conquest of Bulgaria.
In Serbia, which became an independent kingdom in the early 13th century, Stephen Uroš IV Dušan, after conquering a large part of Byzantine territory in Europe and assuming the title of Tsar, raised the Serbian archbishop to the rank of patriarch in 1346, a rank maintained until after the fall of the Byzantine Empire to the Turks. No Byzantine emperor ever ruled Russian Christianity.
Expansion of the Church in western and northern Europe began much earlier, with the conversion of the Irish in the 5th century, the Franks at the end of the same century, the Arian Visigoths in Spain soon afterwards, and the English at the end of the 6th century. By the time the Byzantine missions to central and eastern Europe began, Christian western Europe, in spite of losing most of Spain to Islam, encompassed Germany and part of Scandinavia, and, apart from the south of Italy, was independent of the Byzantine Empire and had been almost entirely so for centuries.
This situation fostered the idea of a universal church linked to no one particular state[13] and of which the state church of the Roman Empire was only part. Long before the Byzantine Empire came to an end, Poland, Hungary and other central European peoples were also part of a Church that in no way saw itself as the empire's state church and that, with the East-West Schism, had even ceased to be in communion with it.
With the defeat and death in 751 of the last Exarch of Ravenna and the end of the Exarchate, Rome ceased to be part of the Byzantine Empire. Forced to seek protection elsewhere,[65] the Popes turned to the Franks and, with the coronation of Charlemagne by Pope Leo III on 25 December 800, transferred their political allegiance to a rival Roman Emperor. More clearly than before, the church in the west, while remaining in communion with the state church of the Byzantine Empire, was not part of it. Disputes between the see of Rome, which claimed authority over all other sees, and that of Constantinople, which was now without rival in the empire, culminated perhaps inevitably[66] in mutual excommunications in 1054.
Communion with Constantinople was broken off by European Christians with the exception of those ruled by the empire (including the Bulgarians and Serbs) and of the fledgling Kievan or Russian Church, then a metropolitanate of the patriarchate of Constantinople. This church became independent only in 1448, just five years before the extinction of the empire,[67] after which the Turkish authorities included all their Orthodox Christian subjects of whatever ethnicity in a single millet headed by the Patriarch of Constantinople.
The Westerners who set up Crusader states in Greece and the Middle East appointed Latin (Western) patriarchs and other hierarchs, thus giving concrete reality and permanence to the schism.[68][69][70] Efforts were made in 1274 (Second Council of Lyon) and 1439 (Council of Florence) to restore communion between East and West, but the agreements reached by the participating eastern delegations and by the Emperor were rejected by the vast majority of Byzantine Christians.
In the East, the idea that the Byzantine emperor was the head of Christians everywhere persisted among churchmen as long as the empire existed, even when its actual territory was reduced to very little. In 1393, only 60 years before the fall of the capital, Patriarch Antony IV of Constantinople wrote to Basil I of Muscovy defending the liturgical commemoration in Russian churches of the Byzantine emperor on the grounds that he was "emperor (βασιλεύς) and autokrator of the Romans, that is of all Christians".[71] According to Patriarch Antony, "it is not possible among Christians to have a Church and not to have an emperor. For the empire and the Church have great unity and commonality, and it is not possible to separate them",[72][73][74] and "the holy emperor is not like the rulers and governors of other regions".[74][75]
Following the schism between the Eastern and Western Churches, various emperors sought at times but without success to reunite the Church, invoking the notion of Christian unity between East and West in an attempt to obtain assistance from the Pope and Western Europe against the Muslims who were gradually conquering the empire's territory. But the period of the Western Crusades against the Muslims had passed before even the first of the two reunion councils was held.
Even when persecuted by the emperor, the Eastern Church, George Pachymeres said, "counted the days until they should be rid not of their emperor (for they could no more live without an emperor than a body without a heart), but of their current misfortunes".[76] The state church had come to merge psychologically in the minds of the Eastern bishops with the empire to such an extent that they had difficulty in thinking of Christianity without an emperor.[14]
In Western Europe, on the other hand, the idea of a universal church linked to the Emperor of Constantinople was replaced by that in which the Roman see was supreme.[77] "Membership in a universal church replaced citizenship in a universal empire. Across Europe, from Italy to Ireland, a new society centered on Christianity was forming."[78]
The Western Church came to emphasize the term Catholic in its identity, an assertion of universality, while the Eastern Church came to emphasize the term Orthodox in its identity, an assertion of holding to the true teachings of Jesus. Both churches claim to be the unique continuation of the previously united Chalcedonian Church, whose core doctrinal formulations have been retained also by many of the churches that emerged from the Protestant Reformation, including Lutheranism and Anglicanism.
When did salt and pepper become a pair?
seventeenth-century🚨Salt and pepper is the common name for edible salt and black pepper, which are traditionally paired on dining tables where European-style food is eaten so as to allow for the seasoning of food. They may be considered condiments or seasonings; salt is a mineral and black pepper is a spice.
The pairing of salt and pepper as table accessories dates to seventeenth-century French cuisine, which considered pepper (distinct from herbs such as fines herbes) the only spice that did not overpower the true taste of food.[1] They are typically found in a set of salt and pepper shakers, often a matched set. Salt and pepper are typically maintained in separate shakers on the table, but may be mixed in the kitchen. Some food writers, like Sara Dickerman, have argued that in modern cookery, a new spice could be used in place of black pepper.
In Hungary, paprika may replace pepper on the dinner table.
Who was president when World War 2 ended?
Harry S. Truman🚨
What was the first call of duty game ever made?
Call of Duty🚨Call of Duty is a first-person shooter video game franchise. The series began on Microsoft Windows, and later expanded to consoles and handhelds. Several spin-off games have been released. The earlier games in the series are set primarily in World War II, but later games like Call of Duty 4: Modern Warfare are set in modern times or in futuristic settings. The most recent game, Call of Duty: WWII, was released on November 3, 2017. An upcoming title, Call of Duty: Black Ops 4, is in development and to be released on October 12, 2018.
The Call of Duty games are published and owned by Activision. While Infinity Ward is still a developer, Treyarch and Sledgehammer Games also develop several of the titles with the release of the studios' games alternating with each other. Some games have been developed by Gray Matter Interactive, Nokia, Exakt Entertainment, Spark Unlimited, Amaze Entertainment, n-Space, Aspyr, Rebellion Developments, Ideaworks Game Studio, and nStigate Games. The games use a variety of engines, including the id Tech 3, the Treyarch NGL, and the IW engine.
As of February 2016, the Call of Duty series has sold over 250 million copies.[1] Sales of all Call of Duty games topped US$15 billion.[1]
Other products in the franchise include a line of action figures designed by Plan-B Toys, a card game created by Upper Deck Company, Mega bloks sets by Mega Brands, and a comic book mini-series published by WildStorm Productions.
Call of Duty is a video game based on the Quake III Arena engine (id Tech 3), and was released on October 29, 2003. The game was developed by Infinity Ward and published by Activision. The game simulates the infantry and combined arms warfare of World War II.[2] An expansion pack, Call of Duty: United Offensive, was developed by Gray Matter Interactive with contributions from Pi Studios and produced by Activision. The game follows American and British paratroopers and the Red Army. The Mac OS X version of the game was ported by Aspyr Media. In late 2004, the N-Gage version was developed by Nokia and published by Activision. Other versions were released for PC, including Collector's Edition (with soundtrack and strategy guide), Game of the Year Edition (includes game updates), and the Deluxe Edition (which contains the United Offensive expansion and soundtrack; in Europe the soundtrack was not included). On September 22, 2006, Call of Duty, United Offensive, and Call of Duty 2 were released together as Call of Duty: War Chest for PC.[3] Since November 12, 2007, Call of Duty games have been available for purchase via Valve's content delivery platform Steam.[4]
Call of Duty 2 is a first-person shooter video game and the sequel to Call of Duty. It was developed by Infinity Ward and published by Activision. The game is set during World War II and is experienced through the perspectives of soldiers in the Red Army, British Army, and United States Army. It was released on October 25, 2005 for Microsoft Windows, November 15, 2005 for the Xbox 360, and June 13, 2006, for Mac OS X. Other versions were made for mobile phones, Pocket PCs, and smartphones.
Call of Duty 3 is a World War II first-person shooter and the third installment in the Call of Duty video game series. Released on November 7, 2006, the game was developed by Treyarch, and was the first major installment in the Call of Duty series not to be developed by Infinity Ward. It was also the first not to be released on the PC platform. It was released on the PlayStation 2, PlayStation 3, Wii, Xbox, and Xbox 360.[5] Call of Duty 3 follows the American, Canadian, British, and Polish armies as well as the French Resistance after D-Day in the Falaise Gap.
Call of Duty: WWII is the fourteenth game in the series and was developed by Sledgehammer Games.[6] It was released worldwide on November 3, 2017, for Microsoft Windows, PlayStation 4 and Xbox One.[7] The game is set in the European theatre, and is centered around a squad in the 1st Infantry Division, following their battles on the Western Front, and set mainly in the historical events of Operation Overlord.
Call of Duty 4: Modern Warfare is the fourth installment of the main series, and was developed by Infinity Ward. It is the first game in the series not to be set during World War II, but set in the modern day. It is the first to receive a Mature rating from the ESRB, except for the Nintendo DS version, which was rated Teen. The game was released for Microsoft Windows, Nintendo DS, PlayStation 3, and Xbox 360 on November 7, 2007. Download and retail versions for Mac OS X were released by Aspyr in September 2008. As of May 2009, Call of Duty 4: Modern Warfare has sold over 13 million copies.[8]
Call of Duty: Modern Warfare Remastered is a remastered version of Call of Duty 4: Modern Warfare that was released alongside the Legacy Edition, Legacy Pro Edition and Digital Deluxe Edition of Call of Duty: Infinite Warfare on November 4, 2016, for PS4, Xbox One, and PC.[9] It was later released standalone on June 27, 2017, for PS4, and July 27, 2017, for Xbox One and PC.[10] The game was developed by Raven Software and executive produced by Infinity Ward.[11][12]
Call of Duty: Modern Warfare 2 is the sixth installment of the main series. It was developed by Infinity Ward and published by Activision.[13][14] Activision Blizzard officially announced Modern Warfare 2 on February 11, 2009.[15][16] The game was released worldwide on November 10, 2009, for the Xbox 360, PlayStation 3 and Microsoft Windows.[13] A Nintendo DS iteration of the game, titled Call of Duty: Modern Warfare: Mobilized, was released alongside the game and the Wii port of Call of Duty: Modern Warfare.[17][18] Modern Warfare 2 is the direct sequel to Call of Duty 4 and continues the same storyline, taking place five years after the first game and featuring several returning characters including Captain Price and "Soap" MacTavish.[19]
Call of Duty: Modern Warfare 3 is a first-person shooter video game. It is the eighth installment of the Call of Duty series and the third installment of the Modern Warfare arc. Due to a legal dispute between the game's publisher Activision and the former co-executives of Infinity Ward, which caused several lay-offs and departures within the company,[20] Sledgehammer Games assisted in the development of the game, while Raven Software was brought in to make cosmetic changes to the menus of the game.[21] The game was said to have been in development since only two weeks after the release of Call of Duty: Modern Warfare 2.[21] Sledgehammer was aiming for a "bug free" first outing in the Call of Duty franchise, and had also set a goal for Metacritic review scores above 95 percent.[22]
The game continues the story from the point at which it ended in the Call of Duty: Modern Warfare 2 and continues the fictional battle story between the United States and Russia, which evolves into the Third World War between NATO allied nations and Ultra-nationalist Russia (a revolutionary political party idolizing the late days of the Soviet Union).
Call of Duty: World at War, developed by Treyarch, is the fifth installment of the main series. Released after Modern Warfare, it returns to the World War II setting of earlier titles[23], featuring the Pacific theater and Eastern front. The game uses the same proprietary game engine as Call of Duty 4 and was released for the PC, PlayStation 3, Wii, Xbox 360 consoles and the Nintendo DS handheld in North America on November 11, 2008, and November 14, 2008 in Europe. As of June 2009, Call of Duty: World at War has sold over 11 million copies.[24] It acts as a prologue for Treyarch's next game, Black Ops, which is in the same universe, sharing characters and story references.[25][26]
Call of Duty: Black Ops is the seventh installment in the series,[27][28][29] the third developed by Treyarch, and was published by Activision for release on November 9, 2010. It is the first game in the series to take place during the Cold War and also takes place partially in the Vietnam War. It was initially available for Microsoft Windows, Xbox 360, and PlayStation 3 and was later released for the Wii as well as the Nintendo DS.[30]
Call of Duty: Black Ops II is the ninth main installment in the series, developed by Treyarch and published by Activision. The game was first revealed on May 1, 2012.[31][32] It was the first game in the series to feature future warfare technology, and the campaign features multiple branching storylines driven by player choice and multiple endings. It was later released on November 12, 2012.
Call of Duty: Black Ops III is the twelfth main installment in the series, developed by Treyarch and published by Activision. The game was released on November 6, 2015.
Call of Duty: Black Ops 4 is to be the fifteenth main installment in the series. It is being developed by Treyarch and published by Activision. The game will release on October 12, 2018.[33]
Call of Duty: Ghosts is the tenth main installment in the series, and was developed by Infinity Ward with Neversoft and Raven Software. The game was released on November 5, 2013.[34][35]
Call of Duty: Advanced Warfare is the eleventh main installment in the series, developed by Sledgehammer Games with assistance from Raven Software and High Moon Studios. It was released in November 2014.[36]
Call of Duty: Infinite Warfare is the thirteenth main installment in the series, developed by Infinity Ward, and was published by Activision. The game was released on November 4, 2016.[37]
In 2006, Treyarch launched Call of Duty 3, their first official Call of Duty game to the main series. Treyarch and Infinity Ward signed a contract stating that the producer of each upcoming title in the series would alternate between the two companies. In 2010, Sledgehammer Games announced they were working on a main series title for the franchise. This game was postponed in order to help Infinity Ward produce Modern Warfare 3. In 2014, it was confirmed that Sledgehammer Games would produce the 2014 title, Call of Duty: Advanced Warfare, and the studios would begin a three-year rotation.[38][39] Advanced Warfare was followed by Treyarch's Call of Duty: Black Ops III in 2015, Infinity Ward's Call of Duty: Infinite Warfare in 2016, and Sledgehammer Games's Call of Duty: WWII in 2017.
Call of Duty: Finest Hour is the first console installment of Call of Duty, and was released on the GameCube, PlayStation 2, and Xbox. The PlayStation 2 and Xbox versions of the game include an online multiplayer mode which supports up to 32 players. It also includes new game modes.
Call of Duty 2: Big Red One is a spin-off of Call of Duty 2 developed by Treyarch, and based on the American 1st Infantry Division's exploits during World War II. The game was released on GameCube, PlayStation 2, and Xbox.
Call of Duty: World at War – Final Fronts is the PlayStation 2 adaptation of Call of Duty: World at War. Developed by Rebellion Developments, Final Fronts features three campaigns involving the U.S. fighting in the Pacific theater, the Battle of the Bulge, and the British advancing on the Rhine River into Germany.
Call of Duty: The War Collection is a boxed set compilation of Call of Duty 2, Call of Duty 3 and Call of Duty: World at War. It was released for the Xbox 360 on June 1, 2010.[40]
Call of Duty: Roads to Victory is a PSP game which is a portable spin-off of Call of Duty 3.[41]
Call of Duty: Modern Warfare: Mobilized is the Nintendo DS companion game for Modern Warfare 2. Developed by n-Space, the game takes place in the same setting as the main console game, but follows a different storyline and cast of characters. Playing as the S.A.S. and the Marines in campaign mode, both forces are trying to find a nuclear bomb.
Call of Duty: Black Ops is the Nintendo DS companion game for Black Ops. Developed by n-Space, the game takes place in the same setting as the main console game, but follows a different storyline and cast of characters.
Call of Duty: Zombies is a first-person shooter video game developed by Ideaworks Game Studio, and published by Activision for iOS. It is a spin-off of the Call of Duty series, and based on the "Nazi Zombies" mode of Call of Duty: World at War. A sequel for the iPhone and iPod Touch includes Shi No Numa that was originally released on the Xbox 360, PS3, and PC.
Call of Duty: Black Ops: Declassified is a PlayStation Vita Call of Duty game.[42]
Call of Duty: Strike Team is a first and third-person shooter game developed by The Blast Furnace, and published by Activision for iOS. The game is set in 2020 with players tasked with leading a U.S. Joint Special Operations Team after the country "finds themselves in a war with an unknown enemy". The game was released on September 5, 2013.
Call of Duty: Heroes is a real-time strategy game developed by Faceroll Games, and published by Activision for Android and iOS. The game bears a resemblance to Clash of Clans and was released on November 26, 2014.
Call of Duty Online was announced by Activision when the company first stated its interest in a massively multiplayer online game (MMO) in early 2011. By then, it had been in development for two years. Call of Duty Online is free-to-play for mainland China and is hosted by Tencent. Since Activision had lost the publishing rights to Call of Duty and several other franchises in China on most of the Western gaming consoles (Xbox 360, PlayStation 3, and Wii) due to a legal dispute, it had been rumored that it would be Microsoft Windows-exclusive, since PCs hold the dominant share of gamers in mainland China.
Call of Duty: Combined Forces was a proposed concept draft originally intended to be a sequel to Call of Duty: Finest Hour. However, due to multiple legal issues that arose between Spark Unlimited, Electronic Arts, and Activision as well as other production problems, the game's draft and scripts never came to be. The game was projected to cost $10.5 million to produce after Finest Hour was complete. Eventually Activision deemed the pitch as more of an expansion than something entirely new, causing the company to reject the proposal and end their contract with Spark Unlimited shortly after.[43]
Call of Duty: Devil's Brigade was a canceled first-person shooter for the Xbox 360 developed by Underground Entertainment. The game was set in World War II, mainly focusing on the Italian Campaign.[44]
Call of Duty: Vietnam was a third-person shooter set during the Vietnam War. It was in development for at least six to eight months at Sledgehammer Games. The development was stopped because Infinity Ward needed help finishing Call of Duty: Modern Warfare 3 due to the employee firings and departures in 2010.[45][46]
Call of Duty: Roman Wars was a canceled third and first-person video game in the Call of Duty franchise. The game was set in ancient Rome, and allowed players to take control of famous historical figure Julius Caesar, along with "low grunts", and officers of the Tenth Legion. It was eventually canceled, as Activision had uncertainties about branding it as a Call of Duty title.[47]
Modern Warfare 2: Ghost is a six-part comic book mini-series based on Call of Duty: Modern Warfare 2. The storyline focuses on the backstory of the character Ghost. The series is published by WildStorm and the first issue was released on November 10, 2009, alongside the game.[48]
Call of Duty: Zombies[49] is a six-part comic book series published by Dark Horse Comics. The series ties in with the Zombies game mode of the Black Ops subseries developed by Treyarch. The series is co-written by Justin Jordan, Treyarch's Jason Blundell and Craig Houston. The series is illustrated by artist Jonathan Wayshak and colorist Dan Jackson. The cover art is handled by artist Simon Bisley. The series was first announced by Treyarch in July 2016, with the first issue slated for release in October. After a slight delay, the first issue was released on October 26, 2016. The remaining five issues were released over the course of 2017: issue #2 on January 11, issue #3 on March 1, issue #4 on April 19, issue #5 on June 21, and issue #6 on August 23. A paperback edition containing all six issues was released on November 15, 2017.
The Call of Duty Real-Time Card Game was announced by card manufacturer Upper Deck.[50]
In 2004, Activision, in cooperation with the companies Plan-B Toys and Radioactive Clown, released the "Call of Duty: Series 1" line of action figures, which included three American soldiers and three German soldiers from the World War II era.[51] While the American G.I. action figure was made in 2004,[52] Plan-B Toys later discontinued a controversial Nazi SS Guard action figure based on the Nazi Totenkopf officer seen in Call of Duty.[53]
In 2008, McFarlane Toys announced their partnership with Activision to produce action figures for the Call of Duty series. McFarlane Toys' first series of action figures was released in October 2008 and consisted of four different figures: Marine with Flamethrower, Marine Infantry, British Special Ops, and Marine with Machine Gun.[54]
Find Makarov is a fan-made film that was well received by Call of Duty publisher Activision, which contacted We Can Pretend and subsequently produced a second short film, Operation Kingfish.[55]
Find Makarov: Operation Kingfish is a fan-made prequel to Call of Duty: Modern Warfare 2 and was first shown at Call of Duty XP. The video was produced by We Can Pretend, with visual effects by The Junction, and was endorsed by Activision. The video tells the story of how Captain Price ended up in a Russian Gulag set before the events of Modern Warfare 2.
On November 6, 2015, The Hollywood Reporter reported that Activision Blizzard had launched a production studio called Activision Blizzard Studios and was planning a live-action Call of Duty cinematic universe for 2018 or 2019.[56]
The Call of Duty games have been played in eSports since 2006, beginning with the game released at the time, Call of Duty 4: Modern Warfare.[57] Over the years, the competitive scene has extended to releases such as Call of Duty: World at War, Call of Duty: Modern Warfare 2, Call of Duty: Black Ops, Call of Duty: Modern Warfare 3, Call of Duty: Black Ops II, and Call of Duty: Ghosts. Games are played in leagues such as Major League Gaming.
Players can compete in ladders or tournaments. The ladders are divided into several sub-ladders, such as the singles ladder, doubles ladder, team ladder (3v3 to 6v6) and hardcore team ladder (3v3 to 6v6). The difference between the regular team ladder and the hardcore team ladder lies in the in-game settings and therefore the rules. Winning ladder matches on a competitive website rewards the user with experience points that add up to give them an overall rank.[58]
The tournaments offered on these websites give players the opportunity to win cash prizes and trophies. If a player wins a tournament, the trophy is registered and saved on their profile and the prize money is deposited into their bank account. Call of Duty: Ghosts was the most competitively played game in 2014, with an average of 15,000 teams participating every season.[59]
For the past six seasons in competitive Call of Duty, Full Sail University has hosted a prize giveaway, awarding $2,500 to the top team each season.[60] The other ladders give out credits and medals registered on players' profiles. Tournaments hosted in the Call of Duty: Ghosts Arena cost from 15 to 30 credits to enter, averaging about $18.75 per tournament. If the player competes with a team, the prize money is divided and an equal cut is given to each player. Other tournaments with substantial prizes are hosted in specific cities and countries for LAN teams.
The biggest Call of Duty tournament hosted was Call of Duty: Experience 2011, a tournament that began when Call of Duty: Modern Warfare 3 was released.[61]
Playing Call of Duty competitively is most popular in Europe and North America, with users who participate in tournaments and ladder matches daily.[62] Competitive players have an average of 500,000 followers on social media.[63]
The Call of Duty Endowment (CODE) is a nonprofit foundation created by Activision Blizzard to help find employment for U.S. military veterans. The first donation, consisting of $125,000, was presented to the Paralyzed Veterans of America.[64]
Co-chairman General James L. Jones is a former U.S. National Security Advisor.[65] Founder Robert Kotick is the CEO of Activision Blizzard. Upon its founding in 2009, the organization announced a commitment to create thousands of career opportunities for veterans, including those returning from the Middle East.[66] Annual awards given by the endowment include the Seal of Distinction, a $30,000 initial grant given to selected veterans service organizations.[67] In November 2014, the endowment launched the Race to 1,000 Jobs campaign to encourage gamers to donate money to and get involved in organizations that provide veterans with services.[68] As of 2015, the Call of Duty Endowment had provided around $12 million in grants to veterans organizations in the United States, which has helped find jobs for 14,700 veterans.[69]
On March 30, 2010, CODE presented 3,000 copies of Call of Duty: Modern Warfare 2, approximately $180,000 in value, to the U.S. Navy. The copies were delivered to over 300 ships and submarines as well as Navy Morale, Welfare, and Recreation facilities worldwide.[70]
When did the library of alexandria burn down?
48 BC🚨The Library of Alexandria was one of the largest and most significant libraries of the ancient world.[1] Famous for having been burned, resulting in the loss of many scrolls and books, it has become a symbol of "knowledge and culture destroyed". Although there is a traditional account of the burning of the Library at Alexandria, the library may have suffered several fires or acts of destruction, of varying degrees, over many years. Different cultures may have blamed each other throughout history, or distanced their ancestors from responsibility, leaving conflicting and inconclusive fragments from ancient sources on the exact details of the destruction. There is little consensus on when books in the actual library were destroyed. Manuscripts were probably burned in stages over eight centuries.[citation needed]
Ancient and modern sources identify four possible occasions for the partial or complete destruction of the Library of Alexandria: Julius Caesar's fire during his civil war in 48 BC; the attack of Aurelian in AD 270–275; the decree of Coptic Christian pope Theophilus of Alexandria in AD 391; and the Muslim conquest of Egypt in (or after) AD 642.[2]
The ancient accounts by Plutarch, Aulus Gellius, Ammianus Marcellinus, and Orosius indicate that troops of Caesar accidentally burned the library down during or after the Siege of Alexandria in 48 BC.[3]
Plutarch's Parallel Lives, written at the end of the 1st or beginning of the 2nd century AD, describes the Siege of Alexandria in which Caesar was forced to burn his own ships:
when the enemy endeavored to cut off his communication by sea, he was forced to divert that danger by setting fire to his own ships, which, after burning the docks, thence spread on and destroyed the great library.
However, the editor of Plutarch's translation notes that the "destruction of the Library can have been only partial", and that it occurred specifically in the Museum built by the first Ptolemy (283 BCE).[4]
In the 2nd century AD, the Roman historian Aulus Gellius wrote in his book Attic Nights that the Library was burned by mistake after the siege when some of Caesar's auxiliary soldiers started a fire. Aulus's translator similarly notes that, although auxiliary forces accidentally burned many books while stationed in Alexandria: "By no means all of the Alexandrian Library was destroyed [in 48 BC] and the losses were made good, at least in part, by Antony in 41 BC. A part of the library was burned under Aurelian, in AD 272, and the destruction seems to have been completed in 391."[5]
William Cherf argued that this scenario had all the ingredients of a firestorm which set fire to the docks and then the library, destroying it.[citation needed] This would have occurred in 48 BC, during the fighting between Caesar and Ptolemy XIII. Furthermore, in the 4th century, both the pagan historian Ammianus[6] and the Christian historian Orosius wrote that the Bibliotheca Alexandrina had been destroyed by Caesar's fire. The anonymous author of the Alexandrian Wars wrote that the fires set by Caesar's soldiers to burn the Egyptian navy in the port also burned a store full of papyri located near the port.[7] However, a geographical study of the location of the historical Bibliotheca Alexandrina in the neighborhood of Bruchion suggests that this store cannot have been the Great Library.[8] It is more probable that these historians confused the two Greek words bibliothecas, which means set of books, with bibliotheka, which means library.
Whether the burned books were only some stored books or books found inside the library itself, the Roman stoic philosopher Seneca (c. 4 BC – AD 65) refers to 40,000 books having been burnt at Alexandria.[9] During his rule over the eastern part of the Empire (40–30 BC), Mark Antony plundered the second largest library in the world (at Pergamon) and presented the collection as a gift to Cleopatra as a replacement for the books lost to Caesar's fire.
Theodore Vrettos describes the damage caused by the fire: "The Roman galleys carrying the Thirty-Seventh Legion from Asia Minor had now reached the Egyptian coast, but because of contrary winds, they were unable to proceed toward Alexandria. At anchor in the harbor off Lochias, the Egyptian fleet posed an additional problem for the Roman ships. However, in a surprise attack, Caesar's soldiers set fire to the Egyptian ships resulting in the flames spreading rapidly and consuming most of the dockyard, including many structures near the palace. This fire resulted in the burning of several thousand books that were housed in one of the buildings. From this incident, historians mistakenly assumed that the Great Library of Alexandria had been destroyed, but the Library was nowhere near the docks... The most immediate damage occurred in the area around the docks, in shipyards, arsenals, and warehouses in which grain and books were stored. Some 40,000 book scrolls were destroyed in the fire. Not at all connected with the Great Library, they were account books and ledgers containing records of Alexandria's export goods bound for Rome and other cities throughout the world."[10]
The Royal Alexandrian Library was not the only library located in the city. Down through Queen Cleopatra in the first century BCE, the Ptolemaic dynasty stimulated Alexandria's libraries to flourish.[11] There were at least two other libraries in Alexandria: the library of the Serapeum temple and the library of the Cesarion Temple. The continuity of literary and scientific life in Alexandria after the destruction of the Royal Library, as well as the city's status as the world's center for sciences and literature between the 1st and the 6th centuries AD, depended to a large extent on the presence of these two libraries. Historical records indicate that the Royal Library was private (used by the royal family as well as scientists and researchers), but the libraries of the Serapeum and Cesarion temples were public and accessible to the people.[12]
Furthermore, while the Royal Library was founded by Ptolemy II in the royal quarters of Bruchion near the palaces and the royal gardens, it was his son Ptolemy III who founded the Serapeum temple and its adjoined "Daughter" Library in the popular quarters of Rhakotis.
Florus and Lucan note that the flames Caesar set only burned the fleet and some "houses near the sea".
The next account is from Strabo's Geographica in 28 BC,[13] which does not mention the library specifically, but does mention that he could not find a city map which he had seen on an earlier trip to Alexandria, before the fire.[disputed – discuss] El-Abbadi uses this account to infer that the library was destroyed to its foundations.[citation needed]
The accuracy of this account is suspect. The adjacent Museum was, according to the same account, fully functional, even though the building next door had supposedly been completely destroyed. We also know that at this time the daughter library at the Serapeum was thriving and untouched by the fire. Strabo also confirms the existence of the "Museum", but when he mentions the Serapeum and the Museum, Strabo and other historians are inconsistent in their descriptions of the entire compound as opposed to the specific temple buildings, so we cannot infer that the library itself had been demolished. Strabo was one of the world's leading geographers,[citation needed] but it is possible that, since his last visit to the library, the map he was referencing (quite possibly a rare or esoteric map, considering his expertise and the vast collection of the library[disputed – discuss]) had been part of the portion of the library that was destroyed, or was simply a casualty of a library that lacked the funds to recopy and preserve its collection.[disputed – discuss][citation needed]
Therefore, the Royal Alexandrian Library may have been burned after Strabo's visit to the city (25 BC) but before the beginning of the 2nd century AD when Plutarch wrote. Otherwise Plutarch and later historians would not have mentioned the incident and mistakenly attributed it to Julius Caesar. It is also most probable that the library was destroyed by someone other than Caesar,[clarification needed] although the later generations linked the fire that took place in Alexandria during Caesar's time to the burning of the Bibliotheca.[14] Some researchers believe it most likely that the destruction accompanied the wars between Queen Zenobia and the Roman Emperor Aurelian, in the second half of the 3rd century (see below).[15]
The library seems to have been maintained and continued in existence until its contents were largely lost during the taking of the city by the Emperor Aurelian (270–275), who was suppressing a revolt by Queen Zenobia of Palmyra (ruled Egypt AD 269–274).[16] During the course of the fighting, the areas of the city in which the main library was located were damaged.[17] The smaller library located at the Serapeum survived, but some of its contents may have been taken to Constantinople during the 4th century to adorn the new capital. However, Ammianus Marcellinus, writing around AD 378, seems to speak of the library in the Serapeum temple as a thing of the past and states that many of the Serapeum library's volumes were burnt when Caesar sacked Alexandria. In Book 22.16.12–13, he says:
Besides this there are many lofty temples, and especially one to Serapis, which, although no words can adequately describe it, we may yet say, from its splendid halls supported by pillars, and its beautiful statues and other embellishments, is so superbly decorated, that next to the Capitol, of which the ever-venerable Rome boasts, the whole world has nothing worthier of admiration. In it were libraries of inestimable value; and the concurrent testimony of ancient records affirm that 70,000 volumes, which had been collected by the anxious care of the Ptolemies, were burnt in the Alexandrian war when the city was sacked in the time of Caesar the Dictator.
While Ammianus Marcellinus may be simply reiterating Plutarch's tradition about Caesar's destruction of the library, it is possible that his statement reflects his own empirical knowledge that the Serapeum library collection had either been seriously depleted or was no longer in existence.
Paganism was made illegal by an edict of the Emperor Theodosius I in 391. The temples of Alexandria were closed by Patriarch Pope Theophilus of Alexandria in AD 391.[18]
Socrates of Constantinople provides the following account of the destruction of the temples in Alexandria, in the fifth book of his Historia Ecclesiastica, written around 440:
At the solicitation of Theophilus, Bishop of Alexandria, the emperor issued an order at this time for the demolition of the heathen temples in that city; commanding also that it should be put in execution under the direction of Theophilus. Seizing this opportunity, Theophilus exerted himself to the utmost to expose the pagan mysteries to contempt. And to begin with, he caused the Mithraeum to be cleaned out, and exhibited to public view the tokens of its bloody mysteries. Then he destroyed the Serapeum, and the bloody rites of the Mithreum he publicly caricatured; the Serapeum also he showed full of extravagant superstitions, and he had the phalli of Priapus carried through the midst of the forum. [...] Thus this disturbance having been terminated, the governor of Alexandria, and the commander-in-chief of the troops in Egypt, assisted Theophilus in demolishing the heathen temples.
The Serapeum had housed part of the Great Library, but it is not known how many, if any, books were contained in it at the time of destruction. Notably, the passage by Socrates makes no clear reference to a library or its contents, only to religious objects. An earlier text by the historian Ammianus Marcellinus indicates that the library was destroyed in the time of Julius Caesar; whatever books might earlier have been housed at the Serapeum were no longer there in the last decade of the 4th century (Historia 22, 16, 12-13). The pagan scholar Eunapius of Sardis witnessed the demolition, and though he detested Christians, his account of the Serapeum's destruction makes no mention of any library. When Orosius discusses the destruction of the Great Library at the time of Caesar in the sixth book of his History against the Pagans, he writes:
So perished that marvelous monument of the literary activity of our ancestors, who had gathered together so many great works of brilliant geniuses. In regard to this, however true it may be that in some of the temples there remain up to the present time book chests, which we ourselves have seen, and that, as we are told, these were emptied by our own men in our own day when these temples were plundered (this statement is true enough), yet it seems fairer to suppose that other collections had later been formed to rival the ancient love of literature, and not that there had once been another library which had books separate from the four hundred thousand volumes mentioned, and for that reason had escaped destruction.
Thus Orosius laments the pillaging of libraries within temples in 'his own time' by 'his own men' and compares it to the destruction of the Great Library destroyed at the time of Julius Caesar. He is certainly referring to the plundering of pagan temples during his lifetime, but this presumably did not include the library of Alexandria, which he assumes was destroyed in Caesar's time. While he admits that the accusations of plundering books are true enough, he then suggests that the books in question were not copies of those that had been housed at the Great Library, but rather new books "to rival the ancient love of literature." Orosius does not say where temples' books were taken, whether to Constantinople or to local monastic libraries or elsewhere, and he does not say that the books were destroyed.
As for the Museum, Mostafa El-Abbadi writes in Life and Fate of the Ancient Library of Alexandria (1990):
The Mouseion, being at the same time a 'shrine of the Muses', enjoyed a degree of sanctity as long as other pagan temples remained unmolested. Synesius of Cyrene, who studied under Hypatia at the end of the fourth century, saw the Mouseion and described the images of the philosophers in it. We have no later reference to its existence in the fifth century. As Theon, the distinguished mathematician and father of Hypatia, herself a renowned scholar, was the last recorded scholar-member (c. 380), it is likely that the Mouseion did not long survive the promulgation of Theodosius' decree in 391 to destroy all pagan temples in the city.
John Julius Norwich, in his work Byzantium: The Early Centuries, places the destruction of the library's collection during the anti-Arian riots in Alexandria that transpired after the imperial decree of 391 (p. 314). Edward Gibbon claimed that the Library of Alexandria was pillaged or destroyed when Pope Theophilus of Alexandria demolished the Serapeum in 391.[18]
In AD 642, Alexandria was captured by the Muslim army of 'Amr ibn al-'As. There are four Arabic sources, all at least 500 years after the supposed events, which mention the fate of the library.
The story was still in circulation among Copts in Egypt in the 1920s.[24]
Edward Gibbon tells us that many people had credulously believed the story, but that the "rational scepticism" of Fr. Eusèbe Renaudot (1713) did not accept it.[25]
Jean de Sismondi,[26] Alfred J. Butler, Victor Chauvin, Paul Casanova, Gustave Le Bon[27] and Eugenio Griffini did not accept the story either.[16]
Bernard Lewis has observed that this version was reinforced in medieval times by Saladin, who decided to break up the Fatimid Caliphate's collection of heretical Isma'ilism texts in Cairo following his restoration of Sunni Islam to Egypt, and will have judged that the story of the Caliph Umar's support of a library's destruction would make his own actions seem more acceptable.[28] Roy MacLeod[30]
Luciano Canfora included the account of Bar Hebraeus in his discussion of the destruction of the library without dismissing it.[31]
When does an employee qualify for workers compensation?
employees injured in the course of employment🚨Workers' compensation is a form of insurance providing wage replacement and medical benefits to employees injured in the course of employment in exchange for mandatory relinquishment of the employee's right to sue their employer for the tort of negligence. The trade-off between assured, limited coverage and lack of recourse outside the worker compensation system is known as "the compensation bargain". One of the problems that the compensation bargain solved is the problem of employers becoming insolvent as a result of high damage awards. The system of collective liability was created to prevent that, and thus to ensure security of compensation to the workers. Individual immunity is the necessary corollary to collective liability.
While plans differ among jurisdictions, provision can be made for weekly payments in place of wages (functioning in this case as a form of disability insurance), compensation for economic loss (past and future), reimbursement or payment of medical and like expenses (functioning in this case as a form of health insurance), and benefits payable to the dependents of workers killed during employment (functioning in this case as a form of life insurance).
General damage for pain and suffering, and punitive damages for employer negligence, are generally not available in workers' compensation plans, and negligence is generally not an issue in the case. These laws were first enacted in Europe and Oceania, with the United States following shortly thereafter.
Common law imposes obligations on employers to provide a safe workplace, provide safe tools, give warnings of dangers, provide adequate co-worker assistance (fit, trained, suitable "fellow servants") so that the worker is not overburdened, and promulgate and enforce safe work rules.[1]
Claims under the common law for worker injury are limited by three defenses afforded employers:
Workers' compensation statutes are intended to eliminate the need for litigation and the limitations of common law remedies by having employees give up the potential for pain- and suffering-related awards, in exchange for not being required to prove tort (legal fault) on the part of their employer. Designed to ensure employees who are injured or disabled on the job are not required to cover medical bills related to their on-the-job injury, the laws provide employees with monetary awards to cover loss of wages directly related to the accident as well as to compensate for permanent physical impairments.
The laws also provide benefits for dependents of those workers who are killed in work-related accidents or illnesses. Some laws also protect employers and fellow workers by limiting the amount an injured employee can recover from an employer and by eliminating the liability of co-workers in most accidents. US state statutes establish this framework for most employment. US federal statutes are limited to federal employees or to workers employed in some significant aspect of interstate commerce.[2]
As Australia experienced a relatively influential labour movement in the late 19th and early 20th century, statutory compensation was implemented very early in Australia. Each territory has its own legislation and its own governing body.
A typical example is Work Safe Victoria, which manages Victoria's workplace safety system. Its responsibilities include helping employees avoid workplace injuries occurring, enforcement of Victoria's occupational and safety laws, provision of reasonably priced workplace injury insurance for employers, assisting injured workers back into the workforce, and managing the workers' compensation scheme by ensuring the prompt delivery of appropriate services and adopting prudent financial practices.[3]
Compensation law in New South Wales was overhauled by the state government in 2013. In a push to speed up the processing of claims and to reduce their number, a threshold of 11% WPI (whole person impairment) was implemented.
Workers' compensation regulators for each of the states and territories are as follows:[4]
Every employer must comply with the state, territory or commonwealth legislation, as listed below, which applies to them:
The National Social Insurance Institute (in Portuguese, Instituto Nacional do Seguro Social ÿ INSS) provides insurance for those who contribute. It is a public institution that aims to recognize and grant rights to its policyholders. The amount transferred by the INSS is used to replace the income of the worker taxpayer, when he or she loses the ability to work, due to sickness, disability, age, death, involuntary unemployment, or even pregnancy and imprisonment. During the first 15 days the worker's salary is paid by the employer and after that by the INSS, as long as the inability to work lasts. Although the worker's income is guaranteed by the INSS, the employer is still responsible for any loss of working capacity, temporary or permanent, when found negligent or when its economic activity involves risk of accidents or developing labour related diseases.
Workers' compensation was Canada's first social program to be introduced as it was favored by both workers' groups and employers hoping to avoid lawsuits. The system arose after an inquiry by Ontario Chief Justice William Meredith who outlined a system in which workers were to be compensated for workplace injuries, but must give up their right to sue their employers. It was introduced in the various provinces at different dates. Ontario was first in 1915, Manitoba in 1916, and British Columbia in 1917. It remains a provincial responsibility and thus the rules vary from province to province. In some provinces, such as Ontario's Workplace Safety and Insurance Board, the program also has a preventative role ensuring workplace safety. In British Columbia, the occupational health and safety mandate (including the powers to make regulation, inspect and assess administrative penalties) is legislatively assigned to the Workers' Compensation Board of British Columbia WorkSafeBC. In most provinces the workers' compensation board or commission remains concerned solely with insurance. The workers' compensation insurance system in every province is funded by employers based on their payroll, industry sector and history of injuries (or lack thereof) in their workplace (usually referred to as "experience rating").
The German worker's compensation law of 6 July 1884,[15] initiated by Chancellor Otto von Bismarck,[16][17] was passed only after three attempts and was the first of its kind in the world.[18] Similar laws passed in Austria in 1887, Norway in 1894, and Finland in 1895.[19]
The law paid indemnity to all private wage earners and apprentices, including those who work in the agricultural and horticultural sectors and marine industries, family helpers and students with work-related injuries, for up to 13 weeks. Workers who are totally disabled get continued benefits at 67 percent after 13 weeks, paid by the accident funds, financed entirely by employers.
The German compensation system has been taken as a model for many nations.
Main article: The Workmen's Compensation Act 1923
The Indian workmen's compensation law, the Workmen's Compensation Act 1923, was introduced on 5 March 1923. It covers the employer's liability for compensation and the amount of compensation payable.
Workers' accident compensation insurance is paired with unemployment insurance and referred to collectively as labour insurance.[20][21] Workers' accident compensation insurance is managed by the Labor Standards Office.[22]
The Workmen's Compensation Act 1952[23] is modelled on the United Kingdom's Workmen's Compensation Act 1906. Adopted before Malaysia's independence from the UK, it is now used only by non-Malaysian workers, since citizens are covered by the national social security scheme.
The Mexican Constitution of 1917 defined the obligation of employers to pay for illnesses or accidents related to the workplace. It also defined social security as the institution to administer the right of workers, but it was not until 1943 that the Mexican Social Security Institute (IMSS) was created. Since then, IMSS has managed the Work Risks Insurance in a vertically integrated fashion: registration of workers and firms, collection, classification of risks and events, and medical and rehabilitation services. A reform in 1997 established that contributions are related to the experience of each employer. Public sector workers are covered by social security agencies with corporate and operative structures similar to those of IMSS.
In New Zealand, all companies that employ staff and in some cases others, must pay a levy to the Accident Compensation Corporation, a Crown Entity, which administers New Zealand's universal no-fault accidental injury scheme. The scheme provides financial compensation and support to citizens, residents, and temporary visitors who have suffered personal injuries.
The United Kingdom was one of the first countries to have a workmen's insurance scheme, which operated from 1897 to 1946. This was created by the Workmen's Compensation Act 1897, expanded to include industrial diseases by the Workmen's Compensation Act 1906 and replaced by a state compensation scheme under the National Insurance (Industrial Injuries) Act 1946. Since 1976, this state scheme has been set out in the UK's Social Security Acts.
Work related safety issues in the UK are supervised by the Health & Safety Executive (HSE) who provide the framework by which employers and employees are able to comply with statutory rules and regulations.[24]
With the exception of the following, all employers are obliged to purchase compulsory Employers' Liability Insurance in accordance with the Employers' Liability (Compulsory Insurance) Act 1969. The current minimum limit of indemnity required is £5,000,000 per occurrence. Market practice is usually to provide a minimum of £10,000,000, with inner limits of £5,000,000 for certain risks, e.g. workers on oil rigs and acts of terrorism.
These employers do not require Employers Liability Compulsory Insurance:
"Employees" are defined as anyone who has entered into or works under a contract of service or apprenticeship with an employer. The contract may be for manual labour, clerical work or otherwise, it may be written or verbal and it may be for full-time or part-time work.
These persons are not classed as employees and, therefore, are exempt:
Employees need to establish that their employer has a legal liability to pay compensation. This will principally be a breach of a statutory duty or liability under the tort of negligence. In the event that the employer is insolvent or no longer in existence, compensation can be sought directly from the insurer under the terms of the Third Parties (Rights against Insurers) Act 1930.
History: see Workmen's Compensation Act 1897 and following.
In 1855, Georgia and Alabama passed Employer Liability Acts; 26 other states passed similar acts between 1855 and 1907. Early laws permitted injured employees to sue the employer and then prove a negligent act or omission.[26][27] (A similar scheme was set forth in Britain's 1880 Act.[28])
The first statewide worker's compensation law was passed in Maryland in 1902, and the first law covering federal employees was passed in 1906.[29] (See: FELA, 1908; FECA, 1916; Kern, 1918.) By 1949, every state had enacted a workers' compensation program.[30]
At the turn of the 20th century workers' compensation laws were voluntary.[citation needed] An elective law made passage easier and some argued that compulsory workers' compensation laws would violate the 14th amendment due process clause of the U.S. Constitution.[citation needed] Since workers' compensation mandated benefits without regard to fault or negligence, many felt that compulsory participation would deprive the employer of property without due process.[citation needed] The issue of due process was resolved by the United States Supreme Court in 1917 in New York Central Railway Co. v. White which held that an employer's due process rights were not impeded by mandatory workers' compensation.[31] After the ruling many states enacted new compulsory workers' compensation laws.[citation needed]
In the United States, most employees who are injured on the job receive medical care responsive to the workplace injury and, in some cases, payment to compensate for resulting disabilities.[citation needed] Generally, an injury that occurs while an employee is on his or her way to or from work does not qualify for workers' compensation benefits; however, there are some exceptions, such as when the employee's responsibilities require being in multiple locations or remaining in the course of employment after work hours.[32]
Insurance policies are available to employers through commercial insurance companies: if the employer is deemed an excessive risk to insure at market rates, it can obtain coverage through an assigned-risk program.[citation needed] In many states, there are public uninsured employer funds to pay benefits to workers employed by companies who illegally fail to purchase insurance.[citation needed]
Various organizations focus resources on providing education and guidance to workers' compensation administrators and adjudicators in various state and national workers' compensation systems. These include the American Bar Association (ABA), the International Association of Industrial Accident Boards and Commissions (IAIABC),[33] the National Association of Workers' Compensation Judiciary (NAWCJ),[34] and the Workers Compensation Research Institute.[35]
In the United States, according to the Bureau of Labor Statistics' 2010 National Compensation Survey, workers' compensation costs represented 1.6% of employer spending overall, although rates varied significantly across industry sectors. For instance, workers' compensation accounted for 4.4% of employer spending in the construction industry, 1.8% in manufacturing and 1.3% in services.[36]
Among patients undergoing upper-extremity surgeries, clinical outcomes for those with workers' compensation claims tend to be worse than for non-workers' compensation patients; studies have found that they take longer to return to their jobs and return to work at lower rates. Factors that might explain this outcome include the strenuous upper-extremity physical demands placed on this patient population and a possible financial gain from reporting significant post-operative disability.[37]
As each state within the United States has its own workers' compensation laws, the circumstances under which workers' compensation is available to workers, the amount of benefits that a worker may receive, and the duration of the benefits paid to an injured worker vary by state.[38] The workers' compensation system is administered on a state-by-state basis, with a state governing board overseeing varying public/private combinations of workers' compensation systems.[citation needed] The names of such governing boards, or "quasi-judicial agencies," vary from state to state, many being designated as "workers' compensation commissions". In North Carolina, the state entity responsible for administering the workers' compensation system is referred to as the North Carolina Industrial Commission.[39]
In a majority of states, workers' compensation is provided solely by private insurance companies.[40] Twelve states operate state funds (which serve as models for private insurers and insure state employees), and a handful of states have state-owned monopoly insurance providers.[41] To keep state funds from crowding out private insurers, the state funds may be required to act as assigned-risk programs or insurers of last resort for businesses that cannot obtain coverage from a private insurer.[42] In contrast, private insurers can turn away the worst risks and may also write comprehensive insurance packages covering general liability, natural disasters, and other forms of coverage.[43] Of the twelve state funds, the largest is California's State Compensation Insurance Fund.
Underreporting of injuries is a significant problem in the workers' compensation system.[44] Workers, fearing retaliation from their employers, may avoid reporting injuries incurred on the job and instead seek treatment privately, bearing the cost themselves or passing these costs on to their health insurance provider, an element in the increasing cost of health insurance nationwide.[45]
In all states except Georgia and Mississippi, it is illegal for an employer to terminate or refuse to hire an employee for having reported a workplace injury or filed a workers' compensation claim.[46] However, it is often not easy to prove discrimination on the basis of the employee's claims history.[citation needed] To abate discrimination of this type, some states have created a "subsequent injury trust fund" which will reimburse insurers for benefits paid to workers who suffer aggravation or recurrence of a compensable injury.[citation needed] It is also suggested that laws should be made to prohibit inclusion of claims history in databases or to make it anonymous.[citation needed] (See privacy laws.)
Although workers' compensation statutes generally make the employer completely immune from any liability (such as for negligence) above the amount provided by the workers' compensation statutory framework, there are exceptions. In some states, like New Jersey, an employer can still be held liable for larger amounts if the employee proves the employer intentionally caused the harm,[47] while in other states, like Pennsylvania,[48] the employer is immune in all circumstances, but other entities involved in causing the injury, like subcontractors or product manufacturers, may still be held liable.
Some employers vigorously contest employee claims for workers' compensation payments.[citation needed] Injured workers may be able to get help with their claims from state agencies or by retaining a workers' compensation lawyer.[49] Laws in many states limit a claimant's legal expenses to a certain fraction of an award; such "contingency fees" are payable only if the recovery is successful. In some states this fee can be as high as 40% or as little as 11% of the monetary award recovered, if any.[38]
In the vast majority of states, original jurisdiction over workers' compensation disputes has been transferred by statute from the trial courts to special administrative agencies.[50] Within such agencies, disputes are usually handled informally by administrative law judges. Appeals may be taken to an appeals board and from there into the state court system. However, such appeals are difficult and are regarded skeptically by most state appellate courts, because the point of workers' compensation was to reduce litigation. A few states still allow the employee to initiate a lawsuit in a trial court against the employer. For example, Ohio allows appeals to go before a jury.[51]
The California Constitution, Article XIV, section 4, sets forth the intent of the people to establish a system of workers' compensation.[52] This section gives the Legislature the power to create and enforce a complete system of workers' compensation and, to that end, to create and enforce a liability on the part of any or all employers to compensate any or all of their employees for injury or disability, and their dependents for death, incurred or sustained by those employees in the course of their employment, irrespective of the fault of any employee. Further, the Constitution provides that the system must accomplish substantial justice in all cases expeditiously, inexpensively, and without encumbrance of any character. When the people of California voted to amend the state constitution in 1918, their intent was to require the Legislature to establish a simple system that guaranteed full provision for adequate insurance coverage against liability to pay or furnish compensation; full provision for regulating such insurance coverage in all its aspects, including the establishment and management of a State compensation insurance fund; full provision for otherwise securing the payment of compensation; and full provision for vesting power, authority and jurisdiction in an administrative body with all the requisite governmental functions to determine any dispute or matter arising under such legislation, so that the administration of such legislation accomplishes substantial justice in all cases expeditiously, inexpensively, and without encumbrance of any character; all of which matters are expressly declared to be the social public policy of the State, binding upon all departments of the State government.[53]
Texas allows employers to opt out of the workers' compensation system; employers who do not purchase workers' compensation insurance are called nonsubscribers.[54] These employers are exposed to legal liability in the event of employee injury: the employee must demonstrate that employer negligence caused the injury, but a non-subscribing employer loses the common law defenses of contributory negligence, assumption of the risk, and the fellow-employee doctrine.[55] If successful, the employee can recover full common law damages, which are more generous than workers' compensation benefits.
In 1995, 44% of Texas employers were nonsubscribers, while in 2001 the percentage was estimated to be 35%.[54] The industry advocacy group Texas Association of Business Nonsubscription claims that nonsubscribing employers have had greater satisfaction ratings and reduced expenses when compared to employers enrolled in the workers' compensation system.[56] A research survey by Texas's Research and Oversight Council on Workers' Compensation found that 68% of non-subscribing employers and 60% of subscribing employers (a majority in both cases) were satisfied with their experiences in the system, and that satisfaction with nonsubscription increased with the size of the firm; but it stated that further research was needed to gauge satisfaction among employees and to determine the adequacy of compensation under nonsubscription compared to subscription.[54] In recent years, the Texas Supreme Court has been limiting employer duties to maintain employee safety, limiting the remedies received by injured workers.
In recent years, workers' compensation programs in West Virginia and Nevada were successfully privatised, through mutualisation, in part to resolve situations in which the programs in those states had significantly underfunded their liabilities.[citation needed] Only four states rely on entirely state-run programs for workers' compensation: North Dakota, Ohio, Washington, and Wyoming.[citation needed] Many other states maintain state-run funds but also allow private insurance companies to insure employers and their employees, as well.[citation needed]
The federal government has its own workers' compensation program, subject to its own requirements and statutory parameters for federal employees.[citation needed] The federal government pays its workers' compensation obligations for its own employees through regular appropriations.[citation needed]
Employees of common carriers by rail have a statutory remedy under the Federal Employers' Liability Act, 45 U.S.C. sec. 51, which provides that a carrier "shall be liable" to an employee who is injured by the negligence of the employer. To enforce his compensation rights, the employee may file suit in United States district court or in a state court. The FELA remedy is based on tort principles of ordinary negligence and differs significantly from most state workers' compensation benefit schedules.
Seafarers employed on United States vessels who are injured because of the owner's or the operator's negligence can sue their employers under the Jones Act, 46 U.S.C. App. 688., essentially a remedy very similar to the FELA one.
Dock workers and other maritime workers, who are not seafarers working aboard navigating vessels, are covered by the Federal Longshore and Harbor Workers' Compensation Act, known as US L&H.
Workers' compensation fraud can be committed by doctors, lawyers, employers, insurance company employees and claimants, and may occur in both the private and public sectors.[57]
The topic of workers' compensation fraud is highly controversial, with claimant supporters arguing that fraud by claimants is rare (as low as one-third of one percent),[58] others focusing on the widely reported National Insurance Crime Bureau statistic that workers' compensation fraud accounts for $7.2 billion in unnecessary costs,[59] and government entities acknowledging that "there is no generally accepted method or standard for measuring the extent of workers' compensation fraud ... as a consequence, there are widely divergent opinions about the size of the problem and the relative importance of the issue."[60]
According to the Coalition Against Insurance Fraud, tens of billions of dollars in false claims and unpaid premiums are stolen in the U.S. alone every year.[61]
The most common forms of workers' compensation fraud by workers are:
The most common forms of workers' compensation fraud by employers are:
What type of motor used in vacuum cleaner?
universal motors🚨The universal motor is so named because it is a type of electric motor that can operate on AC or DC power. It is a commutated series-wound motor where the stator's field coils are connected in series with the rotor windings through a commutator. It is often referred to as an AC series motor. The universal motor is very similar to a DC series motor in construction, but is modified slightly to allow the motor to operate properly on AC power. This type of electric motor can operate well on AC because the current in both the field coils and the armature (and the resultant magnetic fields) will alternate (reverse polarity) synchronously with the supply. Hence the resulting mechanical force will occur in a consistent direction of rotation, independent of the direction of applied voltage, but determined by the commutator and polarity of the field coils.[1]
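A minimal sketch of why the torque direction is preserved on AC (the proportionalities below are the standard series-machine idealization, not taken from this article): the field flux is produced by the same current that flows through the armature, so for a sinusoidal supply current i(t) = I_0 sin(ωt),

\[ T(t) \;\propto\; \Phi(t)\, i(t) \;\propto\; i(t)^{2} \;=\; I_0^{2} \sin^{2}(\omega t) \;\ge\; 0 . \]

The instantaneous torque therefore pulsates at twice the supply frequency but never reverses, which is what lets the commutated series machine run on either polarity or on AC.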
Universal motors have high starting torque, can run at high speed, and are lightweight and compact. They are commonly used in portable power tools and equipment, as well as many household appliances. They are also relatively easy to control, electromechanically using tapped coils, or electronically. However, the commutator has brushes that wear, so they are much less often used for equipment that is in continuous use. In addition, partly because of the commutator, universal motors are typically very noisy, both acoustically and electromagnetically.[2]
Not all series wound motors operate well on AC current.[3][note 1] If an ordinary series wound DC motor were connected to an AC supply, it would run very poorly. The universal motor is modified in several ways to allow for proper AC supply operation. There is a compensating winding typically added, along with laminated pole pieces, as opposed to the solid pole pieces found in DC motors.[1] A universal motor's armature typically has far more coils and plates than a DC motor, and hence fewer windings per coil. This reduces the inductance.[4]
Even when used with AC power these types of motors are able to run at a rotation frequency well above that of the mains supply, and because most electric motor properties improve with speed, this means they can be lightweight and powerful.[4] However, universal motors are usually relatively inefficient: around 30% for smaller motors and up to 70-75% for larger ones.[4]
Series wound electric motors respond to increased load by slowing down; the current increases and the torque rises in proportion to the square of the current since the same current flows in both the armature and the field windings. If the motor is stalled, the current is limited only by the total resistance of the windings and the torque can be very high, and there is a danger of the windings becoming overheated. The counter-EMF aids the armature resistance to limit the current through the armature. When power is first applied to a motor, the armature does not rotate. At that instant, the counter-EMF is zero and the only factor limiting the armature current is the armature resistance. Usually the armature resistance of a motor is low; therefore the current through the armature would be very large when the power is applied. Therefore the need can arise for an additional resistance in series with the armature to limit the current until the motor rotation can build up the counter-EMF. As the motor rotation builds up, the resistance is gradually cut out. The speed-torque characteristic is an almost perfectly straight line between the stall torque and the no-load speed. This suits large inertial loads as the speed will drop until the motor slowly starts to rotate and these motors have a very high stalling torque.[5]
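As a hedged worked example of the relationships described above (the figures are hypothetical, chosen only for illustration): with supply voltage V, counter-EMF E_b and total series resistance R, the armature current and torque behave roughly as

\[ I \;=\; \frac{V - E_b}{R}, \qquad T \;\propto\; I^{2} . \]

At standstill E_b = 0, so a motor fed with V = 230 V through R = 2 Ω would draw on the order of 115 A if nothing else limited it; inserting a starting resistance, then cutting it out as E_b builds up with speed, keeps the inrush within safe bounds.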
As the speed increases, the inductance of the rotor means that the ideal commutating point changes. Small motors typically have fixed commutation. While some larger universal motors have rotatable commutation, this is rare. Instead larger universal motors often have compensation windings in series with the motor, or sometimes inductively coupled, and placed at ninety electrical degrees to the main field axis. These reduce the reactance of the armature, and improve the commutation.[4]
One useful property of having the field windings in series with the armature winding is that as the speed increases the counter-EMF naturally reduces the voltage across, and current through, the field windings, giving field weakening at high speeds. This means that the motor has no theoretical maximum speed for any particular applied voltage. Universal motors can be and are generally run at high speeds, 4,000–16,000 RPM, and can go over 20,000 RPM.[4] By way of contrast, AC synchronous and squirrel-cage induction motors cannot turn a shaft faster than allowed by the power line frequency. In countries with a 60 Hz AC supply, this speed is limited to 3,600 RPM.[6]
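For comparison, the line-frequency ceiling quoted above follows from the textbook synchronous-speed relation (standard machine theory, not specific to this article):

\[ N_s \;=\; \frac{120\, f}{p} \]

so a two-pole machine on a 60 Hz supply cannot exceed 120 × 60 / 2 = 3600 RPM, whereas the universal motor, whose speed is set by load and field weakening rather than by the supply frequency, has no comparable limit.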
Motor damage may occur from over-speeding (running at a rotational speed in excess of design limits) if the unit is operated with no significant mechanical load. On larger motors, sudden loss of load is to be avoided, and the possibility of such an occurrence is incorporated into the motor's protection and control schemes. In some smaller applications, a fan blade attached to the shaft often acts as an artificial load to limit the motor speed to a safe level, as well as a means to circulate cooling airflow over the armature and field windings. If there were no mechanical limits placed on a universal motor it could theoretically speed out of control in the same way any series-wound DC motor can.[2]
An advantage of the universal motor is that AC supplies may be used on motors which have some characteristics more common in DC motors, specifically high starting torque and very compact design if high running speeds are used.[2]
A negative aspect is the maintenance and short life problems caused by the commutator, as well as electromagnetic interference (EMI) issues due to any sparking. Because of the relatively high maintenance commutator brushes, universal motors are best-suited for devices such as food mixers and power tools which are used only intermittently, and often have high starting-torque demands.
Another negative aspect is that these motors may only be used where mostly clean air is present at all times. Because of the dramatically increased risk of overheating, totally enclosed, fan-cooled universal motors would be impractical, though some have been made. Such a motor would need a large fan to circulate enough air, decreasing efficiency since the motor must use more energy to cool itself. The impracticality comes from the resulting size, weight, and thermal-management issues, none of which affect open motors.
Continuous speed control of a universal motor running on AC is easily obtained by use of a thyristor circuit, while multiple taps on the field coil provide (imprecise) stepped speed control. Household blenders that advertise many speeds frequently combine a field coil with several taps and a diode that can be inserted in series with the motor (causing the motor to run on half-wave rectified AC).
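A rough sketch of why the series diode gives a lower speed step (idealized, treating the supply as a pure sine wave and ignoring the motor's inductance and commutation effects): half-wave rectification discards every other half-cycle, so the RMS voltage seen by the motor falls to about 71% of its previous value, and the average power delivered to a simple resistive load would be roughly halved:

\[ V_{\text{rms,full}} \;=\; \frac{V_0}{\sqrt{2}}, \qquad V_{\text{rms,half}} \;=\; \frac{V_0}{2}, \qquad \frac{P_{\text{half}}}{P_{\text{full}}} \;=\; \left(\frac{V_{\text{rms,half}}}{V_{\text{rms,full}}}\right)^{2} \;=\; \frac{1}{2}. \]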
Universal motors are series wound. Shunt winding was used experimentally in the late 19th century,[7] but was impractical owing to problems with commutation. Various schemes of embedded resistance, inductance and antiphase cross-coupling were attempted to reduce this. Universal motors, including shunt-wound ones, were favoured as AC motors at this time because they were self-starting.[3] When self-starting induction motors and automatic starters became available, these replaced the larger universal motors (above 1 hp) and the shunt-wound type.
In the past, repulsion-start wound-rotor motors provided high starting torque, but with added complexity. Their rotors were similar to those of universal motors, but their brushes were connected only to each other. Transformer action induced current into the rotor. Brush position relative to field poles meant that starting torque was developed by rotor repulsion from the field poles. A centrifugal mechanism, when close to running speed, connected all commutator bars together to create the equivalent of a squirrel-cage rotor. As well, when close to operating speed, better motors lifted the brushes out of contact.
Operating at normal power line frequencies, universal motors are often found in a range less than 1000 watts. Their high speed makes them useful for appliances such as blenders, vacuum cleaners, and hair dryers where high speed and light weight are desirable. They are also commonly used in portable power tools, such as drills, sanders, circular and jig saws, where the motor's characteristics work well. Many vacuum cleaner and weed trimmer motors exceed 10,000 RPM, while many Dremel and similar miniature grinders exceed 30,000 RPM.
Universal motors also lend themselves to electronic speed control and, as such, were an ideal choice for domestic washing machines. The motor can be used to agitate the drum (both forwards and in reverse) by switching the field winding with respect to the armature. The motor can also be run up to the high speeds required for the spin cycle. Nowadays, variable-frequency drive motors are more commonly used instead.
Universal motors also formed the basis of the traditional railway traction motor in electric railways. In this application, the use of AC to power a motor originally designed to run on DC would lead to efficiency losses due to eddy current heating of the magnetic components, particularly the motor field pole-pieces that, for DC, would have used solid (un-laminated) iron. Although the heating effects are reduced by using laminated pole-pieces, as used for the cores of transformers, and by the use of laminations of high-permeability electrical steel, one solution available at the start of the 20th century was to operate the motors from very low frequency AC supplies, with 25 Hz and 16⅔ Hz operation being common.
Starters of combustion engines are usually universal motors, with the advantage of being small and having high torque at low speed. Some starters have permanent magnets, while others have one of the four poles wound with a shunt coil rather than series-wound coils.
Who was the painter of this famous painting bharatmata?
the Indian painter Abanindranath Tagore🚨
Bharat Mata is a work painted by the Indian painter Abanindranath Tagore in 1905. The work depicts Bharat Mata, or Mother India, in the style of a Hindu Goddess. The painting was the first illustrated depiction of the concept, and was painted with Swadeshi ideals in mind during the larger Indian independence movement.
Abanindranath Tagore was born on August 7, 1871[1] to Gunendranath Tagore. A nephew of the Indian poet and artist Rabindranath Tagore, Abanindranath was exposed at an early age to the artistic inclinations of the Tagore family.[2]
Tagore was first exposed to formal art training when he studied at the Sanskrit College in Kolkata in the 1880s.[3] In his early years, Tagore painted in the European naturalistic style, evident from early paintings such as The Armoury.[4] In about 1886 or 1887, Tagore's relative Gyanadanandini Devi set up a meeting between Tagore and E. B. Havell, the curator of the Government School of Art in Calcutta. The meeting resulted in a series of exchanges between the two, with Havell gaining a native art collaborator whose ideas ran in the same direction as his own, and Tagore gaining a teacher who would instruct him in the 'science' of Indian art history.[5]
Havell attempted to induct Tagore as the Vice Principal of the art school, a move that faced heavy opposition within the school. Havell had to bend many of the school's rules to do this, and tolerated many of Tagore's habits, including smoking a hookah in the classrooms and refusing to keep to time schedules.[6] Havell introduced innovations to his teaching program in an attempt to more accurately reproduce Indian art pedagogy, and replaced European copies of art with Indian originals. The English art curator also reportedly spent many hours behind closed doors explaining the details of "Hindu art and sculpture" to Tagore. One of these paintings, of a stork by a Mughal-era artist, was shown to Tagore by Havell, prompting the former to remark that he had been unaware until then of the "embarrassment of riches" that "our art" contained.[7]
Bharat Mata depicts a saffron clad woman, holding a book, sheaves of paddy, a piece of white cloth and a garland in her four hands. The painting holds historical significance as it is one of the earliest visualizations of Bharat Mata, or "Mother India."[8]
The work was painted during the time of the swadeshi movement. The movement began as a response to the Partition of Bengal (1905), when Lord Curzon split the largely Muslim eastern areas of Bengal from the largely Hindu western areas. Indian nationalists viewed this as a way for the British to increase the power of the Muslim peasantry of eastern Bengal at the expense of Bengali Hindus. In response, Indian nationalists participating in the swadeshi movement resisted the British by boycotting British goods and public institutions, instituting meetings and processions, forming committees, and applying diplomatic pressure.[9]
The painting's central figure holds multiple items associated with Indian culture and the economy of India in the early twentieth century, such as a book, sheaves of paddy, a piece of white cloth and a garland. Moreover, the painting's central figure has four hands, evocative of Hindu imagery, which equates multiple hands with immense power.[10]
The painting has been characterized as "an attempt of humanisation of Bharat Mata where the mother is seeking liberation through her sons," by Jayanta Sengupta, curator of the Indian Museum in Kolkata, India.[11]
Since 1905, many iterations of the Bharat Mata have been made in paintings and other forms of art. However, the significance of Tagore's original painting is still recognized. In 2016, Bharat Mata was put on display at the Victoria Memorial Hall in Kolkata, India.[12]
Sister Nivedita, the inspiration behind the Bengal School of Art, praised the painting by saying:
From beginning to end, the picture is an appeal, in the Indian language, to the Indian heart. It is the first great masterpiece in a new style. I would reprint it, if I could, by tens of thousands, and scatter it broadcast over the land, till there was not a peasant's cottage, or a craftsman's hut, between Kedar Nath and Cape Comorin, that had not this presentment of Bharat-Mata somewhere on its walls. Over and over again, as one looks into its qualities, one is struck by the purity and delicacy of the personality portrayed.[13]
Who founded the pushti marg sect of the hindu religion?
Vallabhacharya🚨Pushti Marg[1] ("the Path of Grace") is a Vaishnav sect of Hinduism, founded by Vallabhacharya (also known as Mahaprabhuji) around 1500 AD.[2]
Vallabhacharya is one of the six main Acharyas of the Bhakti tradition of Hinduism (the other five being Shankaracharya, Ramanujacharya, Madhavacharya, Shri Nimbarkacharya and Shri Chaitanya Mahaprabhu). He propagated the philosophy of Shuddhadvaita,[3] which forms the basis of Pushtimarg devotional practice. These acharyas made significant contributions to the bhakti movement and led to the medieval rise in popularity of the Hindu religion. The devotional movement is based on the idea that love of God should be seen as an end in itself, not as a means to something else.[2][4]
Vallabhacharya was born into a Telugu Brahmin family in South India, in what is now Andhra Pradesh. His ancestors had a religious background and included scholars such as Yagnanarayan Bhatt and Ganapati Bhatt, who wrote several books on religion and devotion. Vallabhacharya was the second son of Lakshman Bhatt and Yallammagaru. His ancestors had performed several Soma-yagnas, and Shri Lakshman Bhatt completed 100 Soma-yagnas. Yagnanarayan had been blessed by Lord Vishnu with the promise that, on completion of 100 Soma-yagnas, God himself would incarnate in his family.
Thus, when the 100 Soma-yagnas were complete, Lakshman Bhatt went to Kashi to fulfil his vow of feeding 125,000 Brahmins. He could not complete this task because of political disturbances in Kashi. He took his pregnant wife Yallammagaru southwards and on the way halted at a place called Champaranya. There his wife gave birth to a baby they believed to be stillborn, which they left under a tree before proceeding ahead. That same night Lakshman Bhatt heard a celestial voice ordering him to go back and pick up the baby, which had been mistakenly taken for stillborn. On reaching the spot where they had left the baby, they found it encircled by a divine fire acting as a protecting spirit.
Vallabh was a brilliant child. He finished studying the Vedas and the prominent scriptures at a very early age. At the age of 11 he started his all-India pilgrimage. During this tour he came to Vijaynagar, where he learned of a sensational debate being conducted in the court of King Krishnadevraya. The debate was between the different Acharyas over the question of whether the relationship between the world and God is dualistic or non-dualistic. Vallabh entered the court and, with his unopposed arguments, proved that God is pure and non-dualistic, i.e. Shuddhadwait. His philosophy thenceforth came to be known as Shuddhadwait Brahmvaad. This philosophy was later incorporated in the "Vallabh Digvijay".
During the second pilgrimage, Lord Krishna appeared before him in the form of Lord Shrinathji and ordered him to re-establish Pushti Marg, propagate the pushti kind of devotion among the chosen souls, and bring them back to their original state in God's own domain, i.e. Vaikuntha or Golok-dham. But the question in Shri Vallabh's mind was that the divine souls in this world are highly influenced by the materialistic world, and their souls and bodies have lost the kind of purity needed for reunion with the Supreme entity, i.e. Lord Krishna.
Lord Shrinathji assured him that through "brahmasambandha" (relationship with God), whichever soul is admitted into the Pushti Marg will have all its impurities cease to obstruct its relation with Him, and the soul will become eligible to pursue His bhakti. That was the night of Pavitra Ekadashi (four days before the new moon day) of the auspicious month of Shravana. Lord Shrinathji taught him the Brahmasambandha mantra and asked him to bring the divine souls back to Him.
On the following day Vallabhacharya initiated his first disciple Damodardas Harsani with this mantra along with the principles of Pushtimarga. This was how Pushtimarga was established.[2]
The formal initiation into Pushtimarg is called Brahmasambandha. The absolute and exclusive right to grant "Brahmsambandh" in the path of grace, in order to transform an ordinary jiva (soul) into a Pushti "Jeev", lies only with the descendants of Vallabhacharya, known as Goswami Balaks or Vallabhkul (the word "Goswami" literally means the one who has control over all the senses), whom Vallabh Vaishnavas respectfully and lovingly refer to as "Goswami", "Bawa" or "Jai Jai". They are the actual and direct descendants of Vallabhacharya Mahaprabhu. Goswamis are responsible for the "pushti" (literally, spiritual nourishment) of all the disciples initiated by them.
Brahmasambandha is a process in which, after fasting for one full day (consuming fruits and milk only), one is given the Krishna "Gadhya Mantra" in front of a deity "Swaroop" by a Vallabhkul Goswami, after which tulsi leaves (Indian basil) are offered to the lotus feet of the Lord. The adhikaar (right) to perform daily "seva" comes only after one is initiated into Pushtimarg by a Goswami Balak formally granting Brahmsambandh. Without brahmsambandh one does not hold the right to perform seva of a Pusht Swaroop (deity), the swaroop which showers grace just as it did on the gopis.[2]
Krishna is the chief deity of the sect. Shri Yamunaji is worshiped as his fourth consort (chaturth Patrani) and is the goddess who ordered Shri Vallabhacharya to recite the Shrimad Bhagwat (Shrimad Bhagwat Parayan) near her banks. It is for Shri Yamunaji that Shri Vallabhacharyaji composed the Shri Yamunashtakam.
Several forms/icons of Shri Krishna are worshiped in the sect. Here are the main forms, their description and where they currently reside.
Seva is a key element of worship in Pushti Marg. All followers are expected to offer seva to their personal icon of Krishna. In Pushti Marg, the place where the descendants of Shrimad Vallabhacharyaji reside and perform seva of their own idol of Shri Krishna is called a "haveli", literally a "mansion". Here the seva of Thakurji (Shri Krishna) is performed with the bhaav of the Nandalaya. There is a daily routine of allowing the laity to have "darshan" of (adore) the divine icon eight times a day. The Vallabhkul adorn the icon in keeping with Pushti traditions and follow a colourful calendar of festivals.
Some of the important aspects of Pushtimarg Seva are:
All of the above three are included in the daily seva (devotional service) which all followers of Pushtimarg offer to their Thakurji (personal Krishna deity), and all of them were traditionally prescribed by Goswami Shri Vitthalnathji, also called Gusainji (Vallabhacharya's second son), almost five hundred years ago. The raag, bhog, vastra and shringar offerings vary daily according to the season, the date and the time of day, and this is the main reason why this path is so colourful and alive.
Seva is the most important way to attain Pushti in Pushtimarg and has been prescribed by Vallabhacharya as the fundamental tenet. All principles and tenets of Shuddhadvaita Vaishnavism stem out from here.
Baithak or Bethak, literally "seat", is a site considered sacred by the followers of the Pushtimarg for performing devotional rituals. These sites are spread across India and are chiefly concentrated in the Braj region in Uttar Pradesh and in the western state of Gujarat. In total, 142 baithaks are considered sacred: 84 of Vallabhacharya, 28 of his son Viththalanath Gusainji and 30 of his seven grandsons.
Pushti Marg is famous for its innumerable colourful festivals. The icons (mentioned above) are wonderfully dressed and bejewelled to suit the season and the mood of the festival. All festivals are accompanied by a wonderful vegetarian feast which is offered to the deity and later distributed to the laity. Most festivals mark
The major doctrines consist of the works of Vallabhacharya.
He wrote elaborate commentaries on Sanskrit scriptures, the Brahma-Sutras (Anubhasya[5]), and Shreemad Bhagwatam (Shree Subodhini ji, Tattvarth Dip Nibandh).
Also, in order to help devotees on this path of devotion, he wrote 16 pieces in verse which we know as the Shodasha Granthas. These came about as answers to devotees. The verses define the practical theology of Pushtimarga.
The Shodash Granthas (doctrines) serve as a lighthouse for devotees. They speak about increasing love for Shri Krishna through Seva (service) and Smarana (remembering). These doctrines are Mahaprabhu's way of encouraging and inspiring devotees on this path of grace. The central message of the Shodasha Granthas is total surrender to the Lord. A Goswami can initiate an eager soul onto this path of Shri Krishna's loving devotion and service. The verses explain the types of devotees, the way to surrender and the reward for Seva, as well as other practical instructions. The devotee is nurtured by the Lord's grace.
Apart from the Shodash Granths, Shri Vallabhacharya wrote the following granths (books):
In Pushtimarg there are many Sahityas available for understanding the tenets established by Shri Vallabhacharyaji. These Sahityas are available in many forms, e.g. Granth (e-books), Pravachan (audio and video), Kirtan etc. Since there is an abundance of Sahitya, it can sometimes be difficult to know where to start and which Sahitya is appropriate for acquiring a proper understanding.
Here we are making an attempt to categorize the Sahityas not only to cater to different age groups but also based on the level of the individual.
The subject of Pushtimarg has been explained by scholars of the sect, such as Devarshi Ramanath Shastri, who has expounded the tenets of this path in his books 'Pushtimargiya Swaroop Sewa'[6] and 'Pushtimargiya Nityasewa Smaran'.[7]
Who are the surviving members of lynyrd skynyrd?
Johnny Van Zant🚨Lynyrd Skynyrd (/ˌlɛnərd ˈskɪnərd/ LEN-ərd-SKIN-ərd)[2] is an American rock band best known for having popularized the Southern rock genre during the 1970s. Originally formed in 1964 as My Backyard in Jacksonville, Florida, the band was also known by names such as The Noble Five and One Percent, before finally deciding on "Lynyrd Skynyrd" in 1969. The band gained worldwide recognition for its live performances and signature songs "Sweet Home Alabama" and "Free Bird". At the peak of their success, band members Ronnie Van Zant and Steve Gaines, and backup singer Cassie Gaines, died in an airplane crash in 1977, putting an abrupt end to the 1970s era of the band.
The surviving band members re-formed in 1987 for a reunion tour with lead vocalist Johnny Van Zant, the younger brother of Ronnie Van Zant. Lynyrd Skynyrd continues to tour and record with co-founder Gary Rossington, Johnny Van Zant, and Rickey Medlocke, who first wrote and recorded with the band from 1971 to 1972 before his return in 1996. Fellow founding member Larry Junstrom, along with 1970s members Ed King and Artimus Pyle, remain active in music but no longer tour or record with the band. Michael Cartellone has recorded and toured with the band since 1999.
Lynyrd Skynyrd has sold 28 million records in the United States. They were inducted into the Rock and Roll Hall of Fame on March 13, 2006.[3] On January 25, 2018, Lynyrd Skynyrd announced their farewell tour.[4]
In the summer of 1964, teenage friends Ronnie Van Zant, Bob Burns, Allen Collins, Gary Rossington, and Larry Junstrom formed the earliest incarnation of the band in Jacksonville, Florida as My Backyard. The band then changed its name to The Noble Five.[5] The band used different names before using One Percent during 1968.[5]
In 1969, Van Zant sought a new name. The group settled on Leonard Skinnerd, a mocking tribute to physical education teacher Leonard Skinner at Robert E. Lee High School.[6] Skinner was notorious for strictly enforcing the school's policy against boys having long hair.[7] Rossington dropped out of school, tired of being hassled about his hair.[8] The more distinctive spelling "Lynyrd Skynyrd" was being used at least as early as 1970. Despite their high school acrimony, the band developed a friendlier relationship with Skinner in later years, and invited him to introduce them at a concert in the Jacksonville Memorial Coliseum.[9] Skinner also allowed the band to use a photo of his Leonard Skinner Realty sign for the inside of their third album.[10]
By 1970, Lynyrd Skynyrd had become a top band in Jacksonville, headlining at some local concerts, and opening for several national acts. Pat Armstrong, a Jacksonville native and partner in Macon, Georgia-based Hustlers Inc. with Phil Walden's younger brother, Alan Walden, became the band's managers. Armstrong left Hustlers shortly thereafter to start his own agency. Walden stayed with the band until 1974, when management was turned over to Peter Rudge. The band continued to perform throughout the South in the early 1970s, further developing their hard-driving blues rock sound and image, and experimenting with recording their sound in a studio. Skynyrd crafted this distinctively "southern" sound through a creative blend of blues, and a slight British rock influence.[11]
During this time, the band experienced some lineup changes for the first time. Junstrom left and was briefly replaced by Greg T. Walker on bass. At that time, Ricky Medlocke joined as a second drummer and occasional second vocalist to help fortify Burns' sound on the drums. Medlocke grew up with the founding members of Lynyrd Skynyrd and his grandfather Shorty Medlocke was an influence in the writing of "The Ballad of Curtis Loew". Some versions of the band's history state Burns briefly left the band during this time,[12] although other versions state that Burns played with the band continuously through 1974. [13] The band played some shows with both Burns and Medlocke, using a dual-drummer approach. In 1971, they made some recordings at the famous Muscle Shoals Sound Studio with Walker and Medlocke serving as the rhythm section, but without the participation of Burns.[citation needed] Medlocke and Walker left the band to play with another southern rock band, Blackfoot. When Lynyrd Skynyrd made a second round of Muscle Shoals recordings in 1972, Burns was once again featured on drums along with new bassist, Leon Wilkeson. Medlocke and Walker did not appear on any album until the 1978 release of First and... Last, which compiled the early Muscle Shoals sessions. Also in 1972, roadie Billy Powell became the keyboardist for the band.
In 1972, the band (then comprising Van Zant, Collins, Rossington, Burns, Wilkeson, and Powell) was discovered by musician, songwriter, and producer Al Kooper of Blood, Sweat & Tears, who had attended one of their shows at Funocchio's in Atlanta. Kooper signed them to his Sounds of the South label that was to be distributed and supported by MCA Records, and produced their first album. Wilkeson, citing nervousness about fame, temporarily left the band during the early recording sessions for the album, only playing on two tracks. He rejoined the band shortly after the album's release at Van Zant's invitation[citation needed] and is pictured on the album cover. To replace him, Strawberry Alarm Clock guitarist Ed King joined the band and played bass on the album (the only part which Wilkeson had not already written being the solo section in "Simple Man"), and also contributed to the songwriting and did some guitar work on the album. After Wilkeson rejoined, King stayed in the band and switched solely to guitar, allowing the band to replicate its three-guitar studio mix in live performances. Released on August 13, 1973,[14] the self-titled album with the subtitle "Pronounced Leh-nerd Skin-nerd" sold over one million copies, and was awarded a gold disc by the RIAA.[15] The album featured the hit song "Free Bird", which received national airplay,[16] eventually reaching No. 19 on the Billboard Hot 100 chart.[citation needed]
Lynyrd Skynyrd's fan base continued to grow rapidly throughout 1973, largely due to their opening slot on the Who's Quadrophenia tour in the United States. Their 1974 follow-up, Second Helping, featuring King, Collins and Rossington all collaborating with Van Zant on the songwriting, cemented the band's breakthrough. Its single, "Sweet Home Alabama", a response to Neil Young's "Southern Man", reached #8 on the charts that August. (Young and Van Zant were not rivals, but fans of each other's music and good friends; Young wrote the song "Powderfinger" for the band, but they never recorded it.)[17] During their peak years, each of their records sold over one million copies, but "Sweet Home Alabama" was the only single to crack the top ten.[18] The Second Helping album reached No. 12 in 1974, eventually going multi-platinum. In July of that year, Lynyrd Skynyrd was one of the headline acts at The Ozark Music Festival held at the Missouri State Fairgrounds in Sedalia, Missouri.[citation needed]
In January 1975, drummer Burns left the band and was replaced by Kentucky native Artimus Pyle. The band's third album, Nuthin' Fancy, was recorded in 17 days.[19] Kooper and the band parted by mutual agreement before its release, with Kooper left with the tapes to complete the mix.[20] It had lower sales than its predecessor. Midway through its tour, Ed King left the band, citing tour exhaustion. In January 1976, backup singers Leslie Hawkins, Cassie Gaines and JoJo Billingsley (collectively known as The Honkettes) were added, although they were not considered official members. Lynyrd Skynyrd's fourth album Gimme Back My Bullets was released, but did not achieve the same success as the previous two albums. Van Zant and Collins both felt that the band was seriously missing the three-guitar attack that had been one of its early hallmarks. Although Skynyrd auditioned several guitarists, including such high-profile names as Leslie West, its search continued until Cassie Gaines began touting the guitar and songwriting prowess of her younger brother, Steve.
The junior Gaines, who led his own band, Crawdad (which occasionally would perform Skynyrd's "Saturday Night Special" in their set), was invited to audition onstage with Skynyrd at a concert in Kansas City on May 11, 1976. Liking what they heard, the group also jammed informally with the Oklahoma native several times, then invited him into the group in June. With Gaines on board, the newly reconstituted band recorded the double-live album One More from the Road at the Fox Theatre (Atlanta, Georgia) in Atlanta, and performed at the Knebworth festival, which also featured the Rolling Stones.[citation needed] Coincidentally, Gaines shared the same birth date as his predecessor Ed King.[21]
Both Collins and Rossington had serious car accidents over Labor Day weekend in 1976, which slowed the recording of the follow-up album and forced the band to cancel some concert dates. Rossington's accident inspired the ominous "That Smell", a cautionary tale about drug abuse that was clearly aimed towards him and at least one other band member. Rossington has admitted repeatedly that he was the "Prince Charming" of the song who crashed his car into an oak tree while drunk and stoned on Quaaludes. Van Zant, at least, was making a serious attempt to clean up his act and curtail the cycle of boozed-up brawling that was part of Skynyrd's reputation.[citation needed]
1977's Street Survivors turned out to be a showcase for guitarist/vocalist Steve Gaines, who had joined the band just a year earlier and was making his studio debut with them. Publicly and privately, Ronnie Van Zant marveled at the multiple talents of Skynyrd's newest member, claiming that the band would "all be in his shadow one day".[22] Gaines' contributions included his co-lead vocal with Van Zant on the co-written "You Got That Right" and the rousing guitar boogie "I Know a Little", which he had written before he joined Skynyrd. So confident was Skynyrd's leader of Gaines' abilities that the album (and some concerts) featured Gaines delivering his self-penned bluesy "Ain't No Good Life", the only song in the pre-crash Skynyrd catalog to feature a lead vocalist other than Ronnie Van Zant. The album also included the hit singles "What's Your Name" and "That Smell". The band was poised for their biggest tour yet, with shows always highlighted by the iconic rock anthem "Free Bird".[23] In November, the band was scheduled to fulfill Van Zant's lifelong dream of headlining New York's Madison Square Garden.[citation needed]
Following a performance at the Greenville Memorial Auditorium in Greenville, South Carolina, on October 20, 1977, the band boarded a chartered Convair CV-240 bound for Baton Rouge, Louisiana, where they were scheduled to appear at LSU the following night. After running out of fuel they attempted an emergency landing before crashing in a heavily forested area five miles northeast of Gillsburg, Mississippi.[24][25] Ronnie Van Zant and Steve Gaines, along with backup singer Cassie Gaines (Steve's older sister), assistant road manager Dean Kilpatrick, pilot Walter McCreary, and co-pilot William Gray were killed on impact; other band members (Collins, Rossington, Wilkeson, Powell, Pyle, and Hawkins), tour manager Ron Eckerman,[26] and several road crew suffered serious injuries.
The accident came just three days after the release of Street Survivors. Following the crash and the ensuing press, Street Survivors became the band's second platinum album and reached No. 5 on the U.S. album chart. The single "What's Your Name" reached No. 13 on the singles chart in 1978. The original cover sleeve for Street Survivors had featured a photograph of the band, particularly Steve Gaines, engulfed in flames. Out of respect for the deceased (and at the request of Teresa Gaines, Steve's widow), MCA Records withdrew the original cover and replaced it with the album's back photo, a similar image of the band against a simple black background.[27] Thirty years later, for the deluxe CD version of Street Survivors, the original "flames" cover was restored.
Lynyrd Skynyrd disbanded after the tragedy, reuniting only on one occasion to perform an instrumental version of "Free Bird" at Charlie Daniels' Volunteer Jam V in January 1979. Collins, Rossington, Powell, and Pyle performed the song with Charlie Daniels and members of his band. Leon Wilkeson, who was still undergoing physical therapy for his badly broken left arm, was in attendance, along with Judy Van Zant, Teresa Gaines, JoJo Billingsley, and Leslie Hawkins.[citation needed]
Rossington, Collins, Wilkeson and Powell formed the Rossington-Collins Band, which released two albums in 1980 and 1981. Deliberately avoiding comparisons with Ronnie Van Zant as well as suggestions that this band was Lynyrd Skynyrd reborn, Rossington and Collins chose a woman, Dale Krantz, as the lead vocalist. However, as an acknowledgement of their past, the band's concert encore would always be an instrumental version of "Free Bird". Rossington and Collins eventually had a falling out over the affections of Dale Krantz, whom Rossington married and with whom he formed the Rossington Band, which released two albums in the late 1980s and opened for the Lynyrd Skynyrd Tribute Tour in 1987–1988.[citation needed]
The other former members of Lynyrd Skynyrd continued to make music during the hiatus era. Billy Powell played keyboards in a Christian rock band named Vision, touring with established Christian rocker Mylon LeFevre. During Vision concerts, Powell's trademark keyboard talent was often spotlighted and he spoke about his conversion to Christianity after the near-fatal plane crash. Pyle formed the Artimus Pyle Band in 1982, which occasionally featured former Honkettes JoJo Billingsley and Leslie Hawkins.[28][citation needed]
In 1980, Allen Collins' wife Kathy died of a massive hemorrhage while miscarrying their third child. He formed the Allen Collins Band in 1983 from the remnants of the Rossington-Collins Band and released a single studio album. He was visibly suffering from Kathy's death; he excessively drank and consumed drugs. On January 29, 1986, Collins, then 33, crashed his Ford Thunderbird into a ditch near his home in Jacksonville, killing his girlfriend Debra Jean Watts and leaving himself permanently paralyzed from the chest down.[29]
In 1987, Lynyrd Skynyrd reunited for a full-scale tour with five major members of the pre-crash band: crash survivors Gary Rossington, Billy Powell, Leon Wilkeson and Artimus Pyle, along with guitarist Ed King, who had left the band two years before the crash. Ronnie Van Zant's younger brother, Johnny, took over as the new lead singer and primary songwriter. Because founding member Allen Collins was paralyzed from his 1986 car accident, he was only able to participate as the musical director, choosing Randall Hall, his former bandmate in the Allen Collins Band, as his stand-in. In return for avoiding prison following his guilty plea to DUI manslaughter, Collins would be wheeled out onstage each night to explain to the audience why he could no longer perform (usually before the performance of "That Smell", which had been partially directed at him). Collins was stricken with pneumonia in 1989 and died on January 23, 1990, leaving Rossington as the only remaining founding member.[citation needed]
The reunited band was intended to be a one-time tribute to the original lineup, captured on the double-live album Southern by the Grace of God: Lynyrd Skynyrd Tribute Tour 1987. The band's decision to continue after the 1987 tribute tour caused legal problems for the survivors, as Judy Van Zant Jenness and Teresa Gaines Rapp (widows of Ronnie and Steve, respectively) sued the others for violating an agreement made shortly after the plane crash, stating that they would not "exploit" the Skynyrd name for profit. As part of the settlement, Jenness and Rapp collect nearly 30% of the band's touring revenues (representing the shares their husbands would have earned had they lived), and hold a proviso requiring any band touring as Lynyrd Skynyrd to include Rossington and at least two of the other four surviving members from the pre-crash era, namely Wilkeson, Powell, King and Pyle.[30] However, continued lineup changes and natural attrition have led to a relaxation of this requirement in recent years.
The band released its first post-reunion album in 1991, entitled Lynyrd Skynyrd 1991. By that time, the band had added a second drummer, Kurt Custer. Artimus Pyle left the band during the same year, with Custer becoming the band's sole drummer. That lineup released a second post-reunion album, entitled The Last Rebel, in 1993. Later that year, Randall Hall was replaced by Mike Estes. The band's third post-reunion album, Endangered Species, featured mostly acoustic instrumentation and remakes of many of the band's classic 1970s songs. On the album, Kurt Custer was replaced by Owen Hale on drums.
Ed King had to take a break from touring in 1996 due to heart complications that required a transplant. In his absence, he was replaced by Hughie Thomasson. The band did not let King rejoin after he recovered.[31] At the same time, Mike Estes was replaced by Rickey Medlocke, who had previously played and recorded with the band for a short time in the early 1970s. The result was a major retooling of the band's 'guitar army.' Medlocke and Thomasson would also become major contributors to the band's songwriting, along with Rossington and Van Zant.
The first album with this new lineup, released in 1997, was entitled Twenty. The band released another album, Edge of Forever, in 1999. By that time, Hale had left the band, and the drums on the album were played by session drummer Kenny Aronoff. Michael Cartellone became the band's permanent drummer on the subsequent tour. Despite the growing number of post-reunion albums the band had released up to this time, setlists showed that the band was playing mostly 1970s-era material in concert.
The band released a Christmas album, entitled Christmas Time Again in 2000. Leon Wilkeson, Skynyrd's bassist since 1972, was found dead in his hotel room on July 27, 2001; his death was found to be due to emphysema and chronic liver disease. He was replaced in 2001 by Ean Evans.[32] Wilkeson's death resulted in the band having only two remaining members from the classic pre-crash lineups.
The first album to feature Evans was Vicious Cycle, released in 2003. This album had improved sales over the other post-reunion albums, and had a minor hit single in the song "Red, White and Blue". The band also released a double collection album called Thyrty, which had songs from the original lineup to the present, as well as a live DVD of their Vicious Cycle Tour and, on June 22, 2004, the album Lynyrd Skynyrd Lyve: The Vicious Cycle Tour.
On December 10, 2004, the band did a show for CMT Crossroads, a concert series featuring country duo Montgomery Gentry alongside artists from other genres of music. At the beginning of 2005, Hughie Thomasson left the band to re-form his Southern rock band the Outlaws. Thomasson died in his sleep on September 9, 2007 of an apparent heart attack at his home in Brooksville, Florida; he was 55 years old.[citation needed]
On February 5, 2005, Lynyrd Skynyrd did a Super Bowl party in Jacksonville with special guests 3 Doors Down, Jo Dee Messina, Charlie Daniels and Ronnie and Johnny Van Zant's brother Donnie Van Zant of 38 Special. On February 13 of that year Lynyrd Skynyrd did a tribute to Southern Rock on the Grammy Awards with Gretchen Wilson, Tim McGraw, Keith Urban and Dickey Betts. In the summer of 2005, lead singer Johnny Van Zant had to have surgery on his vocal cord to have a polyp removed. He was told not to sing for three months. On September 10, 2005, Lynyrd Skynyrd performed without Johnny Van Zant at the Music Relief Concert for the victims of Hurricane Katrina, with Kid Rock standing in for Johnny. In December 2005, Johnny Van Zant returned to sing for Lynyrd Skynyrd. The band performed live at Freedom Hall in Louisville, Kentucky, as a part of their 2007 tour. The concert was recorded in high definition for HDNet and premiered on December 1, 2007.[citation needed]
Mark "Sparky" Matejka, formerly of the country music band Hot Apple Pie, joined Lynyrd Skynyrd in 2006 as Thomasson's replacement. On November 2, 2007, the band performed for a crowd of 50,000 people at the University of Florida's Gator Growl student-run pep rally in Ben Hill Griffin Stadium ("The Swamp" football stadium). This was the largest crowd that Lynyrd Skynyrd had played to in the U.S., until the July 2008 Bama Jam in Enterprise, Alabama where more than 111,000 people attended.[33]
On January 28, 2009, keyboardist Billy Powell died of a suspected heart attack at age 56 at his home near Jacksonville, Florida. No autopsy was carried out. He was replaced by Peter Keys.[34] Powell's death left co-founding member Gary Rossington as the only remaining current member tied to the popular 1970s-era lineups of the band, though guitarist Rickey Medlocke did briefly drum for the band in its pre-fame early 1970s period.
On March 17, 2009, it was announced that Skynyrd had signed a worldwide deal with Roadrunner Records, in association with their label, Loud & Proud Records, and released their new album God & Guns on September 29 of that year. They toured Europe and the U.S. in 2009 with Keys on keyboards and Robert Kearns of the Bottle Rockets on bass; bassist Ean Evans died of cancer at age 48 on May 6, 2009.[32] Scottish rock band Gun performed as special guests for the UK leg of Skynyrd's tour in 2010.[35]
In addition to the tour, Skynyrd appeared at the Sean Hannity Freedom Concert series in late 2010. Hannity had been actively promoting the God & Guns album, frequently playing portions of the track "That Ain't My America" on his radio show. The tour was titled "Rebels and Bandoleros". The band continued to tour throughout 2011, playing alongside ZZ Top and the Doobie Brothers.[36]
On May 2, 2012, the band announced the impending release of a new studio album, Last of a Dyin' Breed, along with a North American and European tour.[37] On August 21, 2012, Last of a Dyin' Breed was released. In celebration, the band did four autograph signings throughout the southeast.[38] Lynyrd Skynyrd have used a Confederate flag since the 1970s, and several criticisms have been raised against them because of this.[39][40] While promoting the album on CNN on September 9, 2012, members of the band talked about discontinuing its use of Confederate imagery.[41] In September 2012, the band briefly stopped displaying the Confederate flag, which had for years been a part of their stage show, because they did not want to be associated with racists who had adopted the flag. However, after protests from fans, they reversed this decision, citing the flag as part of their Southern American heritage and a symbol of states' rights.[42]
Original drummer Bob Burns died aged 64 on April 3, 2015; his car crashed into a tree while he was driving alone near his home in Cartersville, Georgia.[43] From 2015 through 2017, the band had periods of being sidelined or having to cancel shows due to health problems suffered by founding member Gary Rossington.[44]
On January 25, 2018, Lynyrd Skynyrd announced their Last of the Street Survivors Farewell Tour, which will start on May 4, 2018. Supporting acts include Kid Rock, Hank Williams Jr., Bad Company, The Charlie Daniels Band, The Marshall Tucker Band, .38 Special, Blackberry Smoke, and Blackfoot.[4]
In 2004, Rolling Stone magazine ranked the group No. 95 on their list of the "100 Greatest Artists of All Time".[45][46]
On November 28, 2005, the Rock and Roll Hall of Fame announced that Lynyrd Skynyrd would be inducted alongside Black Sabbath, Blondie, Miles Davis, and the Sex Pistols.[47] They were inducted in the Waldorf Astoria Hotel in Manhattan on March 13, 2006 during the Hall's 21st annual induction ceremony. The inductees included Ronnie Van Zant, Allen Collins, Gary Rossington, Ed King, Steve Gaines, Billy Powell, Leon Wilkeson, Bob Burns, and Artimus Pyle (post-crash members, the Honkettes, and pre-crash members Rickey Medlocke, Larry Junstrom, and Greg T. Walker, were not inducted). The current version of Skynyrd, augmented by King, Pyle, Burns, and former Honkettes JoJo Billingsley and Leslie Hawkins, performed "Sweet Home Alabama" and "Free Bird" at the ceremony, which was also attended by Judy Van Zant Jenness and Ronnie's two daughters, Teresa Gaines Rapp and her daughter Corinna, Allen Collins' daughters, and Leon Wilkeson's mother and son.
On April 4, 2017, it was announced that a biopic, Street Survivors, had cast its leads and would begin shooting at month's end.[52]
What teams are in the fa cup final?
Manchester United🚨The 2018 FA Cup Final was the final match of the 2017–18 FA Cup and the 137th final of the FA Cup, the world's oldest football cup competition. It was played at Wembley Stadium in London, England[3] on 19 May 2018 between Manchester United and Chelsea. It was the second successive final for Chelsea following their defeat by Arsenal the previous year.
As winners, Chelsea qualified for the group stage of the 2018–19 UEFA Europa League, although they had qualified for that phase already via their league position.[4][5] Chelsea also earned the right to play 2017–18 Premier League champions Manchester City for the 2018 FA Community Shield.
The teams had met twice before in the FA Cup Final, winning one each. The first was in 1994, which Manchester United won 4–0; the most recent was in 2007, when Chelsea, then managed by current Manchester United boss José Mourinho, won 1–0 after extra time.
On 24 April 2018, it was announced that Michael Oliver would officiate the match. It was notable for being the first Final to use the video assistant referee (VAR) system.[6] As President of the Football Association, Prince William, Duke of Cambridge would normally attend the final, presenting the trophy to the winning captain at the conclusion of the game. In 2018 however, the final was scheduled for the same day as his brother's wedding, for which he was serving as best man.[7] It was announced on 15 May 2018 that the trophy would be presented by Jackie Wilkins, the widow of former Manchester United and Chelsea player Ray Wilkins, who died in April 2018.[8]
In all results below, the score of the finalist is given first.
Chelsea, as a Premier League team, entered in the third round of the FA Cup against Championship club Norwich City on 7 January at Carrow Road, which ended in a 0–0 draw. The replay was held 11 days later and was a 1–1 draw, with goals from Michy Batshuayi in the 55th minute and Jamal Lewis in the 90th minute, going to extra time and then a penalty shootout that Chelsea won 5–3.[9] In the fourth round, they met Premier League opposition Newcastle United at Stamford Bridge, and won 3–0 with two goals from Batshuayi and one from Marcos Alonso.[10] In the fifth round, Chelsea faced another Championship team, Hull City, and won 4–0 with Willian (2), Pedro and Olivier Giroud scoring.[11] In the quarter-finals, they visited the King Power Stadium to face Leicester City, where a goal for Chelsea from Álvaro Morata and a Leicester equaliser from Jamie Vardy took the game to extra time, in which substitute Pedro came off the bench and scored the winning goal to take Chelsea to the semi-finals at Wembley to face Southampton.[11] In that match, goals from strikers Giroud and Morata were enough to see off Southampton 2–0 and bring Chelsea back to the FA Cup Final for the second successive season. Chelsea were seeking their eighth FA Cup triumph.[12]
In all results below, the score of the finalist is given first.
Manchester United, also of the Premier League, entered in the third round and were drawn at home against Championship side Derby County, winning 2–0 with goals from Jesse Lingard and Romelu Lukaku.[13] In the next round, they were drawn against League Two club Yeovil Town at Huish Park and won 4–0 with goals from Marcus Rashford, Ander Herrera, Jesse Lingard and Lukaku.[14] In the fifth round, Manchester United were drawn with fellow Premier League side Huddersfield Town away at Kirklees Stadium, and Lukaku scored both goals in a 2–0 win.[15] In the quarter-finals, they played Premier League side Brighton & Hove Albion at Old Trafford, and progressed due to goals by Lukaku and Nemanja Matić.[16] In the semi-final at Wembley Stadium against Premier League club Tottenham Hotspur, Manchester United reached their 20th final after coming from behind to win 2–1 with goals from Alexis Sánchez and Herrera.[17] Manchester United were looking to match Arsenal's record of 13 FA Cup wins.[18]
Tributes to former Chelsea and Manchester United midfielder Ray Wilkins, who died on 4 April, were paid before the match on the hoardings and screens in the stadium, as well as in a feature in the match programme.[19] Wilkins won the FA Cup with both sides, scoring for United in the initial 1983 FA Cup Final tie, and winning it three times as assistant manager of Chelsea, in 2000, 2009 and 2010.[20] The traditional hymn "Abide with Me" was performed by the choir of the Royal Air Force and 14 individual fans of 14 different clubs, with a flypast by three RAF Typhoons.[21] The national anthem, "God Save the Queen", was performed by soprano Emily Haig, who also performed it at the 2018 FA Women's Cup Final.[22]
The referee for the match was 33-year-old Michael Oliver from Northumberland. He previously officiated when these two sides met in the previous season's quarter-final, in which he sent off Manchester United's Ander Herrera. His assistants were Ian Hussin from Liverpool and Lee Betts from Norfolk. The fourth official was Lee Mason of Lancashire, and the fifth official was Constantine Hatzidakis from Kent. For the first time in the final, there was a Video Assistant Referee (VAR), Neil Swarbrick of Lancashire. His assistant was Mick McDonough, also of Northumberland.[2]
After 21 minutes, Chelsea's Eden Hazard entered Manchester United's penalty area, where he was fouled by their defender Phil Jones. Hazard took the penalty kick himself, sending it past United goalkeeper David de Gea for the only goal of the game.[23]
In the second half, Manchester United had more opportunities, forcing Chelsea goalkeeper Thibaut Courtois to make saves. Alexis Sánchez had a potential equaliser ruled out for offside after VAR confirmed the decision was correct, and Paul Pogba missed a header late in the match.[23]
It was the first time that José Mourinho had lost a cup final in charge of an English team, at the seventh attempt, and the first domestic cup won by Antonio Conte, who had lost the 2012 Coppa Italia Final with Juventus and the 2017 FA Cup Final with Chelsea.[23]
BBC Sport correspondent Phil McNulty highlighted the performances of the Chelsea players Hazard, Courtois, N'Golo Kanté, Gary Cahill and Antonio Rüdiger, while criticising the United trio of Jones, Sánchez and Pogba.[23]
Man of the Match:
Antonio Rüdiger (Chelsea)[1]
Assistant referees:[2]
Ian Hussin (Liverpool)
Lee Betts (Norfolk)
Fourth official:[2]
Lee Mason (Lancashire)
Fifth official:[2]
Constantine Hatzidakis (Kent)
Video assistant referee:[2]
Neil Swarbrick (Lancashire)
Assistant video assistant referee:[2]
Mick McDonough (Northumberland)
When did north carolina officially become a state?
November 21, 1789🚨
Who is the first lady governer of india?
Sarojini Naidu🚨In India, a governor is the constitutional head of each of the twenty-nine states. The governor is appointed by the President of India for a term of five years, and holds office at the President's pleasure. The governor is the de jure head of the state government; all its executive actions are taken in the governor's name. However, the governor must act on the advice of the popularly elected council of ministers, headed by the chief minister, which thus holds de facto executive authority at the state level. The Constitution of India also empowers the governor to act on his or her own discretion, such as the ability to appoint or dismiss a ministry, recommend President's rule, or reserve bills for the President's assent. Over the years, the exercise of these discretionary powers has given rise to conflict between the elected chief minister and the central government-appointed governor.[1] The union territories of Andaman and Nicobar, Delhi and Puducherry are headed by lieutenant-governors.
Sarojini Naidu was the first female to become the governor of an Indian state.[2] She governed Uttar Pradesh from 15 August 1947 to 2 March 1949. Her daughter, Padmaja Naidu, is the longest-serving governor with 11-year tenure in West Bengal.[3] Draupadi Murmu was recently appointed as the first woman governor of Jharkhand.[4]
Where is the gulf stream ocean current located?
originates in the Gulf of Mexico and stretches to the tip of Florida, and follows the eastern coastlines of the United States and Newfoundland before crossing the Atlantic Ocean🚨
The Gulf Stream, together with its northern extension the North Atlantic Drift, is a warm and swift Atlantic ocean current that originates in the Gulf of Mexico and stretches to the tip of Florida, and follows the eastern coastlines of the United States and Newfoundland before crossing the Atlantic Ocean. The process of western intensification causes the Gulf Stream to be a northward-accelerating current off the east coast of North America. At about 40°N 30°W, it splits in two, with the northern stream, the North Atlantic Drift, crossing to Northern Europe and the southern stream, the Canary Current, recirculating off West Africa.
The Gulf Stream influences the climate of the east coast of North America from Florida to Newfoundland, and the west coast of Europe. Although there has been recent debate, there is consensus that the climate of Western Europe and Northern Europe is warmer than it would otherwise be due to the North Atlantic drift which is the northeastern section of the Gulf Stream. It is part of the North Atlantic Gyre. Its presence has led to the development of strong cyclones of all types, both within the atmosphere and within the ocean. The Gulf Stream is also a significant potential source of renewable power generation.[citation needed] The Gulf Stream may be slowing down as a result of climate change.
The Gulf Stream is typically 100 kilometres (62 mi) wide and 800 metres (2,600 ft) to 1,200 metres (3,900 ft) deep. The current velocity is fastest near the surface, with the maximum speed typically about 2.5 metres per second (5.6 mph).
European discovery of the Gulf Stream dates to the 1512 expedition of Juan Ponce de León, after which it became widely used by Spanish ships sailing from the Caribbean to Spain.[1] A summary of Ponce de León's voyage log, on April 22, 1513, noted, "A current such that, although they had great wind, they could not proceed forward, but backward and it seems that they were proceeding well; at the end it was known that the current was more powerful than the wind."[2] Its existence was also known to Peter Martyr d'Anghiera.
Benjamin Franklin became interested in the North Atlantic Ocean circulation patterns. In 1768, while in England, Franklin heard a curious complaint from the Colonial Board of Customs: Why did it take British packets several weeks longer to reach New York from England than it took an average American merchant ship to reach Newport, Rhode Island, despite the merchant ships leaving from London and having to sail down the River Thames and then the length of the English Channel before they sailed across the Atlantic, while the packets left from Falmouth in Cornwall?[3]
Franklin asked Timothy Folger, his cousin twice removed (Nantucket Historical Society), a Nantucket Island whaling captain, for an answer. Folger explained that merchant ships routinely crossed the then-unnamed Gulf Stream (identifying it by whale behavior, measurement of the water's temperature, and changes in the water's color) while the mail packet captains ran against it.[3] Franklin had Folger sketch the path of the Gulf Stream on an old chart of the Atlantic and add written notes on how to avoid the Stream when sailing from England to America. Franklin then forwarded the chart to Anthony Todd, secretary of the British Post Office.[3] Franklin's Gulf Stream chart was printed in 1769 in London, but it was mostly ignored by British sea captains.[4] A copy of the chart was printed in Paris circa 1770-1773, and a third version was published by Franklin in Philadelphia in 1786.[5][6] The inset in the upper left part of the 1786 chart is an illustration of the migration pattern of herring and not an ocean current.
The Gulf Stream proper is a western-intensified current, driven largely by wind stress.[7] The North Atlantic Drift, in contrast, is driven largely by thermohaline circulation. In 1958 the oceanographer Henry Stommel noted that "very little water from the Gulf of Mexico is actually in the Stream".[8] By carrying warm water northeast across the Atlantic, it makes Western and especially Northern Europe warmer than it otherwise would be.[9]
However, the extent of its contribution to the actual temperature differential between North America and Europe is a matter of dispute, as there is a recent minority opinion within the science community that this temperature difference (beyond that caused by contrasting maritime and continental climates) is mainly due to atmospheric waves created by the Rocky Mountains.[10]
A river of sea water, called the Atlantic North Equatorial Current, flows westward off the coast of Central Africa. When this current interacts with the northeastern coast of South America, the current forks into two branches. One passes into the Caribbean Sea, while a second, the Antilles Current, flows north and east of the West Indies.[11] These two branches rejoin north of the Straits of Florida.
The trade winds blow westward in the tropics,[12] and the westerlies blow eastward at mid-latitudes.[13] This wind pattern applies a stress to the subtropical ocean surface with negative curl across the north Atlantic Ocean.[14] The resulting Sverdrup transport is equatorward.[15]
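Written out, the standard Sverdrup relation (a textbook form, not taken from this article) links the curl of the wind stress τ to the depth-integrated meridional transport per unit width V:

\[
\beta V = \frac{1}{\rho_0}\,\hat{\mathbf{z}}\cdot\left(\nabla \times \boldsymbol{\tau}\right),
\qquad \beta = \frac{\partial f}{\partial y},
\]

so the negative wind-stress curl over the subtropical North Atlantic gives V < 0, i.e. equatorward transport in the ocean interior, as stated above.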
Because of conservation of potential vorticity caused by the northward-moving winds on the subtropical ridge's western periphery and the increased relative vorticity of northward moving water, transport is balanced by a narrow, accelerating poleward current, which flows along the western boundary of the ocean basin, outweighing the effects of friction with the western boundary current known as the Labrador current.[16] The conservation of potential vorticity also causes bends along the Gulf Stream, which occasionally break off due to a shift in the Gulf Stream's position, forming separate warm and cold eddies.[17] This overall process, known as western intensification, causes currents on the western boundary of an ocean basin, such as the Gulf Stream, to be stronger than those on the eastern boundary.[18]
As a consequence, the resulting Gulf Stream is a strong ocean current. It transports water at a rate of 30 million cubic meters per second (30 sverdrups) through the Florida Straits. As it passes south of Newfoundland, this rate increases to 150 million cubic metres per second.[19] The volume of the Gulf Stream dwarfs all rivers that empty into the Atlantic combined, which barely total 0.6 million cubic metres per second. It is weaker, however, than the Antarctic Circumpolar Current.[20] Given the strength and proximity of the Gulf Stream, beaches along the East Coast of the United States may be more vulnerable to large sea-level anomalies, which significantly impact rates of coastal erosion.[21]
The Gulf Stream is typically 100 kilometres (62 mi) wide and 800 metres (2,600 ft) to 1,200 metres (3,900 ft) deep. The current velocity is fastest near the surface, with the maximum speed typically about 2.5 metres per second (5.6 mph).[22] As it travels north, the warm water transported by the Gulf Stream undergoes evaporative cooling. The cooling is wind-driven: wind moving over the water causes evaporation, cooling the water and increasing its salinity and density. When sea ice forms, salts are left out of the ice, a process known as brine exclusion.[23] These two processes produce water that is denser and colder (or, more precisely, water that is still liquid at a lower temperature). In the North Atlantic Ocean, the water becomes so dense that it begins to sink down through less salty and less dense water. (The convective action is not unlike that of a lava lamp.) This downdraft of cold, dense water becomes a part of the North Atlantic Deep Water, a southgoing stream.[24] Very little seaweed lies within the current, although seaweed lies in clusters to its east.[25]
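As a back-of-envelope check (the 0.3 m/s section-averaged speed below is an assumption for illustration; only the width, depth, transport and river figures come from the text above), multiplying a typical cross-section by a modest mean speed reproduces the roughly 30 Sv Florida Straits transport and shows how far it outstrips the combined river input:

```python
# Rough order-of-magnitude check of the Gulf Stream transport figures quoted above.
WIDTH_M = 100e3      # typical width, m ("typically 100 kilometres")
DEPTH_M = 1000.0     # typical depth, m (the text gives 800-1,200 m)
MEAN_SPEED = 0.3     # assumed section-averaged speed, m/s (the 2.5 m/s figure is a surface maximum)
SVERDRUP = 1e6       # 1 Sv = 10^6 m^3/s

transport = WIDTH_M * DEPTH_M * MEAN_SPEED   # volume transport, m^3/s
rivers = 0.6e6                               # combined Atlantic river inflow, m^3/s (from the text)

print(f"Estimated transport: {transport / SVERDRUP:.0f} Sv")                   # ~30 Sv
print(f"Compared with all Atlantic rivers: {transport / rivers:.0f}x larger")  # ~50x
```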
In April 2018, two studies published in Nature [26][27] found the Gulf Stream to be at its weakest for at least 1,600 years.[28]
The Gulf Stream is influential on the climate of the Florida peninsula. The portion off the Florida coast, referred to as the Florida Current, maintains an average water temperature at or above 24 °C (75 °F) during the winter.[29] East winds moving over this warm water move warm air from over the Gulf Stream inland,[30] helping to keep temperatures milder across the state than elsewhere across the Southeast during the winter. Also, the Gulf Stream's proximity to Nantucket, Massachusetts adds to its biodiversity, as it is the northern limit for southern varieties of plant life, and the southern limit for northern plant species, Nantucket being warmer during winter than the mainland.[31]
The North Atlantic Current of the Gulf Stream, along with similar warm air currents, helps keep Ireland and the western coast of Great Britain a couple of degrees warmer than the east.[32] However, the difference is most dramatic in the western coastal islands of Scotland.[33] A noticeable effect of the Gulf Stream and the strong westerly winds (driven by the warm water of the Gulf Stream) on Europe occurs along the Norwegian coast.[9] Northern parts of Norway lie close to the Arctic zone, most of which is covered with ice and snow in winter. However, almost all of Norway's coast remains free of ice and snow throughout the year.[34] Weather systems warmed by the Gulf Stream drift into Northern Europe, also warming the climate behind the Scandinavian mountains.
The warm water and temperature contrast along the edge of the Gulf Stream often increase the intensity of cyclones, tropical or otherwise. Tropical cyclone generation normally requires water temperatures in excess of 26.5 °C (79.7 °F).[35] Tropical cyclone formation is common over the Gulf Stream, especially in the month of July. Storms travel westward through the Caribbean and then either move in a northward direction and curve toward the eastern coast of the United States or stay on a north-westward track and enter the Gulf of Mexico.[36] Such storms have the potential to create strong winds and extensive damage to the United States' southeast coastal areas.
Strong extratropical cyclones have been shown to deepen significantly along a shallow frontal zone, forced by the Gulf Stream itself during the cold season.[37] Subtropical cyclones also tend to generate near the Gulf Stream. Seventy-five percent of such systems documented between 1951 and 2000 formed near this warm water current, with two annual peaks of activity occurring during the months of May and October.[38] Cyclones within the ocean form under the Gulf Stream, extending as deep as 3,500 metres (11,500 ft) beneath the ocean's surface.[39]
The theoretical maximum energy dissipation from the Gulf Stream by turbines is in the range of 20–60 GW.[40] One suggestion, which could theoretically supply power comparable to several nuclear power plants, would deploy a field of underwater turbines placed 300 metres (980 ft) under the center of the core of the Gulf Stream.[41] Ocean thermal energy could also be harnessed to produce electricity using the temperature difference between cold deep water and warm surface water.[42]
What was the first television show in color?
The Big Record🚨Color television is a television transmission technology that includes information on the color of the picture, so the video image can be displayed in color on the television set. It is an improvement on the earliest television technology, monochrome or black and white television, in which the image is displayed in shades of gray (grayscale). Television broadcasting stations and networks in most parts of the world upgraded from black and white to color transmission in the 1960s and 1970s. The invention of color television standards is an important part of the history of television, and it is described in the technology of television article.
Transmission of color images using mechanical scanners had been conceived as early as the 1880s. A practical demonstration of mechanically-scanned color television was given by John Logie Baird in 1928, but the limitations of a mechanical system were apparent even then. Development of electronic scanning and display made an all-electronic system possible. Early monochrome transmission standards were developed prior to the Second World War, but civilian electronics developments were frozen during much of the war. In August 1944, Baird gave the world's first demonstration of a practical fully electronic color television display. In the United States, commercially competing color standards were developed, finally resulting in the NTSC standard for color that retained compatibility with the prior monochrome system. Although the NTSC color standard was proclaimed in 1953 and limited programming became available, it was not until the early 1970s that color television sets in North America outsold black-and-white or monochrome units. Color broadcasting in Europe was not standardized on the PAL format until the 1960s.
Around 2006 countries began to switch from analog color television technology to digital television. This changeover is now complete in many developed countries, but analog television is still the standard in many developing countries.
The human eye's detection system in the retina consists primarily of two types of light detectors, rod cells that capture light, dark, and shapes/figures, and the cone cells that detect color. A typical retina contains 120 million rods and 4.5 million to 6 million cones, which are divided among three groups that are sensitive to red, green, and blue light. This means that the eye has far more resolution in brightness, or "luminance", than in color. However, post-processing in the optic nerve and other portions of the human visual system combine the information from the rods and cones to re-create what appears to be a high-resolution color image.
The eye has limited bandwidth to the rest of the visual system, estimated at just under 8 Mbit/s.[1] This manifests itself in a number of ways, but the most important in terms of producing moving images is the way that a series of still images displayed in quick succession will appear to be continuous smooth motion. This illusion starts to work at about 16 frame/s, and common motion pictures use 24 frame/s. Television, using power from the electrical grid, tunes its rate in order to avoid interference with the alternating current being supplied: in North America, some Central and South American countries, Taiwan, Korea, part of Japan, the Philippines, and a few other countries, this is 60 video fields per second to match the 60 Hz power, while in most other countries it is 50 fields per second to match the 50 Hz power.
In its most basic form, a color broadcast can be created by broadcasting three monochrome images, one each in the three colors of red, green, and blue (RGB). When displayed together or in rapid succession, these images will blend together to produce a full-color image as seen by the viewer. One of the great technical challenges of introducing color broadcast television was the desire to conserve bandwidth, potentially three times that of the existing black-and-white standards, and not use an excessive amount of radio spectrum. In the United States, after considerable research, the National Television Systems Committee[2] approved an all-electronic system developed by RCA which encoded the color information separately from the brightness information and greatly reduced the resolution of the color information in order to conserve bandwidth. The brightness image remained compatible with existing black-and-white television sets at slightly reduced resolution, while color televisions could decode the extra information in the signal and produce a limited-resolution color display. The higher resolution black-and-white and lower resolution color images combine in the eye to produce a seemingly high-resolution color image. The NTSC standard represented a major technical achievement.
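As a rough illustration of this bandwidth-saving idea (a sketch only, not the actual NTSC encoder), the snippet below derives a full-resolution luminance plane from an RGB image and keeps the color-difference information at a quarter of the resolution. The 0.299/0.587/0.114 luma weights are the familiar NTSC-era values; the 2×2 subsampling factor is an arbitrary choice for the example.

    def encode(rgb_image):
        """rgb_image: a list of rows, each row a list of (r, g, b) tuples in 0..1."""
        # Full-resolution luminance plane (the part a black-and-white set would use).
        luma = [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
                for row in rgb_image]
        # Color-difference values kept only for every second pixel on every second
        # line, mimicking the greatly reduced resolution of the color information.
        chroma = [[(b - y, r - y) for (r, g, b), y in zip(row[::2], luma_row[::2])]
                  for row, luma_row in zip(rgb_image[::2], luma[::2])]
        return luma, chroma

    luma, chroma = encode([[(0.2, 0.5, 0.8), (0.1, 0.4, 0.9)],
                           [(0.3, 0.3, 0.3), (1.0, 0.0, 0.0)]])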
Experiments in television systems using radio broadcasts date to the 19th century, but it was not until the 20th century that advances in electronics and light detectors made development practical. A key problem was the need to convert a 2D image into a "1D" radio signal; some form of image scanning was needed to make this work. Early systems generally used a device known as a "Nipkow disk", which was a spinning disk with a series of holes punched in it that caused a spot to scan across and down the image. A single photodetector behind the disk captured the image brightness at any given spot, which was converted into a radio signal and broadcast. A similar disk was used at the receiver side, with a light source behind the disk instead of a detector.
A number of such systems were being used experimentally in the 1920s. The best-known was John Logie Baird's, which was actually used for regular public broadcasting in Britain for several years. Indeed, Baird's system was demonstrated to members of the Royal Society in London in 1926 in what is generally recognized as the first demonstration of a true, working television system. In spite of these early successes, all mechanical television systems shared a number of serious problems. Being mechanically driven, perfect synchronization of the sending and receiving discs was not easy to ensure, and irregularities could result in major image distortion. Another problem was that the image was scanned within a small, roughly rectangular area of the disk's surface, so that larger, higher-resolution displays required increasingly unwieldy disks and smaller holes that produced increasingly dim images. Rotating drums bearing small mirrors set at progressively greater angles proved more practical than Nipkow discs for high-resolution mechanical scanning, allowing images of 240 lines and more to be produced, but such delicate, high-precision optical components were not commercially practical for home receivers.[citation needed]
It was clear to a number of developers that a completely electronic scanning system would be superior, and that the scanning could be achieved in a vacuum tube via electrostatic or magnetic means. Converting this concept into a usable system took years of development and several independent advances. The two key advances were Philo Farnsworth's electronic scanning system, and Vladimir Zworykin's Iconoscope camera. The Iconoscope, based on Kálmán Tihanyi's early patents, superseded the Farnsworth system. With these systems, the BBC began regularly scheduled black-and-white television broadcasts in 1936, but these were shut down again with the start of World War II in 1939. By this time, thousands of television sets had been sold. The receivers developed for this program, notably those from Pye Ltd., played a key role in the development of radar.
By 22 March 1935, 108-line black-and-white television programs were being broadcast from the Paul Nipkow TV transmitter in Berlin. In 1936, under the guidance of "Minister of Public Enlightenment and Propaganda" Joseph Goebbels, direct transmissions from fifteen mobile units at the Olympic Games in Berlin were transmitted to selected small television houses (Fernsehstuben) in Berlin and Hamburg.
In 1941 the first NTSC meetings produced a single standard for US broadcasts. US television broadcasts began in earnest in the immediate post-war era, and by 1950 there were 6 million televisions in the United States.[3]
The basic idea of using three monochrome images to produce a color image had been experimented with almost as soon as black-and-white televisions had first been built.
Among the earliest published proposals for television was one by Maurice Le Blanc in 1880 for a color system, including the first mentions in television literature of line and frame scanning, although he gave no practical details.[4] Polish inventor Jan Szczepanik patented a color television system in 1897, using a selenium photoelectric cell at the transmitter and an electromagnet controlling an oscillating mirror and a moving prism at the receiver. But his system contained no means of analyzing the spectrum of colors at the transmitting end, and could not have worked as he described it.[5] An Armenian inventor, Hovannes Adamian, also experimented with color television as early as 1907. He is claimed to have devised the first color television project,[6] which was patented in Germany on March 31, 1908 (patent 197183), then in Britain on April 1, 1908 (patent 7219),[7] in France (patent 390326) and in Russia in 1910 (patent 17912).[8]
Scottish inventor John Logie Baird demonstrated the world's first color transmission on July 3, 1928, using scanning discs at the transmitting and receiving ends with three spirals of apertures, each spiral with filters of a different primary color; and three light sources, controlled by the signal, at the receiving end, with a commutator to alternate their illumination.[9] The demonstration was of a young girl, Noele Gordon, wearing different colored hats; she went on to become a successful TV actress, famous for the soap opera Crossroads. Baird also made the world's first color broadcast on February 4, 1938, sending a mechanically scanned 120-line image from Baird's Crystal Palace studios to a projection screen at London's Dominion Theatre.[10]
Mechanically scanned color television was also demonstrated by Bell Laboratories in June 1929 using three complete systems of photoelectric cells, amplifiers, glow-tubes, and color filters, with a series of mirrors to superimpose the red, green, and blue images into one full color image.
As was the case with black-and-white television, an electronic means of scanning would be superior to the mechanical systems like Baird's. The obvious solution on the broadcast end would be to use three conventional Iconoscopes with colored filters in front of them to produce an RGB signal. Using three separate tubes each looking at the same scene would produce slight differences in parallax between the frames, so in practice a single lens was used with a mirror or prism system to separate the colors for the separate tubes. Each tube captured a complete frame and the signal was converted into radio in a fashion essentially identical to the existing black-and-white systems.
The problem with this approach was there was no simple way to recombine them on the receiver end. If each image was sent at the same time on different frequencies, the images would have to be "stacked" somehow on the display, in real time. The simplest way to do this would be to reverse the system used in the camera; arrange three separate black-and-white displays behind colored filters and then optically combine their images using mirrors or prisms onto a suitable screen, like frosted glass. RCA built just such a system in order to present the first electronically scanned color television demonstration on February 5, 1940, privately shown to members of the US Federal Communications Commission at the RCA plant in Camden, New Jersey.[11] This system, however, suffered from the twin problems of costing at least three times as much as a conventional black-and-white set, as well as having very dim pictures, the result of the fairly low illumination given off by tubes of the era. Projection systems of this sort would become common decades later, however, with improvements in technology.
Another solution would be to use a single screen, but break it up into a pattern of closely spaced colored phosphors instead of an even coating of white. Three receivers would be used, each sending its output to a separate electron gun, aimed at its colored phosphor. Although obvious, this solution was not practical. The electron guns used in monochrome televisions had limited resolution, and if one wanted to retain the resolution of existing monochrome displays, the guns would have to focus on individual dots three times smaller. This was beyond the state of the art at the time.
Instead, a number of hybrid solutions were developed that combined a conventional monochrome display with a colored disk or mirror. In these systems the three colored images were sent one after another, either as complete frames in the "field-sequential color system", or for each line in the "line-sequential" system. In both cases a colored filter was rotated in front of the display in sync with the broadcast. Since three separate images were being sent in sequence, if they used existing monochrome radio signaling standards they would have an effective refresh rate of only 20 fields, or 10 frames, a second, well into the region where flicker would become visible. In order to avoid this, these systems increased the frame rate considerably, making the signal incompatible with existing monochrome standards.
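The flicker figure quoted above follows from a line of arithmetic, taking the 60-fields-per-second U.S. monochrome rate mentioned earlier as the starting point:

    # A field-sequential system reusing the U.S. monochrome signal shares
    # 60 fields per second across the three colors.
    mono_fields_per_second = 60
    colors = 3
    color_fields_per_second = mono_fields_per_second / colors   # 20 fields per second
    color_frames_per_second = color_fields_per_second / 2       # 10 frames per second (two interlaced fields per frame)
    print(color_fields_per_second, color_frames_per_second)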
The first practical example of this sort of system was again pioneered by John Logie Baird. In 1940 he publicly demonstrated a color television combining a traditional black-and-white display with a rotating colored disk. This device was very "deep", but was later improved with a mirror folding the light path into an entirely practical device resembling a large conventional console.[12] However, Baird was not happy with the design, and as early as 1944 had commented to a British government committee that a fully electronic device would be better.
In 1939, Hungarian engineer Peter Carl Goldmark introduced an electro-mechanical system while at CBS, which contained an Iconoscope sensor. The CBS field-sequential color system was partly mechanical, with a disc made of red, blue, and green filters spinning inside the television camera at 1,200 rpm, and a similar disc spinning in synchronization in front of the cathode ray tube inside the receiver set.[13] The system was first demonstrated to the Federal Communications Commission (FCC) on August 29, 1940, and shown to the press on September 4.[14][15][16][17]
CBS began experimental color field tests using film as early as August 28, 1940, and live cameras by November 12.[18] NBC (owned by RCA) made its first field test of color television on February 20, 1941. CBS began daily color field tests on June 1, 1941.[19] These color systems were not compatible with existing black-and-white television sets, and as no color television sets were available to the public at this time, viewing of the color field tests was restricted to RCA and CBS engineers and the invited press. The War Production Board halted the manufacture of television and radio equipment for civilian use from April 22, 1942 to August 20, 1945, limiting any opportunity to introduce color television to the general public.[20][21]
As early as 1940, Baird had started work on a fully electronic system he called the "Telechrome". Early Telechrome devices used two electron guns aimed at either side of a phosphor plate. The phosphor was patterned so the electrons from the guns only fell on one side of the patterning or the other. Using cyan and magenta phosphors, a reasonable limited-color image could be obtained. He also demonstrated the same system using monochrome signals to produce a 3D image (called "stereoscopic" at the time). Baird's demonstration on August 16, 1944 was the first example of a practical color television system.[22] Work on the Telechrome continued and plans were made to introduce a three-gun version for full color. However, Baird's untimely death in 1946 ended development of the Telechrome system.[23][24]
Similar concepts were common through the 1940s and 50s, differing primarily in the way they re-combined the colors generated by the three guns. The Geer tube was similar to Baird's concept, but used small pyramids with the phosphors deposited on their outside faces, instead of Baird's 3D patterning on a flat surface. The Penetron used three layers of phosphor on top of each other and increased the power of the beam to reach the upper layers when drawing those colors. The Chromatron used a set of focusing wires to select the colored phosphors arranged in vertical stripes on the tube.
In the immediate post-war era the Federal Communications Commission (FCC) was inundated with requests to set up new television stations. Worrying about congestion of the limited number of channels available, the FCC put a moratorium on all new licenses in 1948 while considering the problem. A solution was immediately forthcoming; rapid development of radio receiver electronics during the war had opened a wide band of higher frequencies to practical use, and the FCC set aside a large section of these new UHF bands for television broadcast. At the time, black and white television broadcasting was still in its infancy in the U.S., and the FCC started to look at ways of using this newly available bandwidth for color broadcasts. Since no existing television would be able to tune in these stations, they were free to pick an incompatible system and allow the older VHF channels to die off over time.
The FCC called for technical demonstrations of color systems in 1948, and the Joint Technical Advisory Committee (JTAC) was formed to study them. CBS displayed improved versions of its original design, now using a single 6 MHz channel (like the existing black-and-white signals) at 144 fields per second and 405 lines of resolution. Color Television Inc. demonstrated its line-sequential system, while Philco demonstrated a dot-sequential system based on its beam-index tube-based "Apple" tube technology. Of the entrants, the CBS system was by far the best-developed, and won head-to-head testing every time.
While the meetings were taking place it was widely known within the industry that RCA was working on a dot-sequential system that was compatible with existing black-and-white broadcasts, but RCA declined to demonstrate it during the first series of meetings. Just before the JTAC presented its findings, on August 25, 1949, RCA broke its silence and introduced its system as well. The JTAC still recommended the CBS system, and after the resolution of an ensuing RCA lawsuit, color broadcasts using the CBS system started on June 25, 1951. By this point the market had changed dramatically; when color was first being considered in 1948 there were fewer than a million television sets in the U.S., but by 1951 there were well over 10 million. The idea that the VHF band could be allowed to "die" was no longer practical.
During its campaign for FCC approval, CBS gave the first demonstrations of color television to the general public, showing an hour of color programs daily Mondays through Saturdays, beginning January 12, 1950, and running for the remainder of the month, over WOIC in Washington, D.C., where the programs could be viewed on eight 16-inch color receivers in a public building.[25] Due to high public demand, the broadcasts were resumed February 13–21, with several evening programs added.[26] CBS initiated a limited schedule of color broadcasts from its New York station WCBS-TV Mondays to Saturdays beginning November 14, 1950, making ten color receivers available for the viewing public.[27][28] All were broadcast using the single color camera that CBS owned.[29] The New York broadcasts were extended by coaxial cable to Philadelphia's WCAU-TV beginning December 13,[30] and to Chicago on January 10,[31][32] making them the first network color broadcasts.
After a series of hearings beginning in September 1949, the FCC found the RCA and CTI systems fraught with technical problems, inaccurate color reproduction, and expensive equipment, and so formally approved the CBS system as the U.S. color broadcasting standard on October 11, 1950. An unsuccessful lawsuit by RCA delayed the first commercial network broadcast in color until June 25, 1951, when a musical variety special titled simply Premiere was shown over a network of five East Coast CBS affiliates.[33] Viewing was again restricted: the program could not be seen on black-and-white sets, and Variety estimated that only thirty prototype color receivers were available in the New York area.[34] Regular color broadcasts began that same week with the daytime series The World Is Yours and Modern Homemakers.
While the CBS color broadcasting schedule gradually expanded to twelve hours per week (but never into prime time),[35] and the color network expanded to eleven affiliates as far west as Chicago,[36] its commercial success was doomed by the lack of color receivers necessary to watch the programs, the refusal of television manufacturers to create adapter mechanisms for their existing black-and-white sets,[37] and the unwillingness of advertisers to sponsor broadcasts seen by almost no one. CBS had bought a television manufacturer in April,[38] and in September 1951, production began on the only CBS-Columbia color television model, with the first color sets reaching retail stores on September 28.[39][40] But it was too little, too late. Only 200 sets had been shipped, and only 100 sold, when CBS discontinued its color television system on October 20, 1951, ostensibly by request of the National Production Authority for the duration of the Korean War, and bought back all the CBS color sets it could to prevent lawsuits by disappointed customers.[41][42] RCA chairman David Sarnoff later charged that the NPA's order had come "out of a situation artificially created by one company to solve its own perplexing problems" because CBS had been unsuccessful in its color venture.[43]
While the FCC was holding its JTAC meetings, development was taking place on a number of systems allowing true simultaneous color broadcasts, "dot-sequential color systems". Unlike the hybrid systems, dot-sequential televisions used a signal very similar to existing black-and-white broadcasts, with the intensity of every dot on the screen being sent in succession.
In 1938 Georges Valensi demonstrated an encoding scheme that would allow color broadcasts to be encoded so they could be picked up on existing black-and-white sets as well. In his system the outputs of the three camera tubes were re-combined to produce a single "luminance" value that was very similar to a monochrome signal and could be broadcast on the existing VHF frequencies. The color information was encoded in a separate "chrominance" signal, consisting of two separate signals: the original blue signal minus the luminance (B'−Y'), and the red signal minus the luminance (R'−Y'). These signals could then be broadcast separately on a different frequency; a monochrome set would tune in only the luminance signal on the VHF band, while color televisions would tune in both the luminance and chrominance on two different frequencies, and apply the reverse transforms to retrieve the original RGB signal. The downside to this approach is that it required a major boost in bandwidth use, something the FCC was interested in avoiding.
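A minimal sketch of this luminance/chrominance idea follows: a monochrome receiver uses Y' alone, while a color receiver recovers R', G', B' from Y' and the two color-difference signals. The luma weights are the later NTSC-era values, used here only as an illustrative assumption; gamma and clamping details of a real encoder are omitted.

    def to_luma_chroma(r, g, b):
        y = 0.299 * r + 0.587 * g + 0.114 * b    # luminance; usable on its own by a monochrome set
        return y, b - y, r - y                   # Y', B'-Y', R'-Y'

    def to_rgb(y, b_minus_y, r_minus_y):
        r = y + r_minus_y
        b = y + b_minus_y
        g = (y - 0.299 * r - 0.114 * b) / 0.587  # the reverse transform recovers green
        return r, g, b

    print(to_rgb(*to_luma_chroma(0.2, 0.5, 0.8)))   # ~(0.2, 0.5, 0.8)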
RCA used Valensi's concept as the basis of all of its developments, believing it to be the only proper solution to the broadcast problem. However, RCA's early sets using mirrors and other projection systems all suffered from image and color quality problems, and were easily bested by CBS's hybrid system. But solutions to these problems were in the pipeline, and RCA in particular was investing massive sums (later estimated at $100 million) to develop a usable dot-sequential tube. RCA was beaten to the punch by the Geer tube, which used three B&W tubes aimed at different faces of colored pyramids to produce a color image. All-electronic systems included the Chromatron, Penetron and beam-index tube that were being developed by various companies. While investigating all of these, RCA's teams quickly started focusing on the shadow mask system.
In July 1938 the shadow mask color television was patented by Werner Flechsig (1900–1981) in Germany, and was demonstrated at the International radio exhibition Berlin in 1939. Most CRT color televisions used today are based on this technology. His solution to the problem of focusing the electron guns on the tiny colored dots was one of brute force; a metal sheet with holes punched in it allowed the beams to reach the screen only when they were properly aligned over the dots. Three separate guns were aimed at the holes from slightly different angles, and when their beams passed through the holes the angles caused them to separate again and hit the individual spots a short distance away on the back of the screen. The downside to this approach was that the mask cut off the vast majority of the beam energy, allowing it to hit the screen only 15% of the time, requiring a massive increase in beam power to produce acceptable image brightness.
In spite of these problems in both the broadcast and display systems, RCA pressed ahead with development and was ready for a second assault on the standards by 1950.
The possibility of a compatible color broadcast system was so compelling that the NTSC decided to re-form, and held a second series of meetings starting in January 1950. Having only recently selected the CBS system, the FCC heavily opposed the NTSC's efforts. One of the FCC Commissioners, R. F. Jones, went so far as to assert that the engineers testifying in favor of a compatible system were "in a conspiracy against the public interest".
Unlike the FCC approach where a standard was simply selected from the existing candidates, the NTSC would produce a board that was considerably more pro-active in development.
Starting before CBS color even got on the air, the U.S. television industry, represented by the National Television System Committee, worked in 1950–1953 to develop a color system that was compatible with existing black-and-white sets and would pass FCC quality standards, with RCA developing the hardware elements. ("Compatible color," a phrase from advertisements for early sets, appears in the song "America" of West Side Story, 1957.) RCA first made publicly announced field tests of the dot sequential color system over its New York station WNBT in July 1951.[44] When CBS testified before Congress in March 1953 that it had no further plans for its own color system,[45] the National Production Authority dropped its ban on the manufacture of color television receivers,[46] and the path was open for the NTSC to submit its petition for FCC approval in July 1953, which was granted on December 17.[47] The first publicly announced network demonstration of a program using the NTSC "compatible color" system was an episode of NBC's Kukla, Fran and Ollie on August 30, 1953, although it was viewable in color only at the network's headquarters.[48] The first network broadcast to go out over the air in NTSC color was a performance of the opera Carmen on October 31, 1953.[49][50]
Color broadcasts from the United States were available to Canadian population centers near the border since the mid-1950s.[51] At the time that NTSC color broadcasting was officially introduced into Canada in 1966, less than one percent of Canadian households had a color television set.[51] Color television in Canada was launched on the Canadian Broadcasting Corporation's (CBC) English language TV service on September 1, 1966.[51] Private television broadcaster CTV also started colorcasts in early September 1966.[52] Full-time color transmissions started in 1974 on the CBC, with other private sector broadcasters doing so by the end of the 1970s.[51]
Cuba in 1958 became the second country in the world to introduce color television broadcasting, with Havana's Channel 12 using standards established by the NTSC Committee of United States Federal Communications Commission in 1940, and American technology patented by the American electronics company RCA, or Radio Corporation of America. But the color transmissions ended when broadcasting stations were seized in the Cuban Revolution in 1959, and did not return until 1975, using equipment acquired from Japan's NEC Corporation, and SECAM equipment from the Soviet Union, adapted for the American NTSC standard.[53]
Guillermo González Camarena independently invented and developed a field-sequential tricolor disk system in Mexico in the late 1930s, for which he requested a patent in Mexico on August 19, 1940, and in the USA in 1941.[54] González Camarena produced his color television system in his laboratory Gon-Cam for the Mexican market and exported it to Columbia College of Chicago, which regarded it as the best system in the world.[55][56] Goldmark actually applied for a patent on the same field-sequential tricolor system in the USA on September 7, 1940,[13] while González Camarena had made his application in Mexico nineteen days earlier, on August 19, 1940.
On August 31, 1946, González Camarena sent his first color transmission from his lab in the offices of The Mexican League of Radio Experiments at Lucerna St. No. 1, in Mexico City. The video signal was transmitted at a frequency of 115 MHz and the audio in the 40-metre band. He obtained authorization to make the first publicly announced color broadcast in Mexico, on February 8, 1963, of the program Paraíso Infantil on Mexico City's XHGC-TV, using the NTSC system, which had by then been adopted as the standard for color programming.
González Camarena also invented the Simplified Mexican Color TV system as a much simpler and cheaper alternative to the NTSC system.[57] Due to its simplicity, NASA used a modified version of the Simplified Mexican Color TV system in its 1979 Voyager mission to take pictures and video of Jupiter.[58]
Although all-electronic color was introduced in the U.S. in 1953,[59] high prices and the scarcity of color programming greatly slowed its acceptance in the marketplace. The first national color broadcast (the 1954 Tournament of Roses Parade) occurred on January 1, 1954, but over the next dozen years most network broadcasts, and nearly all local programming, continued to be in black-and-white. In 1956 NBC's The Perry Como Show became the first live network television series to present a majority of episodes in color. CBS's The Big Record, starring pop vocalist Patti Page, was the first television show broadcast in color for the entire 1957–1958 season; its production costs were greater than those of most movies of the time, not only because of all the stars featured on the hour-long extravaganza but also because of the extremely high-intensity lighting and electronics required for the new RCA TK-41 cameras. It was not until the mid-1960s that color sets started selling in large numbers, due in part to the color transition of 1965, in which it was announced that over half of all network prime-time programming would be broadcast in color that autumn. The first all-color prime-time season came just one year later.
NBC made the first coast-to-coast color broadcast when it telecast the Tournament of Roses Parade on January 1, 1954, with public demonstrations given across the United States on prototype color receivers by manufacturers RCA, General Electric, Philco, Raytheon, Hallicrafters, Hoffman, Pacific Mercury, and others.[60] Westinghouse's color model, the H840CK15 ($1,295, equivalent to $11,549 in 2016), became available in the New York area on February 28, 1954, and is generally agreed to be the first production receiver using NTSC color offered to the public;[61] a less expensive color model from RCA (CT-100) reached dealers in April 1954.[62] Television's first prime time network color series was The Marriage, a situation comedy broadcast live by NBC in the summer of 1954.[63] NBC's anthology series Ford Theatre became the first network color filmed series that October.[64]
Early color telecasts could be preserved only on the black-and-white kinescope process introduced in 1947. It was not until September 1956 that NBC began using color film to time-delay and preserve some of its live color telecasts.[65] Ampex introduced a color videotape recorder in 1958, which NBC used to tape An Evening With Fred Astaire, the oldest surviving network color videotape. This system was also used to unveil a demonstration of color television for the press. On May 22, 1958, President Dwight D. Eisenhower visited the WRC-TV NBC studios in Washington, D.C. and gave a speech touting the new technology's merits. His speech was recorded in color, and a copy of this videotape was given to the Library of Congress for posterity.
Several syndicated shows had episodes filmed in color during the 1950s, including The Cisco Kid, The Lone Ranger, My Friend Flicka, and Adventures of Superman. The first two were carried by some stations equipped for color telecasts well before NBC began its regular weekly color dramas in 1959, beginning with the Western series Bonanza.
NBC was at the forefront of color programming because its parent company RCA manufactured the most successful line of color sets in the 1950s, and by 1959 RCA was the only remaining major manufacturer of color sets.[66] CBS and ABC, which were not affiliated with set manufacturers and were not eager to promote their competitor's product, dragged their feet into color.[67][68] CBS broadcast color specials and sometimes aired its big weekly variety shows in color, but it offered no regularly scheduled color programming until the fall of 1965. At least one CBS show, The Lucy Show, was filmed in color beginning in 1963 but continued to be telecast in black and white through the end of the 1964–65 season. ABC delayed its first color programs until 1962, but these were initially only broadcasts of the cartoon shows The Flintstones, The Jetsons and Beany and Cecil.[69] The DuMont network, although it did have a television-manufacturing parent company, was in financial decline by 1954 and was dissolved two years later.[70]
The relatively small amount of network color programming, combined with the high cost of color television sets, meant that as late as 1964 only 3.1 percent of television households in the U.S. had a color set. But by the mid-1960s, the subject of color programming turned into a ratings war. A 1965 ARB study that identified an emerging trend in color television set sales convinced NBC that a full shift to color would gain a ratings advantage over its two competitors.[71] As a result, NBC provided the catalyst for rapid color expansion by announcing that its prime time schedule for fall 1965 would be almost entirely in color.[72] ABC and CBS followed suit and over half of their combined prime-time programming also was in color that season, but they were still reluctant to telecast all their programming in color due to production costs.[71] All three broadcast networks were airing full color prime time schedules by the 1966–67 broadcast season, and ABC aired its last new black-and-white daytime programming in December 1967.[73] Public broadcasting networks like NET, however, did not use color for a majority of their programming until 1968. The number of color television sets sold in the U.S. did not exceed black-and-white sales until 1972, which was also the first year that more than fifty percent of television households in the U.S. had a color set.[74] This was also the year that "in color" notices before color television programs ended,[citation needed] as the rise in color television set sales had made color programming the norm.
In a display of foresight, Disney had filmed many of its earlier shows in color so they were able to be repeated on NBC, and since most of Disney's feature-length films were also made in color, they could now also be telecast in that format. To emphasize the new feature, the series was re-dubbed Walt Disney's Wonderful World of Color, which premiered in September 1961, and retained that moniker until 1969.[75]
By the mid-1970s the only stations broadcasting in black-and-white were a few high-numbered UHF stations in small markets, and a handful of low-power repeater stations in even smaller markets such as vacation spots. By 1979, even the last of these had converted to color and by the early 1980s, B&W sets had been pushed into niche markets, notably low-power uses, small portable sets, or use as video monitor screens in lower-cost consumer equipment. By the late 1980s, even those areas switched to color sets.
Color broadcasting in Hawaii started in September 1965, and in Alaska a year later.[citation needed] One of the last television stations in North America to convert to color, WQEX (now WINP-TV) in Pittsburgh, started broadcasting in color on October 16, 1986 after its black-and-white transmitter, which dated from the 1950s, broke down in February 1985 and the parts required to fix it were no longer available. The then-owner of WQEX, PBS member station WQED, diverted some of its pledge money into getting a color transmitter for WQEX.
Early color sets were either floor-standing console models or tabletop versions nearly as bulky and heavy, so in practice, they remained firmly anchored in one place. The introduction of GE's relatively compact and lightweight Porta-Color set in the spring of 1966 made watching color television a more flexible and convenient proposition. In 1972, sales of color sets finally surpassed sales of black-and-white sets. Also in 1972, the last holdout among daytime network programs converted to color, resulting in the first completely all-color network season.
The first regular color broadcasts in Europe were by the United Kingdom's BBC2 beginning on July 1, 1967 (PAL). West Germany's first broadcast occurred in August (PAL), followed by the Netherlands in September (PAL), and by France in October (SECAM). Norway, Sweden, Finland, Austria, East Germany, Czechoslovakia, and Hungary all started regular color broadcasts before the end of 1969. Ireland's national TV station RTÉ began using color in 1968 for recorded programmes; the first outside broadcast made in color for RTÉ Television was when Ireland hosted the Eurovision Song Contest in Dublin in 1971.[76] The PAL system spread through most of Western Europe.
More European countries introduced color television using the PAL system in the 1970s and early 1980s; examples include Denmark (1970), Belgium (1971), Yugoslavia/Serbia (1971), Spain (1972, but not fully implemented until 1978), Iceland (1973), Portugal (1976, but not fully implemented until 1980), Albania (1981), Turkey (1981), and Romania (1983, but not fully implemented until 1990). In Italy there were debates to adopt a national color television system, the ISA, developed by Indesit, but that idea was scrapped. As a result, Italy was one of the last European countries to officially adopt the PAL system, in 1977.[77]
France, Luxembourg, and most of the Eastern Bloc along with their overseas territories opted for SECAM. SECAM was a popular choice in countries with a lot of hilly terrain, and technologically backward countries with a very large installed base of monochrome equipment, since the greater ruggedness of the SECAM signal could cope much better with poorly maintained equipment. However, for many countries the decision was more down to politics than technical merit.
A drawback of SECAM for production is that, unlike PAL or NTSC, certain post-production operations of encoded SECAM signals are not really possible without a significant drop in quality. As an example, a simple fade to black is trivial in NTSC and PAL: you just reduce the signal level until it is zero. However, in SECAM the color difference signals, which are frequency modulated, need first to be decoded to e.g. RGB, then the fade-to-black is applied, and finally the resulting signal is re-encoded into SECAM. Because of this, much SECAM video editing was actually done using PAL equipment, then the resultant signal was converted to SECAM. Another drawback of SECAM is that comb filtering, allowing better color separation, is not possible in TV receivers. This was not, however, much of a drawback in the early days of SECAM as such filters did not become readily available in high-end TV sets before the 1990s.
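A rough sketch of that editing difference, assuming hypothetical decode_secam() and encode_secam() helpers standing in for a real codec; the point is only the extra decode/re-encode round trip the SECAM path requires:

    def fade_to_black_pal_or_ntsc(signal, gain):
        # Amplitude-modulated chroma fades along with the rest of the composite
        # signal, so scaling the signal level is enough.
        return [sample * gain for sample in signal]

    def fade_to_black_secam(signal, gain, decode_secam, encode_secam):
        # Frequency-modulated chroma does not scale with amplitude, so the signal
        # must be decoded (e.g. to RGB), faded, and then re-encoded.
        r, g, b = decode_secam(signal)
        faded = ([x * gain for x in r],
                 [x * gain for x in g],
                 [x * gain for x in b])
        return encode_secam(*faded)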
The first regular color broadcasts in SECAM were started on October 1, 1967, on France's Second Channel (ORTF 2e chaîne). In France and the UK, color broadcasts were made on UHF frequencies, the VHF band being used for legacy black and white (405 lines in the UK or 819 lines in France) until the beginning of the 1980s. Countries elsewhere that were already broadcasting 625-line monochrome on VHF and UHF simply transmitted color programs on the same channels.
Some British television programs, particularly those made by or for ITC Entertainment, were shot on color film before the introduction of color television to the UK, for the purpose of sales to U.S. networks. The first British show to be made in color was the drama series The Adventures of Sir Lancelot (1956–57), which was initially made in black and white but later shot in color for sale to the NBC network in the United States. Other British color television programs include Stingray (1964–1965), which was the first British TV show to be filmed entirely in color, Thunderbirds (1965–1966) and Captain Scarlet and the Mysterons (1967–1968). However, most UK series predominantly made using videotape, such as Doctor Who (1963–89; 2005–present), did not begin color production until later, with the first color Doctor Who episodes not airing until 1970.
In Japan, NHK and NTV introduced color television, using a variation of the NTSC system (called NTSC-J) on September 10, 1960, making it the first country in Asia to introduce color television. The Philippines (1966) and Taiwan (1969) also adopted the NTSC system.
Other countries in the region instead used the PAL system, starting with Australia (1967, but not fully implemented until 1975), and then Thailand (1969; this country converted from a 525-line system to 625 lines), Hong Kong (1970), the People's Republic of China (1971), New Zealand (1973), North Korea (1974), Singapore (1974), Pakistan (1976, but not fully implemented until 1982), Kazakhstan (1978), Vietnam (1978), Malaysia (1978, but not fully implemented until 1980), Indonesia (1979), India (1979, but not fully implemented until 1982), and Bangladesh (1980). South Korea did not introduce color television (using NTSC) until 1980 (full-time color transmissions began in 1981), although it was already manufacturing color television sets for export. Cambodia was the last country in Asia to introduce color television, officially introduced in 1981 using the PAL system, with full-time color transmissions since 1985.
Nearly all of the countries in the Middle East use PAL. The first country in the Middle East to introduce color television was Iraq in 1967. Saudi Arabia, the United Arab Emirates, Kuwait, Bahrain, and Qatar followed in the mid-1970s, but Israel, Lebanon, and Cyprus continued to broadcast in black and white until the early 1980s. Israeli television even erased the color signals using a device called the mekhikon.
The first color television service in Africa was introduced on the Tanzanian island of Zanzibar, in 1973, using PAL.[78] In 1973 also, MBC of Mauritius broadcast the OCAMM Conference, in color, using SECAM. At the time, South Africa did not have a television service at all, owing to opposition from the apartheid regime, but in 1976, one was finally launched.[79] Nigeria adopted PAL for color transmissions in 1974 in the then Benue Plateau state in the north central region of the country, but countries such as Ghana and Zimbabwe continued with black and white until 1984.[80] The Sierra Leone Broadcasting Service (SLBS) started television broadcasting in 1963 as a cooperation between the SLBS and commercial interests; coverage was extended to all districts in 1978 when the service was also upgraded to color.[81]
In contrast to most other countries in the Americas, which had adopted NTSC, Brazil began broadcasting in color using PAL-M. Its first color transmission was on February 19, 1972. However, Ecuador was the first South American country to use NTSC color; its first color transmission was on November 5, 1974. In 1978, Argentina started broadcasting in color using PAL-N in connection with the country's hosting of the FIFA World Cup. Some countries in South America, including Bolivia, Chile, Paraguay, Peru, and Uruguay, continued to broadcast in black and white until the early 1980s.
Cor Dillen, director and later CEO of the South American branch of Philips, was responsible for bringing color television to South America.[citation needed]
There are three main analog broadcast television systems in use around the world: PAL (Phase Alternating Line), NTSC (National Television System Committee), and SECAM (Séquentiel Couleur à Mémoire, "Sequential Color with Memory").
The system used in the Americas and much of the Far East is NTSC. Western Europe, Australia, Africa, and eastern South America use PAL (though Brazil uses a hybrid PAL-M system). Eastern Europe and France use SECAM.[82] Generally, a device (such as a television) can only read or display video encoded to a standard which the device is designed to support; otherwise, the source must be converted (such as when European programs are broadcast in North America or vice versa).
Digital television broadcasting standards, such as ATSC, DVB-T, DVB-T2, and ISDB, have superseded these analog transmission standards in many countries.
How much sugar in a packet of sugar?
2 to 4 grams🚨A sugar packet is a delivery method for one 'serving' of sugar. Sugar packets are commonly supplied in restaurants, coffeehouses and tea houses in preference to sugar bowls or sugar dispensers for reasons of neatness, sanitation, spill control, and to some extent portion control.
A typical sugar packet in the United States contains 2 to 4 grams of sugar. Some sugar packets in countries such as Poland contain 5 to 10 grams of sugar. Sugar packet sizes, shapes, and weights differ throughout different areas of the world.
The sugar cube was used in restaurants until it began to be replaced directly after World War II. At this time, machines were made that could produce small packets of sugar for nearly half the cost.
The sugar packet was invented by Benjamin Eisenstadt,[1] the founder of Cumberland Packing, better known today as the Sweet 'N Low company. Eisenstadt had been a tea bag factory worker, and became irritated by the task of refilling and unclogging all the sugar dispensers in his Brooklyn cafeteria across from the Brooklyn Navy Yard. He did not patent the idea, and after discussions with larger sugar companies, lost market share.
Because a gram of any carbohydrate contains 4 nutritional calories (also referred to as "food calories" or kilocalories), a typical four-gram sugar packet has 16 nutritional calories.
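A quick check of that arithmetic, using the packet sizes quoted earlier in this entry:

    KCAL_PER_GRAM_OF_CARBOHYDRATE = 4
    for grams in (2, 4, 5, 10):      # typical U.S. and Polish packet sizes mentioned above
        print(grams, "g ->", grams * KCAL_PER_GRAM_OF_CARBOHYDRATE, "kcal")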
The hobby of collecting sugar packets is called sucrology. Collectors can, for example, focus on the variety of types of sugar or brand names. Sugar packets are also handy forms of advertisement for businesses.
Which actress helped create the voice of et?
Pat Welsh🚨
E.T. the Extra-Terrestrial is a 1982 American science fiction film co-produced and directed by Steven Spielberg, and written by Melissa Mathison. It features special effects by Carlo Rambaldi and Dennis Muren, and stars Henry Thomas, Dee Wallace, Peter Coyote, Robert MacNaughton, Drew Barrymore and Pat Welsh. It tells the story of Elliott (Thomas), a lonely boy who befriends an extraterrestrial, dubbed "E.T.", who is stranded on Earth. Elliott and his siblings help E.T. return to his home planet, while attempting to keep him hidden from their mother and the government.
The concept was based on an imaginary friend Spielberg created after his parents' divorce in 1960. In 1980, Spielberg met Mathison and developed a new story from the stalled sci-fi horror film project Night Skies. It was filmed from September to December 1981 in California on a budget of $10.5 million. Unlike most films, it was shot in rough chronological order, to facilitate convincing emotional performances from the young cast.
Released on June 11, 1982, by Universal Pictures, E.T. was an immediate blockbuster, surpassing Star Wars to become the highest-grossing film of all time, a record it held for eleven years until Jurassic Park, another Spielberg-directed film, surpassed it in 1993. It is the highest-grossing film of the 1980s.
Considered one of the greatest films ever made,[4][5][6] it was widely acclaimed by critics as a timeless story of friendship, and it ranks as the greatest science fiction film ever made in a Rotten Tomatoes survey. In 1994, it was selected for preservation in the United States National Film Registry as being "culturally, historically, or aesthetically significant". It was re-released in 1985, and then again in 2002, to celebrate its 20th anniversary, with altered shots and additional scenes.
A group of alien botanists visiting Earth lands its spacecraft in a California forest at night. When government agents appear on the scene, the aliens flee in their spaceship, leaving one of their own behind in their haste. At a suburban home in the San Fernando Valley, a ten-year-old boy named Elliott is spending time with his brother, Michael, and his friends. As he returns from picking up a pizza, he discovers that something is hiding in their tool shed. The alien promptly flees upon being discovered. Despite his family's disbelief, Elliott leaves Reese's Pieces candy to lure the alien to his house. Before going to sleep, Elliott realizes it is imitating his movements. He feigns illness the next morning to stay home from school and play with it. Later that day, Michael and their five-year-old sister, Gertie, meet it. They decide to keep it hidden from their mother, Mary. When they ask it about its origin, it levitates several balls to represent its planetary system and then demonstrates its powers by reviving dead chrysanthemums.
At school the next day, Elliott begins to experience a psychic connection with the alien, including exhibiting signs of intoxication (because it is at his home, drinking beer), and he begins freeing all the frogs in his biology class. As the alien watches John Wayne kiss Maureen O'Hara in The Quiet Man on television, Elliott then kisses a girl he likes in the same manner and he is sent to the principal's office.
The alien learns to speak English by repeating what Gertie says as she watches Sesame Street and, at Elliott's urging, dubs itself "E.T." E.T. reads a comic strip where Buck Rogers, stranded, calls for help by building a makeshift communication device and is inspired to try it himself. E.T. receives Elliott's help in building a device to "phone home" by using a Speak & Spell toy. Michael notices that E.T.'s health is declining and that Elliott is referring to himself as "we".
On Halloween, Michael and Elliott dress E.T. as a ghost so they can sneak him out of the house. That night, Elliott and E.T. head through the forest, where they make a successful call home. The next day, Elliott wakes up in the field, only to find E.T. gone. Elliott returns home to his distressed family. Michael searches for and finds E.T. dying next to a culvert, being investigated by a raccoon. Michael takes E.T. home to Elliott, who is also dying. Mary becomes frightened when she discovers her son's illness and the dying alien, just as government agents invade the house. Scientists set up a hospital at the house, questioning Michael, Mary and Gertie while treating Elliott and E.T. Their connection disappears and E.T. then appears to die while Elliott recovers. A grief-stricken Elliott is left alone with the motionless E.T. when he notices a dead chrysanthemum, the plant E.T. had previously revived, coming back to life. E.T. reanimates and reveals that his people are returning. Elliott and Michael steal a van that E.T. had been loaded into and a chase ensues, with Michael's friends joining them as they attempt to evade the authorities by bicycle. Suddenly facing a police roadblock, they escape as E.T. uses telekinesis to lift them into the air and toward the forest, like he had done for Elliott before.
Standing near the spaceship, E.T.'s heart glows as he prepares to return home. Mary, Gertie, and "Keys", a friendly government agent, show up. E.T. says goodbye to Michael and Gertie, as she presents him with the chrysanthemum that he had revived. Before boarding the spaceship, he tells Elliott "I'll be right here", pointing his glowing finger to Elliott's forehead. He then picks up the chrysanthemum, boards the spaceship, and it takes off, leaving a rainbow in the sky as everyone watches it leave.
After his parents' divorce in 1960, Spielberg filled the void with an imaginary alien companion. He said that the imaginary alien was "a friend who could be the brother [he] never had and a father that [he] didn't feel [he] had anymore".[7] During 1978, he announced he would shoot a film entitled Growing Up, which he would film in 28 days. The project was set aside because of delays on 1941, but the concept of making a small autobiographical film about childhood would stay with him.[8] He also thought about a follow-up to Close Encounters of the Third Kind, and began to develop a darker project he had planned with John Sayles called Night Skies in which malevolent aliens terrorize a family.[8]
Filming Raiders of the Lost Ark in Tunisia left Spielberg bored, and memories of his childhood creation resurfaced.[9] He told screenwriter Melissa Mathison about Night Skies, and developed a subplot from the failed project, in which Buddy, the only friendly alien, befriends an autistic child. His abandonment on Earth in the script's final scene inspired the E.T. concept.[9] She wrote a first draft titled E.T. and Me in eight weeks,[9] which he considered perfect.[10] The script went through two more drafts, which deleted an "Eddie Haskell"-esque friend of Elliott. The chase sequence was also created, and Spielberg suggested adding the scene in which E.T. gets drunk.[8]
In early summer 1981, while Raiders of the Lost Ark was being promoted, Columbia Pictures met with Spielberg to discuss the script, having previously developed Night Skies with him as the intended sequel to Close Encounters of the Third Kind. However, the head of Columbia Pictures' marketing and research development, Marvin Atonowsky, concluded that the film had limited commercial potential, believing it would appeal mostly to young children.[11] The president of Columbia's worldwide productions, John Veitch, also felt that the script was not good or scary enough to draw a large audience. On the advice of Atonowsky and Veitch, Columbia Pictures CEO Frank Price passed on the project, calling it "a wimpy Walt Disney movie" and putting it in turnaround, so Spielberg approached the more receptive Sid Sheinberg, president of MCA, the then-parent company of Universal Studios.[12][11] Spielberg asked Sheinberg to acquire the E.T. script from Columbia, which he did for $1 million, striking a deal with Price under which Columbia would retain 5% of the film's net profits. Veitch later recalled, "I think [in 1982] we made more on that picture than we did on any of our films."[11]
Carlo Rambaldi, who designed the aliens for Close Encounters of the Third Kind, was hired to design the animatronics of E.T. Rambaldi's own painting Women of Delta led him to give the creature a unique, extendable neck.[10] Its face was inspired by those of Carl Sandburg, Albert Einstein and Ernest Hemingway.[13] Producer Kathleen Kennedy visited the Jules Stein Eye Institute to study real and glass eyes. She hired Institute staffers to create E.T.'s eyes, which she felt were particularly important in engaging the audience.[14] Four heads were created for filming, one as the main animatronic and the others for facial expressions, as well as a costume.[13] Two dwarfs, Tamara De Treaux and Pat Bilon,[8] as well as 12-year-old Matthew DeMeritt, who was born without legs,[15] took turns wearing the costume, depending on the scene being filmed. DeMeritt walked on his hands and played all scenes in which E.T. walks awkwardly or falls over. The costume's head was placed above those of the actors, who could see through slits in its chest.[10] Caprice Roth, a professional mime, filled prosthetics to play E.T.'s hands.[14] The puppet was created in three months at a cost of $1.5 million.[16] Spielberg declared it was "something that only a mother could love".[10] Mars, Incorporated refused to allow M&M's to be used in the film, believing E.T. would frighten children. After Mars declined, The Hershey Company was asked whether Reese's Pieces could be used, and it agreed; this product placement resulted in a large increase in Reese's Pieces sales.[17] Science and technology educator Henry Feinberg created E.T.'s communicator device.[18][19]
Having worked with Cary Guffey on Close Encounters of the Third Kind, Spielberg felt confident working with a cast composed mostly of child actors.[14] For the role of Elliott, he auditioned hundreds of boys[20] before Jack Fisk suggested Henry Thomas, who had played Harry in Raggedy Man, a film Fisk had directed.[21] Thomas, who auditioned in an Indiana Jones costume, did not perform well in the formal testing, but got the filmmakers' attention in an improvised scene.[14] Thoughts of his dead dog inspired his convincing tears.[22] Robert MacNaughton auditioned eight times to play Michael, sometimes with boys auditioning for Elliott. Spielberg felt Drew Barrymore had the right imagination for the mischievous Gertie after she impressed him with a story about leading a punk rock band.[10] He enjoyed working with the children, and later said that the experience made him feel ready to be a father.[23]
E.T.'s voice was primarily performed by Pat Welsh. She smoked two packs of cigarettes a day, which gave her voice a quality that sound effects creator Ben Burtt liked. She spent nine and a half hours recording her part, and was paid $380 by Burtt for her services.[8] Burtt also recorded 16 other people and various animals to create E.T.'s "voice". These included Spielberg; Debra Winger; Burtt's sleeping wife, who had a cold; a burp from his USC film professor; and raccoons, otters, and horses.[24][25]
Doctors working at the USC Medical Center were recruited to play the medical personnel who try to save E.T. after government agents take over Elliott's house. Spielberg felt that actors in those roles, performing lines of technical medical dialogue, would come across as unnatural.[23] During post-production, he decided to cut a scene featuring Harrison Ford as the principal of Elliott's school. It featured Ford's character reprimanding Elliott for his behavior in biology class and warning of the dangers of underage drinking, then being taken aback as Elliott's chair rises from the floor while E.T. is levitating his "phone" equipment up the stairs with Gertie.[10]
The film began shooting in September 1981.[26] The project was filmed under the cover name A Boy's Life, as Spielberg did not want anyone to discover and plagiarize the plot. The actors had to read the script behind closed doors, and everyone on set had to wear an ID card.[14] The shoot began with two days at a high school in Culver City, and the crew spent the next 11 days moving between locations at Northridge and Tujunga.[8] The next 42 days were spent at Culver City's Laird International Studios, for the interiors of Elliott's home. The crew shot at a redwood forest near Crescent City for the production's last six days.[8][9] The exterior Halloween scene and the "flying bicycle" chase scenes were filmed in Porter Ranch.[27] Spielberg shot the film in roughly chronological order to achieve convincingly emotional performances from his cast. In the scene in which Michael first encounters E.T., his appearance caused MacNaughton to jump back and knock down the shelves behind him. The chronological shoot gave the young actors an emotional experience as they bonded with E.T., making the quarantine sequences more moving.[23] Spielberg ensured the puppeteers were kept away from the set to maintain the illusion of a real alien. For the first time in his career, he did not storyboard most of the film, in order to facilitate spontaneity in the performances.[26] The film was shot so adults, except for Dee Wallace, are never seen from the waist up in its first half, as a tribute to Tex Avery's cartoons.[10] The shoot was completed in 61 days, four days ahead of schedule.[9] According to Spielberg, the memorable scene where E.T. disguises himself as a stuffed toy in Elliott's closet was suggested by colleague Robert Zemeckis, after he read a draft of the screenplay that Spielberg had sent him.[28]
Longtime Spielberg collaborator John Williams, who composed the film's musical score, described the challenge of creating one that would generate sympathy for such an odd-looking creature. As with their previous collaborations, Spielberg liked every theme Williams composed and had it included. Spielberg loved the music for the final chase so much that he edited the sequence to suit it.[29] Williams took a modernist approach, especially with his use of polytonality, which refers to the sound of two different keys played simultaneously. The Lydian mode can also be used in a polytonal way. Williams combined polytonality and the Lydian mode to express a mystic, dreamlike and heroic quality. His theme, emphasizing coloristic instruments such as the harp, piano, celesta, and other keyboards, as well as percussion, suggests E.T.'s childlike nature and his "machine".[30]
There were allegations that the film was plagiarized from a 1967 script, The Alien, by the Indian Bengali director Satyajit Ray. Ray stated, "E.T. would not have been possible without my script of The Alien being available throughout the United States in mimeographed copies." Spielberg denied the claim, stating, "I was a kid in high school when his script was circulating in Hollywood."[31] Martin Scorsese, a director and friend of Spielberg's, has also said that the film was influenced by Ray's script.[32] No legal action was taken, as Ray did not want to appear to have a "vindictive" mindset toward Spielberg and acknowledged that Spielberg "has made good films and he is a good director."[33]
In 1984, a federal appeals court ruled against playwright Lisa Litchfield, who had sued Steven Spielberg for $750 million, claiming he used her one-act musical play 'Lokey from Maldemar' as the basis for E.T. The court panel stated, "No reasonable jury could conclude that Lokey and E.T. were substantially similar in their ideas and expression," adding, "Any similarities in plot exist only at the general level for which (Ms. Litchfield) cannot claim copyright protection."[34]
Spielberg drew the story of the film from his parents' divorce;[36] Gary Arnold of The Washington Post called it "essentially a spiritual autobiography, a portrait of the filmmaker as a typical suburban kid set apart by an uncommonly fervent, mystical imagination".[37] References to his childhood occur throughout: Elliott fakes illness by holding a thermometer to the bulb in his lamp while covering his face with a heating pad, a trick frequently employed by the young Spielberg.[38] Michael picking on Elliott echoes Spielberg's teasing of his younger sisters,[10] and Michael's evolution from tormentor to protector reflects how Spielberg had to take care of his sisters after their father left.[23]
Critics have focused on the parallels between E.T. and Elliott, who is "alienated" by the loss of his father.[39][40] A.O. Scott of The New York Times wrote that while E.T. "is the more obvious and desperate foundling", Elliott "suffers in his own way from the want of a home".[41] "E" and "T" are also the first and last letters of Elliott's name.[42] At the film's heart is the theme of growing up. Critic Henry Sheehan described the film as a retelling of Peter Pan from the perspective of a Lost Boy (Elliott): E.T. cannot survive physically on Earth, as Pan could not survive emotionally in Neverland; government scientists take the place of Neverland's pirates.[43] Vincent Canby of The New York Times similarly observed that the film "freely recycles elements from [...] Peter Pan and The Wizard of Oz".[44] Some critics have suggested that Spielberg's portrayal of suburbia is very dark, contrary to popular belief. According to A.O. Scott, "The suburban milieu, with its unsupervised children and unhappy parents, its broken toys and brand-name junk food, could have come out of a Raymond Carver story."[41] Charles Taylor of Salon.com wrote, "Spielberg's movies, despite the way they're often characterized, are not Hollywood idealizations of families and the suburbs. The homes here bear what the cultural critic Karal Ann Marling called 'the marks of hard use'."[36]
Other critics found religious parallels between E.T. and Jesus.[45][46] Andrew Nigels described E.T.'s story as "crucifixion by military science" and "resurrection by love and faith".[47] According to Spielberg biographer Joseph McBride, Universal Pictures appealed directly to the Christian market, with a poster reminiscent of Michelangelo's The Creation of Adam and a logo reading "Peace".[9] Spielberg answered that he did not intend the film to be a religious parable, joking, "If I ever went to my mother and said, 'Mom, I've made this movie that's a Christian parable,' what do you think she'd say? She has a Kosher restaurant on Pico and Doheny in Los Angeles."[35]
As a substantial body of film criticism has built up around the film, numerous writers have analyzed it in other ways as well. It has been interpreted as a modern fairy tale[48] and in psychoanalytic terms.[40][48] Producer Kathleen Kennedy noted that an important theme of E.T. is tolerance, which would be central to future Spielberg films such as Schindler's List.[10] Having been a loner as a teenager, Spielberg described it as "a minority story".[49] Spielberg's characteristic theme of communication is partnered with the ideal of mutual understanding: he has suggested that the story's central alien-human friendship is an analogy for how real-world adversaries can learn to overcome their differences.[50]
The film was previewed in Houston, Texas, where it received high marks from viewers.[9] It premiered at the 1982 Cannes Film Festival's closing gala,[51][52] and was released in the United States on June 11, 1982. It opened at number one with a gross of $11 million, and stayed at the top of the box office for six weeks; it then fluctuated between the first and second positions until October, before returning to the top spot for the final time in December.[53]
In 1983, E.T. surpassed Star Wars as the highest-grossing film of all time,[54] and by the end of its theatrical run it had grossed $359 million in North America and $619 million worldwide.[3][55] Box Office Mojo estimates that the film sold more than 120 million tickets in the US in its initial theatrical run.[56] Spielberg earned $500,000 a day from his share of the profits,[57][58] while The Hershey Company's profits rose 65% due to the film's prominent use of Reese's Pieces.[17] The "Official E.T. Fan Club" offered photographs, a newsletter that let readers "relive the film's unforgettable moments [and] favorite scenes", and a phonographic record with "phone home" and other sound clips.[59]
The film was re-released in 1985 and 2002, earning another $60 million and $68 million respectively,[60][61] for a worldwide total of $792 million, with North America accounting for $435 million.[3] It held the global record until it was surpassed by Jurassic Park, another Spielberg-directed film, in 1993,[62] although it held on to the domestic record for a further four years, until a Star Wars reissue reclaimed it.[63] It was eventually released on VHS and LaserDisc on October 27, 1988; to combat piracy, the tape guards and hubs on the videocassettes were colored green, the tape itself was affixed with a small holographic sticker of the 1963 Universal logo (much like the holograms on a credit card), and the release was encoded with Macrovision.[22] In North America alone, VHS sales came to $75 million.[64] In 1991, Sears began selling E.T. videocassettes exclusively at its stores as part of a holiday promotion.[65] The film was reissued on VHS and LaserDisc again in 1996; the LaserDisc included a 90-minute documentary produced and directed by Laurent Bouzereau, featuring interviews with Spielberg, producer Kathleen Kennedy, composer John Williams and other cast and crew members, as well as two theatrical trailers, an isolated music score, deleted scenes, and still galleries. The VHS included a 10-minute version of the same documentary.[66]
The film received universal acclaim. Roger Ebert gave the film four stars and wrote, "This is not simply a good movie. It is one of those movies that brush away our cautions and win our hearts."[51] He later added it to his Great Movies list, structuring the essay as a letter to his grandchildren about the first time they watched it.[69] Michael Sragow of Rolling Stone called Spielberg "a space age Jean Renoir.... for the first time, [he] has put his breathtaking technical skills at the service of his deepest feelings".[70] Derek Malcolm of The Guardian wrote that "E.T. is a superlative piece of popular cinema ... a dream of childhood, brilliantly orchestrated to involve not only children but anyone able to remember being one".[71] Leonard Maltin would include it in his list of "100 Must-See Films of the 20th Century" as one of only two movies from the 1980s.[72] George Will was one of the few to pan the film, feeling it spread subversive notions about childhood and science.[73]
The film holds a 98% approval rating on Rotten Tomatoes, based on 120 reviews, and an average rating of 9.2/10. The website's critical consensus reads: "Playing as both an exciting sci-fi adventure and a remarkable portrait of childhood, Steven Spielberg's touching tale of a homesick alien remains a piece of movie magic for young and old."[74] On Metacritic, it has a weighted average score of 91/100, based on 30 reviews, indicating "universal acclaim".[75] In addition to the many impressed critics, President Ronald Reagan and First Lady Nancy Reagan were moved by it after a screening at the White House on June 27, 1982.[58] Princess Diana was in tears after watching it.[10] On September 17, 1982, it was screened at the United Nations, and Spielberg received a UN Peace Medal.[76] CinemaScore reported that audiences gave the film a rare "A+" grade.[77]
The film was nominated for nine Oscars at the 55th Academy Awards, including Best Picture. Gandhi won that award, but its director, Richard Attenborough, declared, "I was certain that not only would E.T. win, but that it should win. It was inventive, powerful, [and] wonderful. I make more mundane movies."[78] It won four Academy Awards: Best Original Score, Best Sound (Robert Knudson, Robert Glass, Don Digirolamo and Gene Cantamessa), Best Sound Effects Editing (Charles L. Campbell and Ben Burtt), and Best Visual Effects (Carlo Rambaldi, Dennis Muren and Kenneth F. Smith).[79]
At the 40th Golden Globe Awards, the film won Best Picture in the Drama category and Best Score; it was also nominated for Best Director, Best Screenplay, and Best New Male Star for Henry Thomas. The Los Angeles Film Critics Association awarded the film Best Picture, Best Director, and a "New Generation Award" for Melissa Mathison.[80]
The film won Saturn Awards for Best Science Fiction Film, Best Writing, Best Special Effects, Best Music, and Best Poster Art, while Henry Thomas, Robert MacNaughton, and Drew Barrymore won Young Artist Awards. In addition to his Golden Globe and Saturn Award, composer John Williams won two Grammy Awards and a BAFTA for the score. The film was also honored abroad: it won the Best Foreign Language Film award at the Blue Ribbon Awards in Japan, the Cinema Writers Circle Awards in Spain, the César Awards in France, and the David di Donatello Awards in Italy.[81]
In American Film Institute polls, the film has been voted the 24th greatest film of all time,[82] the 44th most heart-pounding,[83] and the sixth most inspiring.[84] Other AFI polls rated it as having the 14th greatest music score[85] and as the third greatest science-fiction film.[86] The line "E.T. phone home" was ranked 15th on AFI's 100 Years...100 Movie Quotes list,[87] and 48th on Premiere's top movie quote list.[88]
In 2005, it topped a Channel 4 poll of the 100 greatest family films,[89] and was also listed by Time as one of the 100 best movies ever made.[90]
In 2003, Entertainment Weekly called the film the eighth most "tear-jerking";[91] in 2007, in a survey of both films and television series, the magazine declared it the seventh greatest work of science-fiction media of the past 25 years.[92] The Times named the character its ninth favorite film alien, calling E.T. "one of the best-loved non-humans in popular culture".[93] The film is among the top ten in the BFI list of the 50 films you should see by the age of 14. In 1994, it was selected for preservation in the U.S. National Film Registry as being "culturally, historically, or aesthetically significant".[94]
In 2011, ABC aired Best in Film: The Greatest Movies of Our Time, revealing the results of a poll of fans conducted by ABC and People magazine: it was selected as the fifth best film of all time and the second best science fiction film.[95]
On October 22, 2012, Madame Tussauds unveiled wax likenesses of E.T. at six of its international locations.[96]
An extended version of the film, including altered special effects, premiered at the Shrine Auditorium in Los Angeles on March 22, 2002. Certain shots of E.T. had bothered Spielberg since 1982, as he did not have enough time to perfect the animatronics. Computer-generated imagery (CGI), provided by Industrial Light & Magic (ILM), was used to modify several shots, including ones of E.T. running in the opening sequence and being spotted in the cornfield. The spaceship's design was also altered to include more lights. Scenes shot for the original version but not included were reinstated, among them E.T. taking a bath and Gertie telling Mary that Elliott went to the forest on Halloween. Spielberg did not add the scene featuring Harrison Ford, feeling that it would reshape the film too drastically. He had also become more sensitive about the scene in which gun-wielding federal agents confront Elliott and his escaping friends, and had the guns digitally replaced with walkie-talkies.[10]
At the premiere, John Williams conducted a live performance of the score.[97] The new release grossed $68 million in total, with $35 million coming from Canada and the United States.[61] The changes, particularly those to the escape scene, were criticized as political correctness. Peter Travers of Rolling Stone wondered, "Remember those guns the feds carried? Thanks to the miracle of digital, they're now brandishing walkie-talkies.... Is this what two decades have done to free speech?"[98] Chris Hewitt of Empire wrote, "The changes are surprisingly low-key...while ILM's CGI E.T. is used sparingly as a complement to Carlo Rambaldi's extraordinary puppet."[99] South Park ridiculed many of the changes in the 2002 episode "Free Hat".[100]
The two-disc DVD release that followed on October 22, 2002, contained both the original theatrical version and the 20th Anniversary extended version of the film; Spielberg personally demanded that the release feature both.[101] The features on disc one included an introduction by Steven Spielberg, a 20th Anniversary premiere featurette, John Williams' performance at the 2002 premiere, and a Space Exploration game. Disc two included a 24-minute documentary about the 20th Anniversary edition changes, a "Reunion" featurette, a trailer, cast and filmmaker bios, production notes, and the still galleries ported from the 1996 LaserDisc set. The two-disc edition, as well as a three-disc collector's edition containing a "making of" book, a certificate of authenticity, a film cell, and special features unavailable on the two-disc edition,[102] were placed in moratorium on December 31, 2002. The film was later re-released on DVD as a single-disc edition in 2005, featuring only the 20th Anniversary version.
In a June 2011 interview, Spielberg said that in the future:
There's going to be no more digital enhancements or digital additions to anything based on any film I direct.... When people ask me which E.T. they should look at, I always tell them to look at the original 1982 E.T. If you notice, when we did put out E.T. we put out two E.T.s. We put out the digitally enhanced version with the additional scenes and for no extra money, in the same package, we put out the original '82 version. I always tell people to go back to the '82 version.[103]
Atari, Inc. made a video game based on the film for the Atari 2600. Released in 1982, it was widely considered to be one of the worst video games ever made.[104]
William Kotzwinkle, author of the film's novelization, wrote a sequel, E.T.: The Book of the Green Planet, which was published in 1985. In the novel, E.T. returns home to the planet Brodo Asogi, but is subsequently demoted and sent into exile. He then attempts to return to Earth by effectively breaking all of Brodo Asogi's laws.[105]
E.T. Adventure, a theme park ride, debuted at Universal Studios Florida on June 7, 1990. The $40 million attraction features the title character saying goodbye to visitors by name.[9]
In 1998, E.T. was licensed to appear in television public service announcements produced by the Progressive Corporation. The announcements featured his voice reminding drivers to "buckle up" their seat belts. Traffic signs depicting a stylized E.T. wearing one were installed on selected roads around the United States.[106] The following year, British Telecommunications launched the "Stay in Touch" campaign, with him as the star of various advertisements. The campaign's slogan was "B.T. has E.T.", with "E.T." also taken to mean "extra technology".[107]
At Spielberg's suggestion, George Lucas included members of E.T.'s species as background characters in Star Wars: Episode I – The Phantom Menace.[108]
E.T. was one of the franchises featured in the crossover game Lego Dimensions in 2016. E.T. appears as a playable character, and a world based on the movie, where players can receive side quests from its characters, is also available.[109][110]
In July 1982, during the film's first theatrical run, Steven Spielberg and Melissa Mathison wrote a treatment for a sequel to be titled E.T. II: Nocturnal Fears.[111] It would have shown Elliott and his friends being kidnapped by evil aliens and followed their attempts to contact E.T. for help. Spielberg decided against pursuing it, feeling it "would do nothing but rob the original of its virginity".[112][113]
What is the biggest province or territory in Canada?
Nunavut🚨As a country, Canada has ten provinces and three territories. These subdivisions vary widely in both land and water area. The largest subdivision by land area is the territory of Nunavut. The largest subdivision by water area is the province of Quebec. The smallest subdivision of both land and water area is the province of Prince Edward Island.[1]
Canada is the second-largest country in the world; it has the fourth largest dry land area, and the largest freshwater area.[2]
The total area of a province or territory is the sum of its land area and the area of its internal water (freshwater only).
Areas are rounded to the nearest square kilometre or square mile. Percentages are given to the nearest tenth of a percent.
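Expressed as a single relation, the total-area definition above amounts to the following (A_total, A_land and A_fresh are shorthand symbols introduced here for the three quantities described in these notes, not values taken from the tables):

A_total = A_land + A_fresh

so, for any province or territory, salt water, whether internal or territorial, contributes to neither term.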
Land areas consist of dry land, excluding areas of freshwater and salt water.
Areas are rounded to the nearest whole unit. Percentages are given to the nearest tenth of a percent.
The internal water area data below include freshwater only (i.e., lakes, rivers, and reservoirs). They exclude internal salt water and the territorial waters claimed by Canada in the Atlantic, Pacific, and Arctic Oceans, although Canada considers its internal waters to include about 1,600,000 km2 of salt water in Hudson Bay and the ocean within and around Canada's Arctic Archipelago. Canada's territorial sea is about 200,000 km2.[3][4][5]
Areas are given to the nearest whole unit. Percentages are given to the nearest tenth of a percent.