🎉Q&A Life🥳
American Eagle has also expanded to the United Kingdom in November 2014. So far they have stores in Westfield London, Westfield Stratford City and Bluewater.[38] The Westfield London store opened on November 14
American Eagle Outfitters opened its first store in Muscat, Sultanate of Oman on October 3
American Eagle was the primary sponsor of New American Music Union, a music festival at SouthSide Works in Pittsburgh on August 8 and 9, 2008.[42] The concert featured Gnarls Barkley
In 2004, textile and apparel workers union UNITE HERE launched the "American Vulture" back-to-school boycott of American Eagle[44] in protest of alleged workers' rights violations at the company's Canadian distribution contractor National Logistics Services (NLS). On the 2007 second-quarter conference call,[45] CEO James O'Donnell clarified American Eagle's relationship with NLS and its effect on business. He explained
Since 1999, Abercrombie & Fitch has sued American Eagle Outfitters at least three times for allegedly copying its designs and its advertisements. On all occasions, American Eagle prevailed in court under the statement that A&F cannot stop American Eagle from presenting similar designs.
Hijab Controversy
In 2017 American Eagle Outfitters began to sell a jean hijab online, which sold out in days. The fall marketing also included a Muslim model wearing a hijab. The reaction was swift, some calling it "inclusive" and progressive.
Africa:
Americas:
Asia:
Europe:
Where was the first American Eagle store located?
How many games are in an MLB series?
2,430
The Major League Baseball (MLB) season schedule consists of 162 games for each of the 30 teams in the American League (AL) and National League (NL), played over approximately six months, a total of 2,430 games, plus the postseason. The regular season typically runs from early April to late September, followed by the postseason in October. The season begins with the official Opening Day and runs 26 weeks through the last Sunday of September or first Sunday of October. One or more International Opener games may be scheduled outside the United States before the official Opening Day.[1] It is possible for a given team to play a maximum of 20 games in the postseason in a given year, provided the team is a wild card and advances to each of the Division Series, Championship Series, and World Series with each series going the distance (5 games in the Division Series, 7 games in the League Championship Series/World Series).
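As a quick sanity check on the figures quoted above, the short sketch below (plain Python, assuming the 30-team, 162-game format and the postseason structure described in this passage, including the single wild-card game implied by the wild-card condition) simply restates the arithmetic.

```python
# Arithmetic check of the schedule totals quoted above (assumes the
# 30-team, 162-game format described in the passage).
teams = 30
games_per_team = 162

# Every game involves two teams, so the league-wide total is halved.
regular_season_games = teams * games_per_team // 2
assert regular_season_games == 2430

# Maximum postseason games for one team, per the passage: a wild-card team
# that plays the one-game wild-card round and then three full-length series.
max_postseason = 1 + 5 + 7 + 7   # wild card + Division Series + LCS + World Series
assert max_postseason == 20

print(regular_season_games, max_postseason)
```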
The regular season is constructed from series. Due to travel concerns and the sheer number of games, pairs of teams are never scheduled to play single games against each other (except in the instance of making up a postponed game); instead they play games on several consecutive days in the same ballpark. Most often the series are of three or four games, but two-game series are also scheduled. With only one exception, teams play one mid-week series and one weekend series per week for a total of 52 series per year (24 divisional series, 20 inter-divisional series, 8 inter-league series). Depending on the length of the series, mid-week series games are usually scheduled between Monday and Thursday, while weekend games are scheduled between Thursday and Monday. Due to the mid-week all-star break in July, teams are scheduled to play two two-game series between Monday and Thursday of another week, called a four-game "split" series, with two games in one team's ballpark, then two games in the other's to complete the 52-series schedule. A team's road games are usually grouped into a multi-series road trip; home series are grouped into homestands.
Note that rainouts and other cancellations are often rescheduled ad hoc during the season, sometimes as doubleheaders. However, if two teams are scheduled to meet for the final time in the last two weeks of the season, and the game is cancelled, it may not be rescheduled if there's no impact on the divisional or wild card races. For example, in 2016, the September 29 game between the Cleveland Indians and Detroit Tigers was cancelled due to rain because the teams were unable to reschedule a make-up date before the end of the season on October 2, and it didn't affect the divisional race. In contrast, a 2008 AL Central division game between Detroit and the Chicago White Sox needed to be made up following the last day of the regular season because it affected a division race involving the White Sox and the Minnesota Twins.
This account gives the length of the major league "championship season" schedule by league and year. It does not cover the curtailment of play by war (1918) or by strikes and lockouts (1972, 1981, 1994). The schedules for 1995 were revised and shortened from 162 to 144 games, after late resolution of the strike that had begun in 1994 required a delay in the season to accommodate limited spring training.
The listed years are those in which the league revised its schedule. For example, the National League (NL) scheduled 84 games during 1879, 1880, 1881, and 1882 – that is, four seasons from 1879, ending before 1883, the next listing. 1876 is listed here for convenience although the NL did not schedule games (see 1871 to 1876, below).
1882–1891
Thus the AA expanded its schedule to 140 games two years before the National League did so. After 1891 four AA clubs joined the NL and four were bought out, nominally creating one big league, the "National League and American Association" of 12 clubs.
1884
1890
1914: 1915
The National Association of Professional Base Ball Players (1871–1875) did not schedule games, nor did it control the number of teams, a major reason for its demise after the 1875 season. Clubs paid a $10 entry fee, later $20, to enter the Association for one season, and thereby declare for that year's national championship. Without continuing membership or heavy investment, there was little to deter a team from breaking a commitment, and though it happened, it was mainly due to clubs going out of business.
The National League organized for 1876 on a different basis, granting exclusive memberships to eight clubs that, it was generally expected, would continue from year to year, if only because membership would be profitable. But the new league followed its predecessor in merely agreeing that each club would play a certain number of matches to a decision (excluding ties) by a certain date. Boston played 70 games with its quota of ten decisions against every rival. The others achieved 56 to 68 decisions, 64 to 66 for the four western teams, as the teams from New York and Philadelphia (eastern) abandoned their schedule-concluding road trips.
For all six early seasons, prior to the first league schedule in 1877, member clubs scheduled their own matches by mutual arrangement, including championship games necessarily with member clubs, other games with members, and games with non-member clubs. Some may have practically dictated their arrangements with some others, but there was no central control or coordination.
This listing gives the greatest number of games played by any club for each season. Naturally, the leader by games played was always a strong club fielding one of the better gate attractions.
The leading numbers of games played to a decision were 33, 54, 59, 71, 82, and 70 decisions; by the listed teams except the Mutuals in 1872.
Since 1998, there have been 30 major league teams with a single advance schedule for every season that comprises 2430 games. Each team plays 162 games, 81 as the "home" team, 81 as the "visitor". (This is true even on the rare occasion when a game is played at a ballpark not home to either team.) Occasionally, the advance schedule is subsequently altered due to a game postponement or a one-game tie-breaker to determine which team will play in the postseason.
Before 2013 the schedule included 252 "interleague games" that matched one team from the American League and one from the National League; the other 2178 games matched a pair from within one league. About half of the latter matched teams from within one division and about half matched teams from different divisions in one league. In the Central Division of the National League, which alone had six teams, every pair of division rivals played 15 or 16 games. Within the other, smaller divisions every pair of teams played 18 or 19 games.
Division games (1091). There are 61 pairs of teams from within one division.
Other intraleague games (1087). There are 150 pairs of teams from two different divisions within one league.
The schedule for interleague play comprised 84 three-game series in each season from 1998 to 2012, divided as six series (18 games) for each of fourteen AL teams and as many as six for each of sixteen NL teams.
Among the 224 interleague pairs of teams, 11 played six games every year, which were scheduled in two three-game series "home and home", or one at each home ballpark. Five of these 11 special arrangements matched two teams in the same city or in neighboring cities, where they wholly or partly share territorial rights. Six were regional matches at greater distance, four of which were in the same state.
These special local and regional series accounted for 66 interleague games annually from 1998-2012, and the other 186 games were determined by rotation.
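The pre-2013 breakdown above can be reconciled the same way; this sketch simply re-adds the figures given in the text (252 interleague games, 1,091 division games, 1,087 other intraleague games, and the 66 local or regional interleague games).

```python
# Cross-check of the pre-2013 breakdown quoted above.
interleague = 84 * 3          # 84 three-game interleague series -> 252 games
division = 1091               # division games, as given in the text
other_intraleague = 1087      # other intraleague games, as given in the text

assert interleague == 252
assert division + other_intraleague == 2178            # "the other 2178 games"
assert interleague + division + other_intraleague == 2430

# Of the interleague games, 11 "natural rival" pairs met six times a year;
# the remainder were filled by rotation.
local_and_regional = 11 * 6
assert interleague - local_and_regional == 186
```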
The 2001 season was suspended for one week due to the September 11 terrorist attacks and the resulting disruptions in travel. Games scheduled for September 11–16 were rescheduled to the first week of October, and the playoffs and World Series were pushed back one week from their originally planned dates, which resulted in the World Series continuing into early November.
Schedule changes for 2013, precipitated by realignment that created two equal-sized leagues of 15 teams each, gave every team 20 interleague games.[2] Sixteen of these were determined by a match of divisions, one from each league: all teams in a given division play all teams in one designated division from the other league. (Each plays a three-game series against four teams from the designated division and two two-game series against the remaining team.)
The matched divisions rotate annually:
Each team played its four other interleague games against a designated "natural rival", with two games in each club's city. Thus all 30 teams, rather than 22 of 30 as previously, were deemed to have a natural rival in the other league. In 2013 the natural rivalry games were all scheduled for May 27 to May 30 (Memorial Day weekend) but in 2014 their scheduled dates range from May to August.
Ten of the natural rivalries from 2012 and earlier continued, while the Houston–Texas "Lone Star" rivalry was transformed into an intra-division one with 19 games played. Five of the special arrangements were new in 2013, including one each for Houston and Texas.
For 2014, four of the five new rivalries were revised, all except the Detroit–Pittsburgh pairing.
Every team now plays 19 games against each of 4 opponents within its division (76 games), and 6 or 7 games against each of 10 opponents from other divisions within its own league (66 games).
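Again as an illustrative check (assuming the per-opponent counts stated in the preceding paragraph), the per-team totals add up as follows.

```python
# Per-team schedule arithmetic for the post-2013 format described above.
division_games = 19 * 4        # 19 games against each of 4 division opponents
other_intraleague = 66         # 6 or 7 games against each of 10 same-league opponents
interleague = 20               # 16 division-vs-division games + 4 vs. the natural rival

assert division_games == 76
assert division_games + other_intraleague + interleague == 162
```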
When corresponding divisions (i.e. NL East vs. AL East) play each other, a slight adjustment is made to the interleague games. Teams play six games against their natural rival, four games (two home, two away) against each of two opponents, and one three-game series against each of the remaining two teams (one series at home, the other on the road), for a total of 14 games against the four non-rival teams in the opposing division. This was done in 2015, and will next occur in 2018.
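For those corresponding-division years, the 20 interleague games split as just described; the sketch below spells out that count (a reading of the passage above, not an official MLB formula).

```python
# Interleague breakdown in a corresponding-division year (e.g. NL East vs. AL East),
# as described in the passage above.
vs_natural_rival = 6
vs_two_opponents = 2 * 4       # two opponents, four games each (home and home)
vs_remaining_two = 2 * 3       # remaining two opponents, one three-game series each

assert vs_two_opponents + vs_remaining_two == 14
assert vs_natural_rival + vs_two_opponents + vs_remaining_two == 20
```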
On September 12, 2017, the schedule of the 2018 season was released, which contains a number of changes to practices. The overall length of the season has been extended to 187 days with the addition of four off-days for all teams. All teams will play on Opening Day, which for 2018 will be held on March 29. Per the Collective Bargaining Agreement, Sunday Night Baseball will no longer be played on the final Sunday before the All-Star Game, in order to ease travel time for those who are participating in the Home Run Derby. A single, nationally-televised afternoon game will be played the following Thursday, with all other teams returning to play on Friday.[3]
Start times of Major League Baseball games depend on the day of the week, the game number in a series, holidays, and other factors. Most games start at 7 pm in the local time zone, so there are more night games than day games even though baseball is traditionally played during the day. Night games predominate because they attract more fans to ballparks, as well as more viewers at home, since most fans are at work or school during the day. On Mondays (excluding Opening Day and holidays), Tuesdays, and Fridays, games are almost exclusively played at night, except for Cubs home games. Getaway days, days on which teams play their last game of the series before departing for another series in another city the next day, are usually day games, mainly Sundays, Wednesdays, and Thursdays. On Sundays, usually all but one are day games, with the final game reserved for ESPN's Sunday Night Baseball.
About half of Saturday games are day games (1, 2 or 4 pm ET). In some markets, Saturday night games start an hour earlier than usual night start times, while other cities start Saturday night games at the same time as weeknight games. In short, weekday games are played only at night except on getaway days, while many weekend games are played during the day.
The initial pitch typically occurs 5 or 10 minutes after the hour, in order to allow time for pre-game ceremonies.
When did the first free settlers arrive in Australia?
16 January 1793
The history of Australia from 1788 to 1850 covers the early colonial period of Australia's history, from the arrival in 1788 of the First Fleet of British ships at Sydney, New South Wales, and the establishment of the penal colony, through the scientific exploration of the continent and, later, the establishment of other Australian colonies and the beginnings of representative democratic government.
European colonisation created a new dominant society in Australia in place of the pre-existing population of Indigenous Australians, and socio-political debate continues in the 21st century as to whether the colonisation process should be described as settlement, invasion, or a mixture of both.
It is commonly reported that the colonisation of Australia was driven by the need to address overcrowding in the British prison system, and the fact of the British losing the Thirteen Colonies of America in the American Revolution; however, it was simply not economically viable to transport convicts halfway around the world for this reason alone.[1] Many convicts were either skilled tradesmen or farmers who had been convicted for trivial crimes and were sentenced to seven years transportation, the time required to set up the infrastructure for the new colony. Convicts were often given pardons prior to or on completion of their sentences and were allocated parcels of land to farm.
Sir Joseph Banks, the eminent scientist who had accompanied Lieutenant James Cook on his 1770 voyage, recommended Botany Bay as a suitable site.[2] Banks accepted an offer of assistance made by the American Loyalist James Matra in July 1783. Matra had visited Botany Bay with Banks in 1770 as a junior officer on the Endeavour commanded by James Cook. Under Banks's guidance, he rapidly produced "A Proposal for Establishing a Settlement in New South Wales" (24 August 1783), with a fully developed set of reasons for a colony composed of American Loyalists, Chinese and South Sea Islanders (but not convicts).[3]
Following an interview with Secretary of State Lord Sydney in March 1784, Matra amended his proposal to include convicts as settlers.[4] Matra's plan can be seen to have provided the original blueprint for settlement in New South Wales.[5] A cabinet memorandum of December 1784 shows the Government had Matra's plan in mind when considering the creation of a settlement in New South Wales.[5][6] The London Chronicle of 12 October 1786 said: "Mr. Matra, an Officer of the Treasury, who, sailing with Capt. Cook, had an opportunity of visiting Botany Bay, is the Gentleman who suggested the plan to Government of transporting convicts to that island." The Government also incorporated into the colonisation plan the project for settling Norfolk Island, with its attractions of timber and flax, proposed by Banks's Royal Society colleagues, Sir John Call and Sir George Young.[7]
On 13 May 1787, the First Fleet of 11 ships and about 1,530 people (736 convicts, 17 convicts' children, 211 marines, 27 marines' wives, 14 marines' children and about 300 officers and others) under the command of Captain Arthur Phillip set sail for Botany Bay.[8][9][10] A few days after arrival at Botany Bay the fleet moved to the more suitable Port Jackson where a settlement was established at Sydney Cove on 26 January 1788.[11] This date later became Australia's national day, Australia Day. The colony was formally proclaimed by Governor Phillip on 7 February 1788 at Sydney. Sydney Cove offered a fresh water supply and a safe harbour, which Phillip famously described as:[12]
Phillip named the settlement after the Home Secretary, Thomas Townshend, 1st Baron Sydney (Viscount Sydney from 1789). The only people at the flag raising ceremony and the formal taking of possession of the land in the name of King George III were Phillip and a few dozen marines and officers from the Supply, the rest of the ship's company and the convicts witnessing it from on board ship. The remaining ships of the Fleet were unable to leave Botany Bay until later on 26 January because of a tremendous gale.[13] The new colony was formally proclaimed as the Colony of New South Wales on 7 February.[14]
On 24 January 1788 a French expedition of two ships led by Admiral Jean-François de La Pérouse had arrived off Botany Bay, on the latest leg of a three-year voyage that had taken them from Brest, around Cape Horn, up the coast from Chile to California, north-west to Kamchatka, south-east to Easter Island, north-west to Macao, and on to the Philippines, the Friendly Isles, Hawaii and Norfolk Island.[15] Though amicably received, the French expedition was a troublesome matter for the British, as it showed the interest of France in the new land.
Nevertheless, on 2 February Lieutenant King, at Phillip's request, paid a courtesy call on the French and offered them any assistance they may need.[13] The French made the same offer to the British, as they were much better provisioned than the British and had enough supplies to last three years.[13] Neither of these offers was accepted. On 10 March[13] the French expedition, having taken on water and wood, left Botany Bay, never to be seen again. Phillip and La Pérouse never met. La Pérouse is remembered in a Sydney suburb of that name. Various other French geographical names along the Australian coast also date from this expedition.
Governor Phillip was vested with complete authority over the inhabitants of the colony. Phillip's personal intent was to establish harmonious relations with local Aboriginal people and try to reform as well as discipline the convicts of the colony. Phillip and several of his officers – most notably Watkin Tench – left behind journals and accounts which tell of immense hardships during the first years of settlement. Often Phillip's officers despaired for the future of New South Wales. Early efforts at agriculture were fraught and supplies from overseas were few and far between. Between 1788 and 1792 about 3546 male and 766 female convicts were landed at Sydney – many "professional criminals" with few of the skills required for the establishment of a colony. Many new arrivals were also sick or unfit for work and the conditions of healthy convicts only deteriorated with hard labour and poor sustenance in the settlement. The food situation reached crisis point in 1790 and the Second Fleet, which finally arrived in June 1790, had lost a quarter of its "passengers" through sickness, while the condition of the convicts of the Third Fleet appalled Phillip. From 1791, however, the more regular arrival of ships and the beginnings of trade lessened the feeling of isolation and improved supplies.[16]
In 1792, two French ships, La Recherche and L'Espérance, anchored in a harbour near Tasmania's southernmost point, which they called Recherche Bay. This was at a time when Britain and France were trying to be the first to discover and colonise Australia. The expedition carried scientists and cartographers, gardeners, artists and hydrographers who, variously, planted, identified, mapped, marked, recorded and documented the environment and the people of the new lands that they encountered, at the behest of the fledgling Société d'Histoire Naturelle.
White settlement began with a consignment of English convicts, guarded by a detachment of the Royal Marines, a number of whom subsequently stayed in the colony as settlers. Their view of the colony and their place in it was eloquently stated by Captain David Collins: "From the disposition to crimes and the incorrigible character of the major part of the colonists, an odium was, from the first, illiberally thrown upon the settlement; and the word "Botany Bay" became a term of reproach that was indiscriminately cast upon every one who resided in New South Wales. But let the reproach light upon those who have used it as such... if the honour of having deserved well of one's country be attainable by sacrificing good name, domestic comforts, and dearest connections in her service, the officers of this settlement have justly merited that distinction".[17]
When the Bellona transport came to anchor in Sydney Cove on 16 January 1793, she brought with her the first immigrant free settlers. They were: Thomas Rose, a farmer from Dorset, his wife and four children; he was allowed a grant of 120 acres; Frederic Meredith, who had formerly been at Sydney with HMS Sirius; Thomas Webb (who had also been formerly at Sydney with the Sirius), his wife, and his nephew, Joseph Webb; Edward Powell, who had formerly been at Sydney with the Juliana transport, and who married a free woman after his arrival. Thomas Webb and Edward Powell each received a grant of 80 acres; and Joseph Webb and Frederic Meredith received 60 acres each.
The conditions they had come out under were that they should be provided with a free passage, be furnished with agricultural tools and implements by the Government, have two years' provisions, and have grants of land free of expense. They were likewise to have the labour of a certain number of convicts, who were also to be provided with two years' rations and one year's clothing from the public stores. The land assigned to them was some miles to the westward of Sydney, at a place named by the settlers, "Liberty Plains". It is now the area covered mainly by the suburbs of Strathfield and Homebush.
One in three convicts transported after 1798 was Irish, about a fifth of whom were transported in connection with the political and agrarian disturbances common in Ireland at the time. While the settlers were reasonably well-equipped, little consideration had been given to the skills required to make the colony self-supporting ÿ few of the first wave convicts had farming or trade experience (nor the soldiers), and the lack of understanding of Australia's seasonal patterns saw initial attempts at farming fail, leaving only what animals and birds the soldiers were able to shoot. The colony nearly starved, and Phillip was forced to send a ship to Batavia (Jakarta) for supplies. Some relief arrived with the Second Fleet in 1790, but life was extremely hard for the first few years of the colony.
Convicts were usually sentenced to seven or fourteen years' penal servitude, or "for the term of their natural lives". Often these sentences had been commuted from the death sentence, which was technically the punishment for a wide variety of crimes. Upon arrival in a penal colony, convicts would be assigned to various kinds of work. Those with trades were given tasks to fit their skills (stonemasons, for example, were in very high demand) while the unskilled were assigned to work gangs to build roads and do other such tasks. Female convicts were usually assigned as domestic servants to the free settlers, many being forced into prostitution.[18]
Where possible, convicts were assigned to free settlers who would be responsible for feeding and disciplining them; in return for this, the settlers were granted land. This system reduced the workload on the central administration. Those convicts who weren't assigned to settlers were housed at barracks such as the Hyde Park Barracks or the Parramatta female factory.
Convict discipline was harsh; convicts who would not work or who disobeyed orders were punished by flogging, being put in stricter confinement (e.g. leg-irons), or being transported to a stricter penal colony. The penal colonies at Port Arthur and Moreton Bay, for instance, were stricter than the one at Sydney, and the one at Norfolk Island was strictest of all. Convicts were assigned to work gangs to build roads, buildings, and the like. Female convicts, who made up 20% of the convict population, were usually assigned as domestic help to soldiers. Those convicts who behaved were eventually issued with ticket of leave, which allowed them a certain degree of freedom. Those who saw out their full sentences or were granted a pardon usually remained in Australia as free settlers, and were able to take on convict servants themselves.
In 1789 former convict James Ruse produced the first successful wheat harvest in NSW. He repeated this success in 1790 and, because of the pressing need for food production in the colony, was rewarded by Governor Phillip with the first land grant made in New South Wales. Ruse's 30 acre grant at Rose Hill, near Parramatta, was aptly named 'Experiment Farm'.[19] This was the colony's first successful farming enterprise, and Ruse was soon joined by others. The colony began to grow enough food to support itself, and the standard of living for the residents gradually improved.
In 1804 the Castle Hill convict rebellion was led by around 200 escaped, mostly Irish convicts, although it was broken up quickly by the New South Wales Corps. On 26 January 1808, there was a military rebellion against Governor Bligh led by John Macarthur. Following this, Governor Lachlan Macquarie was given a mandate to restore government and discipline in the colony. When he arrived in 1810, he forcibly deported the NSW Corps and brought the 73rd regiment to replace them.
In October 1795 George Bass and Matthew Flinders, accompanied by William Martin, sailed the boat Tom Thumb out of Port Jackson to Botany Bay and explored the Georges River further upstream than had been done previously by the colonists. Their reports on their return led to the settlement of Banks' Town.[25] In March 1796 the same party embarked on a second voyage in a similar small boat, which they also called the Tom Thumb.[26] During this trip they travelled as far down the coast as Lake Illawarra, which they called Tom Thumb Lagoon. They discovered and explored Port Hacking. In 1798–99, Bass and Flinders set out in a sloop and circumnavigated Van Diemen's Land, thus proving it to be an island.[27]
Aboriginal guides and assistance in the European exploration of the colony were common and often vital to the success of missions. In 1801–02 Matthew Flinders in The Investigator led the first circumnavigation of Australia. Aboard ship was the Aboriginal explorer Bungaree, of the Sydney district, who became the first person born on the Australian continent to circumnavigate the Australian continent.[27] Previously, the famous Bennelong and a companion had become the first people born in the area of New South Wales to sail for Europe, when, in 1792, they accompanied Governor Phillip to England and were presented to King George III.[27]
In 1813, Gregory Blaxland, William Lawson and William Wentworth succeeded in crossing the formidable barrier of forested gulleys and sheer cliffs presented by the Blue Mountains, west of Sydney, by following the ridges instead of looking for a route through the valleys. At Mount Blaxland they looked out over "enough grass to support the stock of the colony for thirty years", and expansion of the British settlement into the interior could begin.[28]
In 1824 the Governor, Sir Thomas Brisbane, commissioned Hamilton Hume and former Royal Navy Captain William Hovell to lead an expedition to find new grazing land in the south of the colony, and also to find an answer to the mystery of where New South Wales's western rivers flowed. Over 16 weeks in 1824–25, Hume and Hovell journeyed to Port Phillip and back. They made many important discoveries including the Murray River (which they named the Hume), many of its tributaries, and good agricultural and grazing lands between Gunning, New South Wales and Corio Bay, Victoria.[29]
Charles Sturt led an expedition along the Macquarie River in 1828 and discovered the Darling River. A theory had developed that the inland rivers of New South Wales were draining into an inland sea. Leading a second expedition in 1829, Sturt followed the Murrumbidgee River into a 'broad and noble river', the Murray River, which he named after Sir George Murray, secretary of state for the colonies. His party then followed this river to its junction with the Darling River, facing two threatening encounters with local Aboriginal people along the way. Sturt continued down river on to Lake Alexandrina, where the Murray meets the sea in South Australia. Suffering greatly, the party had to then row back upstream hundreds of kilometres for the return journey.[30]
Surveyor General Sir Thomas Mitchell conducted a series of expeditions from the 1830s to 'fill in the gaps' left by these previous expeditions. He was meticulous in seeking to record the original Aboriginal place names around the colony, for which reason the majority of place names to this day retain their Aboriginal titles.[31]
The Polish scientist/explorer Count Paul Edmund Strzelecki conducted surveying work in the Australian Alps in 1839 and became the first European to ascend Australia's highest peak, which he named Mount Kosciuszko in honour of the Polish patriot Tadeusz Kosciuszko.[32]
Traditional Aboriginal society had been governed by councils of elders and a corporate decision-making process, but the first European-style governments established after 1788 were autocratic and run by appointed governors – although English law was transplanted into the Australian colonies by virtue of the doctrine of reception, and thus notions of the rights and processes established by the Magna Carta and the Bill of Rights 1689 were brought from Britain by the colonists. Agitation for representative government began soon after the settlement of the colonies.[33]
The Second Fleet in 1790 brought to Sydney two men who were to play important roles in the colony's future. One was D'Arcy Wentworth, whose son, William Charles, went on to be an explorer, to found Australia's first newspaper and to become a leader of the movement to abolish convict transportation and establish representative government. The other was John Macarthur, a Scottish army officer and founder of the Australian wool industry, which laid the foundations of Australia's future prosperity. Macarthur was a turbulent element: in 1808 he was one of the leaders of the Rum Rebellion against the governor, William Bligh.
From about 1815 the colony, under the governorship of Lachlan Macquarie, began to grow rapidly as free settlers arrived and new lands were opened up for farming. Despite the long and arduous sea voyage, settlers were attracted by the prospect of making a new life on virtually free Crown land. From the late 1820s settlement was only authorised in the limits of location, known as the Nineteen Counties.
Many settlers occupied land without authority and beyond these authorised settlement limits: they were known as squatters and became the basis of a powerful landowning class. As a result of opposition from the labouring and artisan classes, transportation of convicts to Sydney ended in 1840, although it continued in the smaller colonies of Van Diemen's Land (first settled in 1803, later renamed Tasmania) and Moreton Bay (founded 1824, and later renamed Queensland) for a few years more.
The Swan River Settlement (as Western Australia was originally known), centred on Perth, was founded in 1829. The colony suffered from a long-term shortage of labour, and by 1850 local capitalists had succeeded in persuading London to send convicts. (Transportation did not end until 1868.) New Zealand was part of New South Wales until 1840 when it became a separate colony.
The first governments established after 1788 were autocratic and each colony was governed by a British Governor, appointed by the British monarch. There was considerable unhappiness with the way some of the colonies were run. In most cases the administration of the early colonies was carried out by the British military. The New South Wales Corps, which was in charge of New South Wales, became known as the "Rum Corps", due to its stranglehold on the distribution of rum, which was used as a makeshift currency at the time. In New South Wales this led to the "Rum Rebellion".
The oldest legislative body in Australia, the New South Wales Legislative Council, was created in 1825 as an appointed body to advise the Governor of New South Wales. William Wentworth established the Australian Patriotic Association (Australia's first political party) in 1835 to demand democratic government for New South Wales. The reformist attorney general, John Plunkett, sought to apply Enlightenment principles to governance in the colony, pursuing the establishment of equality before the law, first by extending jury rights to emancipists, then by extending legal protections to convicts, assigned servants and Aborigines. Plunkett twice charged the colonist perpetrators of the Myall Creek massacre of Aborigines with murder, resulting in a conviction. His landmark Church Act of 1836 disestablished the Church of England and established legal equality between Anglicans, Catholics, Presbyterians and later Methodists.[34]
In 1840, the Adelaide City Council and the Sydney City Council were established. Men who possessed 1000 pounds worth of property were able to stand for election and wealthy landowners were permitted up to four votes each in elections. Australia's first parliamentary elections were conducted for the New South Wales Legislative Council in 1843, again with voting rights (for males only) tied to property ownership or financial capacity. Voter rights were extended further in New South Wales in 1850 and elections for legislative councils were held in the colonies of Victoria, South Australia and Tasmania.[35]
By the mid-19th century, there was a strong desire for representative and responsible government in the colonies of Australia, later fed by the democratic spirit of the goldfields and the ideas of the great reform movements sweeping Europe, the United States and the British Empire. The end of convict transportation accelerated reform in the 1840s and 1850s. The Australian Colonies Government Act (1850) was a landmark development which granted representative constitutions to New South Wales, Victoria, South Australia and Tasmania, and the colonies enthusiastically set about writing constitutions which produced democratically progressive parliaments – though the constitutions generally maintained the role of the colonial upper houses as representative of social and economic "interests" and all established constitutional monarchies with the British monarch as the symbolic head of state.[36]
The colonies relied heavily on imports from England for survival.
The official currency of the colonies was the British pound, but the unofficial currency and most readily accepted trade good was rum. During this period Australian businessmen began to prosper. For example, the partnership of Berry and Wollstonecraft made enormous profits by means of land grants, convict labour, and exporting native cedar back to England.
Since time immemorial in Australia, indigenous people had performed the rites and rituals of the animist religion of the Dreamtime. The permanent presence of Christianity in Australia however, came with the arrival of the First Fleet of British convict ships at Sydney in 1788. As a British colony, the predominant Christian denomination was the Church of England, but one-tenth of all the convicts who came to Australia on the First Fleet were Catholic, and at least half of them were born in Ireland.[37]
A small proportion of British marines were also Catholic. Some of the Irish convicts had been transported to Australia for political crimes or social rebellion in Ireland, so the authorities were suspicious of the minority religion for the first three decades of settlement.[38] It was therefore the crew of the French explorer La Pérouse who conducted the first Catholic ceremony on Australian soil in 1788 – the burial of Father Louis Receveur, a Franciscan monk, who died while the ships were at anchor at Botany Bay, while on a mission to explore the Pacific.[39]
In early colonial times, Church of England clergy worked closely with the governors. Richard Johnson, Anglican chaplain to the First Fleet, was charged by the governor, Arthur Phillip, with improving "public morality" in the colony, but he was also heavily involved in health and education.[40] The Reverend Samuel Marsden (1765–1838) had magisterial duties, and so was equated with the authorities by the convicts. He became known as the 'flogging parson' for the severity of his punishments.[41]
Catholic convicts were compelled to attend Church of England services and their children and orphans were raised by the authorities as Protestant.[42] The first Catholic priests arrived in Australia as convicts in 1800 – James Harold, James Dixon, and Peter O'Neill, who had been convicted for 'complicity' in the Irish 1798 Rebellion. Fr Dixon was conditionally emancipated and permitted to celebrate Mass. On 15 May 1803, in vestments made from curtains and with a chalice made of tin, he conducted the first Catholic Mass in New South Wales.[42]
The Irish-led Castle Hill Rebellion of 1804 alarmed the British authorities and Dixon's permission to celebrate Mass was revoked. Fr Jeremiah Flynn, an Irish Cistercian, was appointed as Prefect Apostolic of New Holland, and set out from Britain for the colony, uninvited. Watched by authorities, Flynn secretly performed priestly duties before being arrested and deported to London. Reaction to the affair in Britain led to two further priests being allowed to travel to the Colony in 1820 – John Joseph Therry and Philip Connolly.[38] The foundation stone for the first St Mary's Cathedral, Sydney was laid on 29 October 1821 by Governor Lachlan Macquarie.
The absence of a Catholic mission in Australia before 1818 reflected the legal disabilities of Catholics in Britain and the difficult position of Ireland within the British Empire. The government therefore endorsed the English Benedictines to lead the early Church in the Colony.[43] The Church of England lost its legal privileges in the Colony of New South Wales by the Church Act of 1836. Drafted by the reformist attorney-general John Plunkett, the Act established legal equality for Anglicans, Catholics and Presbyterians and was later extended to Methodists.[44] Catholic missionary William Ullathorne criticised the convict system, publishing a pamphlet, The Horrors of Transportation Briefly Unfolded to the People, in Britain in 1837.[45] Laywoman Caroline Chisholm did ecumenical work to alleviate the suffering of female migrants.
Initially, education was informal, primarily occurring in the home.[citation needed] At the instigation of the then British Prime Minister, the Duke of Wellington, and with the patronage of King William IV, Australia's oldest surviving independent school, The King's School, Parramatta, was founded in 1831 as part of an effort to establish grammar schools in the colony.[46] By 1833, there were around ten Catholic schools in the Australian colonies.[38] Today one in five Australian students attend Catholic schools.[47]
Sydney's first Catholic Bishop, John Bede Polding, requested a community of nuns be sent to the colony and five Irish Sisters of Charity arrived in 1838 to set about pastoral care of convict women and work in schools and hospitals before going on to found their own schools and hospitals.[48] At Polding's request, the Christian Brothers arrived in Sydney in 1843 to assist in schools. Establishing themselves first at Sevenhill, in South Australia in 1848, the Jesuits were the first religious order of priests to enter and establish houses in South Australia, Victoria, Queensland and the Northern Territory – where they established schools and missions.
Some Australian folksongs date to this period.
Among the first true works of Australian literature produced over this period were the accounts of the settlement of Sydney by Watkin Tench, a captain of the marines on the First Fleet to arrive in 1788. In 1819, poet, explorer, journalist and politician William Wentworth published the first book written by an Australian: A Statistical, Historical, and Political Description of the Colony of New South Wales and Its Dependent Settlements in Van Diemen's Land, With a Particular Enumeration of the Advantages Which These Colonies Offer for Emigration and Their Superiority in Many Respects Over Those Possessed by the United States of America,[49] in which he advocated an elected assembly for New South Wales, trial by jury and settlement of Australia by free emigrants rather than convicts. In 1838 The Guardian: a tale by Anna Maria Bunn was published in Sydney. It was the first Australian novel printed and published in mainland Australia and the first Australian novel written by a woman. It is a Gothic romance.[50]
European traditions of Australian theatre also came with the First Fleet, with the first production being performed in 1789 by convicts: The Recruiting Officer by George Farquhar.[51] Two centuries later, the extraordinary circumstances of the foundations of Australian theatre were recounted in Our Country's Good by Timberlake Wertenbaker: the participants were prisoners watched by sadistic guards and the leading lady was under threat of the death penalty. The play is based on Thomas Keneally's novel The Playmaker.[51] The Theatre Royal, Hobart, opened in 1837 and it remains the oldest theatre in Australia.[52] The Melbourne Athenaeum is one of the oldest public institutions in Australia, founded in 1839, and it served as library, school of arts and dance hall (and later became Australia's first cinema, screening The Story of the Kelly Gang, the world's first feature film, in 1906).[53] The Queen's Theatre, Adelaide opened with Shakespeare in 1841 and is today the oldest theatre on the mainland.[54]
Aboriginal reactions to the sudden arrival of British settlers were varied, but often hostile when the presence of the colonisers led to competition over resources, and to the occupation by the British of Aboriginal lands. European diseases decimated Aboriginal populations, and the occupation or destruction of lands and food resources led to starvation. By contrast with New Zealand, where the Treaty of Waitangi was seen to legitimise British settlement, no treaty was signed with Aborigines, who never authorised British colonisation.
According to the historian Geoffrey Blainey, in Australia during the colonial period:
"In a thousand isolated places there were occasional shootings and spearings. Even worse, smallpox, measles, influenza and other new diseases swept from one Aboriginal camp to another... The main conqueror of Aborigines was to be disease and its ally, demoralisation".[55]
Since the 1980s, the use of the word "invasion" to describe the British colonisation of Australia has been highly controversial. According to the Australian historian Henry Reynolds, however, government officials and ordinary settlers in the eighteenth and nineteenth centuries frequently used words such as "invasion" and "warfare" to describe their presence and relations with Indigenous Australians. In his book The Other Side of the Frontier,[56] Reynolds described in detail armed resistance by Aboriginal people to white encroachments by means of guerilla warfare, beginning in the eighteenth century and continuing into the early twentieth.
In the early years of colonisation, David Collins, the senior legal officer in the Sydney settlement, wrote of local Aborigines:
While they entertain the idea of our having dispossessed them of their residences, they must always consider us as enemies; and upon this principle they [have] made a point of attacking the white people whenever opportunity and safety concurred.[57]
In 1847, Western Australian barrister E.W. Landor stated: "We have seized upon the country, and shot down the inhabitants, until the survivors have found it expedient to submit to our rule. We have acted as Julius Caesar did when he took possession of Britain."[58] In most cases, Reynolds says, Aborigines initially resisted British presence. In a letter to the Launceston Advertiser in 1831, a settler wrote:
We are at war with them: they look upon us as enemies – as invaders – as oppressors and persecutors – they resist our invasion. They have never been subdued, therefore they are not rebellious subjects, but an injured nation, defending in their own way, their rightful possessions which have been torn from them by force.[59]
Reynolds quotes numerous writings by settlers who, in the first half of the nineteenth century, described themselves as living in fear and even in terror due to attacks by Aborigines determined to kill them or drive them off their lands. He argues that Aboriginal resistance was, in some cases at least, temporarily effective; the Aboriginal killings of men, sheep and cattle, and burning of white homes and crops, drove some settlers to ruin. Aboriginal resistance continued well beyond the middle of the nineteenth century, and in 1881 the editor of The Queenslander wrote:
During the last four or five years the human life and property destroyed by the aborigines in the North total up to a serious amount. [...] [S]ettlement on the land, and the development of the mineral and other resources on the country, have been in a great degree prohibited by the hostility of the blacks, which still continues with undiminished spirit.[60]
Reynolds argues that continuous Aboriginal resistance for well over a century belies the "myth" of peaceful settlement in Australia. Settlers in turn often reacted to Aboriginal resistance with great violence, resulting in numerous indiscriminate massacres by whites of Aboriginal men, women and children.[61] Among the most famous massacres of the early nineteenth century were the Pinjarra massacre and the Myall Creek massacre.
Famous Aborigines who resisted British colonisation in the eighteenth and early nineteenth centuries include Pemulwuy and Yagan. In Tasmania, the "Black War" was fought in the first half of the nineteenth century.
How many Union states were there in the Civil War?
20 free states and 4 border slave states
During the American Civil War (1861–1865), the Union referred to the United States of America and specifically to the national government of President Abraham Lincoln and the 20 free states and 4 border slave states (some with split governments and troops sent both north and south) that supported it. The Union was opposed by 11 southern slave states (or 13, according to the Southern view, plus one western territory) that formed the Confederate States of America, also known as "the Confederacy".
All of the Union's states provided soldiers for the United States Army (also known as the Union Army), though the border areas also sent tens of thousands of soldiers south into the Confederacy. The Border states were essential as a supply base for the Union invasion of the Confederacy, and Lincoln realized he could not win the war without control of them, especially Maryland, which lay north of the national capital of Washington, D.C. The Northeast and upper Midwest provided the industrial resources for a mechanized war producing large quantities of munitions and supplies, as well as financing for the war. The Midwest provided soldiers, food, horses, financial support, and training camps. Army hospitals were set up across the Union. Most states had Republican Party governors who energetically supported the war effort and suppressed anti-war subversion in 1863–64. The Democratic Party strongly supported the war at the beginning in 1861 but by 1862 was split between the War Democrats and the anti-war element led by the "Copperheads". The Democrats made major electoral gains in 1862 in state elections, most notably in New York. They lost ground in 1863, especially in Ohio. In 1864, the Republicans campaigned under the National Union Party banner, which attracted many War Democrats and soldiers and scored a landslide victory for Lincoln and his entire ticket against opposition candidate George B. McClellan, former General-in-Chief of the Union Army and its eastern Army of the Potomac.
The war years were quite prosperous except where serious fighting and guerrilla warfare took place along the southern border. Prosperity was stimulated by heavy government spending and the creation of an entirely new national banking system. The Union states invested a great deal of money and effort in organizing psychological and social support for soldiers' wives, widows, and orphans, and for the soldiers themselves. Most soldiers were volunteers, although after 1862 many volunteered in order to escape the draft and to take advantage of generous cash bounties on offer from states and localities. Draft resistance was notable in some larger cities, especially New York City with its massive anti-draft riots of July 1863 and in some remote districts such as the coal mining areas of Pennsylvania.
In the context of the American Civil War, the Union is sometimes referred to as "the North", both then and now, as opposed to the Confederacy, which was "the South". The Union never recognized the legitimacy of the Confederacy's secession and maintained at all times that it remained entirely a part of the United States of America. In foreign affairs the Union was the only side recognized by all other nations, none of which officially recognized the Confederate government. The term "Union" occurs in the first governing document of the United States, the Articles of Confederation and Perpetual Union. The subsequent Constitution of 1787 was issued and ratified in the name not of the states, but of "We the People of the United States, in Order to form a more perfect Union ...". Union, for the United States of America, is then repeated in such clauses as the Admission to the Union clause in Article IV, Section 3.
Even before the war started, the phrase "preserve the Union" was commonplace, and a "union of states" had been used to refer to the entire United States of America. Using the term "Union" to apply to the non-secessionist side carried a connotation of legitimacy as the continuation of the pre-existing political entity.[1]
Confederates generally saw the Union states as being opposed to slavery, occasionally referring to them as abolitionists, as in reference to the U.S. Navy as the "Abolition fleet" and the U.S. Army as the "Abolition forces".[2]
Unlike the Confederacy, the Union had a large industrialized and urbanized area (the Northeast), and more advanced commercial, transportation and financial systems than the rural South.[3] Additionally, the Union states had a manpower advantage of 5 to 2 at the start of the war.[4]
Year by year, the Confederacy shrank and lost control of increasing quantities of resources and population. Meanwhile, the Union turned its growing potential advantage into a much stronger military force. However, much of the Union strength had to be used to garrison conquered areas, and to protect railroads and other vital points. The Union's great advantages in population and industry would prove to be vital long-term factors in its victory over the Confederacy, but it took the Union a long while to fully mobilize these resources.
The attack on Fort Sumter rallied the North to the defense of American nationalism. Historian Allan Nevins says:
The thunderclap of Sumter produced a startling crystallization of Northern sentiment ... Anger swept the land. From every side came news of mass meetings, speeches, resolutions, tenders of business support, the muster of companies and regiments, the determined action of governors and legislatures.[5]
McClintock states:
At the time, Northerners were right to wonder at the near unanimity that so quickly followed long months of bitterness and discord. It would not last throughout the protracted war to come – or even through the year – but in that moment of unity was laid bare the common Northern nationalism usually hidden by the fierce battles more typical of the political arena.[6]
Historian Michael Smith argues that, as the war ground on year after year, the spirit of American republicanism grew stronger and generated fears of corruption in high places. Voters became afraid of power being centralized in Washington, extravagant spending, and war profiteering. Democratic candidates emphasized these fears. The candidates added that rapid modernization was putting too much political power in the hands of Eastern financiers and industrialists. They warned that the abolition of slavery would bring a flood of freed blacks into the labor market of the North.
Republicans responded with claims of defeatism. They indicted Copperheads for criminal conspiracies to free Confederate prisoners of war, and played on the spirit of nationalism and the growing hatred of the slaveowners, as the guilty party in the war.[7]
Historians have overwhelmingly praised the "political genius" of Abraham Lincoln's performance as President.[8] His first priority was military victory. This required that he master entirely new skills as a strategist and diplomat. He oversaw supplies, finances, manpower, the selection of generals, and the course of overall strategy. Working closely with state and local politicians, he rallied public opinion and (at Gettysburg) articulated a national mission that has defined America ever since. Lincoln's charm and willingness to cooperate with political and personal enemies made Washington work much more smoothly than Richmond, the Confederate capital, and his wit smoothed many rough edges. Lincoln's cabinet proved much stronger and more efficient than Davis's, as Lincoln channeled personal rivalries into a competition for excellence rather than mutual destruction. With William Seward at State, Salmon P. Chase at the Treasury, and (from 1862) Edwin Stanton at the War Department, Lincoln had a powerful cabinet of determined men. Except for monitoring major appointments and decisions, Lincoln gave them free rein to end the Confederate rebellion.[9]
The Republican Congress passed many major laws that reshaped the nation's economy, financial system, tax system, land system, and higher education system. These included: the Morrill tariff, the Homestead Act, the Pacific Railroad Act, and the National Banking Act.[10] Lincoln paid relatively little attention to this legislation as he focused on war issues but he worked smoothly with powerful Congressional leaders such as Thaddeus Stevens (on taxation and spending), Charles Sumner (on foreign affairs), Lyman Trumbull (on legal issues), Justin Smith Morrill (on land grants and tariffs) and William Pitt Fessenden (on finances).[11]
Military and reconstruction issues were another matter. Lincoln, as the leader of the moderate and conservative factions of the Republican Party, often crossed swords with the Radical Republicans, led by Stevens and Sumner. Author Bruce Tap shows that Congress challenged Lincoln's role as commander-in-chief through the Joint Committee on the Conduct of the War. It was a joint committee of both houses that was dominated by the Radical Republicans, who took a hard line against the Confederacy. During the 37th and 38th Congresses, the committee investigated every aspect of Union military operations, with special attention to finding commanders culpable for military defeats. It assumed an inevitable Union victory. Failure was perceived to indicate evil motivations or personal failures. The committee distrusted graduates of the US Military Academy at West Point, since many of the academy's alumni were leaders of the enemy army. Members of the committee much preferred political generals with a satisfactory political record. Some of the committee suggested that West Pointers who engaged in strategic maneuver were cowardly or even disloyal. It ended up endorsing incompetent but politically correct generals.[12]
The opposition came from Copperhead Democrats, who were strongest in the Midwest and wanted to allow Confederate secession. In the East, opposition to the war was strongest among Irish Catholics, but it also included business interests connected to the South, typified by August Belmont. The Democratic Party was deeply split. In 1861 most Democrats supported the war, but the party increasingly divided between the moderates who supported the war effort and the peace element, including Copperheads, who did not. The Democrats scored major gains in the 1862 elections and elected the moderate Horatio Seymour as governor of New York; they gained 28 seats in the House of Representatives, but Republicans retained control of both the House and the Senate.
The 1862 election for the Indiana legislature was especially hard-fought. Though the Democrats gained control of the legislature, they were unable to impede the war effort. Republican Governor Oliver P. Morton was able to maintain control of the state's contribution to the war effort despite the Democratic majority.[13] Washington was especially helpful in 1864 in arranging furloughs to allow Hoosier soldiers to return home so they could vote in elections.[14] Across the North in 1864, the great majority of soldiers voted Republican. Men who had been Democrats before the war often abstained or voted Republican.[15]
As the federal draft laws tightened, there was serious unrest among Copperhead strongholds, such as the Irish in the Pennsylvania coal mining districts. The government needed the coal more than the draftees, so it ignored the largely non-violent draft dodging there.[16][17] The violent New York City draft riots of 1863 were suppressed by the U.S. Army firing grape shot down cobblestone city streets.[18][19]
The Democrats nominated George McClellan, a War Democrat, for the 1864 presidential election but gave him an anti-war platform. In Congress the opposition to the war was nearly powerless, as was the case in most states. In Indiana and Illinois pro-war governors circumvented anti-war legislatures elected in 1862. For 30 years after the war the Democrats carried the burden of having opposed the martyred Lincoln, who was viewed by many as the salvation of the Union and the destroyer of slavery.[20]
The Copperheads were a large faction of northern Democrats who opposed the war, demanding an immediate peace settlement. They said they wanted to restore "the Union as it was" (that is, with the South and with slavery) but they realized that the Confederacy would never voluntarily rejoin the U.S.[21] The most prominent Copperhead was Ohio's Clement L. Vallandigham, a Congressman and leader of the Democratic Party in Ohio. He was defeated in an intense election for governor in 1863. Republican prosecutors in the Midwest accused some Copperhead activists of treason in a series of trials in 1864.[22]
Copperheadism was a grassroots movement, strongest in the area just north of the Ohio River, as well as in some urban ethnic wards. Some historians have argued that it represented a traditionalistic element alarmed at the rapid modernization of society sponsored by the Republican Party. It looked back to Jacksonian Democracy for inspiration, with ideals that promoted an agrarian rather than industrialized concept of society. Weber (2006) argues that the Copperheads damaged the Union war effort by fighting the draft, encouraging desertion and forming conspiracies.[23] However, other historians say the Copperheads were a legitimate opposition force unfairly treated by the government, adding that the draft was in disrepute and that the Republicans greatly exaggerated the conspiracies for partisan reasons.[24] Copperheadism was a major issue in the 1864 presidential election; its strength waxed when Union armies were doing poorly and waned when they won great victories. After the fall of Atlanta in September 1864, military success seemed assured and Copperheadism collapsed.[21]
Enthusiastic young men clamored to join the Union army in 1861. They came with family support for reasons of patriotism and excitement. Washington decided to keep the small regular army intact; it only had 16,000 men and was needed to guard the frontier. Its officers could, however, join the temporary new volunteer army that was formed, with expectations that their experience would lead to rapid promotions. The problem with volunteering, however, was its serious lack of planning, leadership, and organization at the highest levels. Washington called on the states for troops, and every northern governor set about raising and equipping regiments, and sent the bills to the War Department. The men could elect the junior officers, while the governor appointed the senior officers, and Lincoln appointed the generals. Typically, politicians used their local organizations to raise troops and were in line (if healthy enough) to become colonel. The problem was that the War Department, under the disorganized leadership of Simon Cameron, also authorized local and private groups to raise regiments. The result was widespread confusion and delay.
Pennsylvania, for example, had acute problems. When Washington called for 10 more regiments, enough men volunteered to form 30. However, they were scattered among 70 different new units, none of them a complete regiment. Not until Washington approved gubernatorial control of all new units was the problem resolved. Allan Nevins is particularly scathing of this in his analysis: "A President more exact, systematic and vigilant than Lincoln, a Secretary more alert and clearheaded than Cameron, would have prevented these difficulties."[25]
By the end of 1861, 700,000 soldiers were drilling in Union camps. The first wave in spring was called up for only 90 days, then the soldiers went home or reenlisted. Later waves enlisted for three years.
The new recruits spent their time drilling in company and regiment formations. The combat in the first year, though strategically important, involved relatively small forces and few casualties. Sickness was a much more serious cause of hospitalization or death.
In the first few months, men wore low quality uniforms made of "shoddy" material, but by fall, sturdy wool uniforms in blue were standard. The nation's factories were converted to produce the rifles, cannons, wagons, tents, telegraph sets, and the myriad of other special items the army needed.
While business had been slow or depressed in spring 1861, because of war fears and Southern boycotts, by fall business was hiring again, offering young men jobs that were an alternative way to help win the war. Nonpartisanship was the rule in the first year, but by summer 1862, many Democrats had stopped supporting the war effort, and volunteering fell off sharply in their strongholds.
The calls for more and more soldiers continued, so states and localities responded by offering cash bonuses. By 1863, a draft law was in effect, but few men actually were drafted and served, since the law was designed to get them to volunteer or hire a substitute. Others hid away or left the country. With the Emancipation Proclamation taking effect in January 1863, localities could meet their draft quota by sponsoring regiments of ex-slaves organized in the South.[26]
Michigan was especially eager to send thousands of volunteers.[27] A study of the cities of Grand Rapids and Niles shows an overwhelming surge of nationalism in 1861, whipping up enthusiasm for the war in all segments of society, and all political, religious, ethnic, and occupational groups. However, by 1862 the casualties were mounting, and the war was increasingly focused on freeing the slaves in addition to preserving the Union. Copperhead Democrats called the war a failure, and it became an increasingly partisan Republican effort.[28] Michigan voters remained evenly split between the parties in the presidential election of 1864.[29]
Perman (2010) says historians are of two minds on why millions of men seemed so eager to fight, suffer, and die over four years:
Some historians emphasize that Civil War soldiers were driven by political ideology, holding firm beliefs about the importance of liberty, Union, or state rights, or about the need to protect or to destroy slavery. Others point to less overtly political reasons to fight, such as the defense of one's home and family, or the honor and brotherhood to be preserved when fighting alongside other men. Most historians agree that, no matter what he thought about when he went into the war, the experience of combat affected him profoundly and sometimes affected his reasons for continuing to fight.[30]
On the whole, the national, state, and local governments handled the avalanche of paperwork effectively. Skills developed in insurance and financial companies formed the basis of systematic forms, copies, summaries, and filing systems used to make sense of masses of human data. The leader in this effort, John Shaw Billings, later developed a system of mechanically storing, sorting, and counting numerical information using punch cards. Nevertheless, old-fashioned methodology had to be recognized and overcome. An illustrative case study came in New Hampshire, where the critical post of state adjutant general was held in 1861–64 by elderly politician Anthony C. Colby (1792–1873) and his son Daniel E. Colby (1816–1891). They were patriotic, but were overwhelmed with the complexity of their duties. The state lost track of men who enlisted after 1861; it had no personnel records or information on volunteers, substitutes, or draftees, and there was no inventory of weaponry and supplies. Nathaniel Head (1828–1883) took over in 1864, obtained an adequate budget and office staff, and reconstructed the missing paperwork. As a result, widows, orphans, and disabled veterans received the postwar payments they had earned.[31]
More soldiers died of disease than from battle injuries, and even larger numbers were temporarily incapacitated by wounds, disease, and accidents. The Union responded by building army hospitals in every state.
The hygiene of the camps was poor, especially at the beginning of the war when men who had seldom been far from home were brought together for training with thousands of strangers. First came epidemics of the childhood diseases of chicken pox, mumps, whooping cough, and especially, measles. Operations in the South meant a dangerous and new disease environment, bringing diarrhea, dysentery, typhoid fever, and malaria. There were no antibiotics, so the surgeons prescribed coffee, whiskey, and quinine. Harsh weather, bad water, inadequate shelter in winter quarters, poor policing of camps, and dirty camp hospitals took their toll.[32] This was a common scenario in wars from time immemorial, and conditions faced by the Confederate army were even worse. What was different in the Union was the emergence of skilled, well-funded medical organizers who took proactive action, especially in the much enlarged United States Army Medical Department,[33] and the United States Sanitary Commission, a new private agency.[34] Numerous other new agencies also targeted the medical and morale needs of soldiers, including the United States Christian Commission, as well as smaller private agencies, such as the Women's Central Association of Relief for Sick and Wounded in the Army (WCAR), founded in 1861 by Henry Whitney Bellows, a Unitarian minister, and the social reformer Dorothea Dix. Systematic funding appeals raised public consciousness as well as millions of dollars. Many thousands of volunteers worked in the hospitals and rest homes, most famously poet Walt Whitman. Frederick Law Olmsted, a famous landscape architect, was the highly efficient executive director of the Sanitary Commission.[35]
States could use their own tax money to support their troops, as Ohio did. Under the energetic leadership of Governor David Tod, a War Democrat who won office on a coalition "Union Party" ticket with Republicans, Ohio acted vigorously. Following the unexpected carnage at the battle of Shiloh in April 1862, Ohio sent three steamboats to the scene as floating hospitals equipped with doctors, nurses, and medical supplies. The state fleet expanded to 11 hospital ships, and the state set up 12 local offices in main transportation nodes, to help Ohio soldiers moving back and forth.[36]
The Christian Commission comprised 6,000 volunteers who aided chaplains in many ways.[37] For example, its agents distributed Bibles, delivered sermons, helped with sending letters home, taught men to read and write, and set up camp libraries.[38]
The Army learned many lessons and modernized its procedures,[39] and medical science, especially surgery, made many advances.[40] In the long run, the wartime experiences of the numerous Union commissions modernized public welfare, and set the stage for large-scale community philanthropy in America based on fund-raising campaigns and private donations.[41]
Additionally, women gained new public roles. For example, Mary Livermore (1820–1905), the manager of the Chicago branch of the US Sanitary Commission, used her newfound organizational skills to mobilize support for women's suffrage after the war. She argued that women needed more education and job opportunities to help them fulfill their role of serving others.[42]
The Sanitary Commission collected enormous amounts of statistical data, and opened up the problems of storing information for fast access and mechanically searching for data patterns.[43] The pioneer was John Shaw Billings (1838–1913). A senior surgeon in the war, Billings built two of the world's most important libraries, the Library of the Surgeon General's Office (now the National Library of Medicine) and the New York Public Library; he also figured out how to mechanically analyze data by turning it into numbers punched onto cards, as developed by his student Herman Hollerith. Hollerith's company became International Business Machines (IBM) in 1911.[44]
Both sides operated prison camps; they handled about 400,000 captives, but many other prisoners were quickly released and never sent to camps. The Record and Pension Office in 1901 counted 211,000 Northerners who were captured. In 1861–63 most were immediately paroled; after the parole exchange system broke down in 1863, about 195,000 went to Confederate prison camps. Some tried to escape but few succeeded. By contrast 464,000 Confederates were captured (many in the final days) and 215,000 imprisoned. Over 30,000 Union and nearly 26,000 Confederate prisoners died in captivity. Just over 12% of the captives in Northern prisons died, compared to 15.5% for Southern prisons.[45][46]
Discontent with the 1863 draft law led to riots in several cities and in rural areas as well. By far the most important were the New York City draft riots of July 13 to July 16, 1863.[47] Irish Catholic and other workers fought police, militia and regular army units until the Army used artillery to sweep the streets. Initially focused on the draft, the protests quickly expanded into violent attacks on blacks in New York City, with many killed on the streets.[48]
Small-scale riots broke out in ethnic German and Irish districts, and in areas along the Ohio River with many Copperheads. Holmes County, Ohio was an isolated parochial area dominated by Pennsylvania Dutch and some recent German immigrants. It was a Democratic stronghold and few men dared speak out in favor of conscription. Local politicians denounced Lincoln and Congress as despotic, seeing the draft law as a violation of their local autonomy. In June 1863, small-scale disturbances broke out; they ended when the Army sent in armed units.[49][50][51]
The Union economy grew and prospered during the war while fielding a very large army and navy.[52] The Republicans in Washington had a Whiggish vision of an industrial nation, with great cities, efficient factories, productive farms, and national banks, all knit together by a modern railroad system, to be mobilized by the United States Military Railroad. The South had resisted policies such as tariffs to promote industry and homestead laws to promote farming because slavery would not benefit from them. With the South gone and Northern Democrats weak, the Republicans enacted their legislation. At the same time they passed new taxes to pay for part of the war and issued large amounts of bonds to pay for most of the rest. Economic historians attribute the remainder of the cost of the war to inflation. Congress wrote an elaborate program of economic modernization that had the dual purpose of winning the war and permanently transforming the economy.[53]
In 1860 the Treasury was a small operation that funded the small-scale operations of the government through land sales and customs based on a low tariff.[54] Peacetime revenues were trivial in comparison with the cost of a full-scale war but the Treasury Department under Secretary Salmon P. Chase showed unusual ingenuity in financing the war without crippling the economy.[55] Many new taxes were imposed and always with a patriotic theme comparing the financial sacrifice to the sacrifices of life and limb. The government paid for supplies in real money, which encouraged people to sell to the government regardless of their politics. By contrast the Confederacy gave paper promissory notes when it seized property, so that even loyal Confederates would hide their horses and mules rather than sell them for dubious paper. Overall the Northern financial system was highly successful in raising money and turning patriotism into profit, while the Confederate system impoverished its patriots.[56]
The United States needed $3.1 billion to pay for the immense armies and fleets raised to fight the Civil War, over $400 million in 1862 alone.[57] Apart from tariffs, the largest revenue by far came from new excise taxes, a sort of value-added tax, imposed on every sort of manufactured item. Second came much higher tariffs, through several Morrill tariff laws. Third came the nation's first income tax; only the wealthy paid it, and it was repealed at war's end.
Apart from taxes, the second major source of income was government bonds. For the first time bonds in small denominations were sold directly to the people, with publicity and patriotism as key factors, as designed by banker Jay Cooke. State banks lost their power to issue banknotes. Only national banks could do that and Chase made it easy to become a national bank; it involved buying and holding federal bonds and financiers rushed to open these banks. Chase numbered them, so that the first one in each city was the "First National Bank".[58] Third, the government printed paper money called "greenbacks". They led to endless controversy because they caused inflation.[59]
The North's most important war measure was perhaps the creation of a system of national banks that provided a sound currency for the industrial expansion. Even more important, the hundreds of new banks that were allowed to open were required to purchase government bonds. Thereby the nation monetized the potential wealth represented by farms, urban buildings, factories, and businesses, and immediately turned that money over to the Treasury for war needs.[60]
Secretary Chase, though a long-time free-trader, worked with Morrill to pass a second tariff bill in summer 1861, raising rates another 10 points in order to generate more revenues.[61] These subsequent bills were primarily revenue driven to meet the war's needs, though they enjoyed the support of protectionists such as Carey, who again assisted Morrill in the bill's drafting. The Morrill Tariff of 1861 was designed to raise revenue. The tariff act of 1862 served not only to raise revenue but also to encourage the establishment of factories free from British competition by taxing British imports. Furthermore, it protected American factory workers from low-paid European workers, and as a major bonus attracted tens of thousands of those Europeans to immigrate to America for high-wage factory and craftsman jobs.[62]
Customs revenue from tariffs totaled $345 million from 1861 through 1865, or 43% of all federal tax revenue.
The U.S. government owned vast amounts of good land (mostly from the Louisiana Purchase of 1803 and the Oregon Treaty with Britain in 1846). The challenge was to make the land useful to people and to provide the economic basis for the wealth that would pay off the war debt. Land grants went to railroad construction companies to open up the western plains and link up to California. Together with the free lands provided farmers by the Homestead Law the low-cost farm lands provided by the land grants sped up the expansion of commercial agriculture in the West.
The 1862 Homestead Act opened up the public domain lands for free. Land grants to the railroads meant they could sell tracts for family farms (80 to 200 acres) at low prices with extended credit. In addition the government sponsored fresh information, scientific methods and the latest techniques through the newly established Department of Agriculture and the Morrill Land Grant College Act.[63][64]
Agriculture was the largest single industry and it prospered during the war.[65][66] Prices were high, pulled up by a strong demand from the army and from Britain (which depended on American wheat for a fourth of its food imports). The war acted as a catalyst that encouraged the rapid adoption of horse-drawn machinery and other implements. The rapid spread of recent inventions such as the reaper and mower made the work force efficient, even as hundreds of thousands of farmers were in the army. Many wives took their place and often consulted by mail on what to do; increasingly they relied on community and extended kin for advice and help.[67]
The Union used hundreds of thousands of animals. The Army had plenty of cash to purchase them from farmers and breeders but especially in the early months the quality was mixed.[68] Horses were needed for cavalry and artillery.[69] Mules pulled the wagons. The supply held up, despite an unprecedented epidemic of glanders, a fatal disease that baffled veterinarians.[70] In the South, the Union army shot all the horses it did not need to keep them out of Confederate hands.
The Treasury started buying cotton during the war, for shipment to Europe and northern mills. The sellers were Southern planters who needed the cash, regardless of their patriotism. The Northern buyers could make heavy profits, which annoyed soldiers like Ulysses Grant. He blamed Jewish traders and expelled them from his lines in 1862 but Lincoln quickly overruled this show of anti-semitism. Critics said the cotton trade helped the South, prolonged the war and fostered corruption. Lincoln decided to continue the trade for fear that Britain might intervene if its textile manufacturers were denied raw material. Another goal was to foster latent Unionism in Southern border states. Northern textile manufacturers needed cotton to remain in business and to make uniforms, while cotton exports to Europe provided an important source of gold to finance the war.[71]
The Protestant religion was quite strong in the North in the 1860s. The United States Christian Commission sent agents into the Army camps to provide psychological support as well as books, newspapers, food and clothing. Through prayer, sermons and welfare operations, the agents ministered to soldiers' spiritual as well as temporal needs as they sought to bring the men to a Christian way of life.[72] Most churches made an effort to support their soldiers in the field and especially their families back home. Much of the political rhetoric of the era had a distinct religious tone.[73]
The Protestant clergy in America took a variety of positions. In general, the pietistic denominations such as the Methodists, Northern Baptists and Congregationalists strongly supported the war effort. Catholics, Episcopalians, Lutherans and conservative Presbyterians generally avoided any discussion of the war, so it would not bitterly divide their membership. The Quakers, while giving strong support to the abolitionist movement on a personal level, refused to take a denominational position. Some clergymen who supported the Confederacy were denounced as Copperheads, especially in the border regions.[74][75]
Many Northerners had only recently become religious (following the Second Great Awakening) and religion was a powerful force in their lives. No denomination was more active in supporting the Union than the Methodist Episcopal Church. Carwardine[76] argues that for many Methodists, the victory of Lincoln in 1860 heralded the arrival of the kingdom of God in America. They were moved into action by a vision of freedom for slaves, freedom from the persecutions of godly abolitionists, release from the Slave Power's evil grip on the American government and the promise of a new direction for the Union.[76] Methodists formed a major element of the popular support for the Radical Republicans with their hard line toward the white South. Dissident Methodists left the church.[77] During Reconstruction the Methodists took the lead in helping form Methodist churches for Freedmen and moving into Southern cities even to the point of taking control, with Army help, of buildings that had belonged to the southern branch of the church.[78][79]
The Methodist family magazine Ladies' Repository promoted Christian family activism. Its articles provided moral uplift to women and children. It portrayed the War as a great moral crusade against a decadent Southern civilization corrupted by slavery. It recommended activities that family members could perform in order to aid the Union cause.[80]
Historian Stephen M. Frank reports that what it meant to be a father varied with status and age. He says most men demonstrated dual commitments as providers and nurturers and believed that husband and wife had mutual obligations toward their children. The war privileged masculinity, dramatizing and exaggerating father-son bonds. Especially at five critical stages in the soldier's career (enlistment, blooding, mustering out, wounding and death), letters from absent fathers articulated a distinctive set of 19th-century ideals of manliness.[81]
There were numerous children's magazines such as Merry's Museum, The Student and Schoolmate, Our Young Folks, The Little Pilgrim, Forrester's Playmate, and The Little Corporal. They showed a Protestant religious tone and "promoted the principles of hard work, obedience, generosity, humility, and piety; trumpeted the benefits of family cohesion; and furnished mild adventure stories, innocent entertainment, and instruction".[82] Their pages featured factual information and anecdotes about the war along with related quizzes, games, poems, songs, short oratorical pieces for "declamation", short stories and very short plays that children could stage. They promoted patriotism and the Union war aims, fostered kindly attitudes toward freed slaves, blackened the Confederate cause, encouraged readers to raise money for war-related humanitarian funds, and dealt with the death of family members.[83] By 1866, the Milton Bradley Company was selling "The Myriopticon: A Historical Panorama of the Rebellion" that allowed children to stage a neighborhood show that would explain the war. It comprised colorful drawings that were turned on wheels and included pre-printed tickets, poster advertisements, and narration that could be read aloud at the show.[84]
Caring for war orphans was an important function for local organizations as well as state and local government.[85] A typical state was Iowa, where the private "Iowa Soldiers Orphans Home Association" operated with funding from the legislature and public donations. It set up orphanages in Davenport, Glenwood and Cedar Falls. The state government funded pensions for the widows and children of soldiers.[86] Orphan schools like the Pennsylvania Soldiers' Orphan School also formed part of the broader public welfare experiment that began in the aftermath of the Civil War. These orphan schools were created to provide housing, care, and education for orphans of Civil War soldiers. They became a matter of state pride, with orphans paraded at rallies to display the power of a patriotic schooling.[87]
All the northern states had free public school systems before the war but not the border states. West Virginia set up its system in 1863. Over bitter opposition it established an almost-equal education for black children, most of whom were ex-slaves.[88] Thousands of black refugees poured into St. Louis, where the Freedmen's Relief Society, the Ladies Union Aid Society, the Western Sanitary Commission, and the American Missionary Association (AMA) set up schools for their children.[89]
People loyal to the U.S. federal government and opposed to secession living in the border states (where slavery was legal in 1861) were termed Unionists. Confederates sometimes styled them "Homemade Yankees". However, Southern Unionists were not necessarily northern sympathizers and many of them, although opposing secession, supported the Confederacy once it was a fact. East Tennessee never supported the Confederacy, and Unionists there became powerful state leaders, including governors Andrew Johnson and William G. Brownlow. Likewise, large pockets of eastern Kentucky were Unionist and helped keep the state from seceding.[90] Western Virginia, with few slaves and some industry, was so strongly Unionist that it broke away and formed the new state of West Virginia.[91]
Still, nearly 120,000 Unionists from the South served in the Union Army during the Civil War and Unionist regiments were raised from every Confederate state except South Carolina. Among such units was the 1st Alabama Cavalry Regiment, which served as William Sherman's personal escort on his march to the sea. Southern Unionists were extensively used as anti-guerrilla paramilitary forces.[92] During Reconstruction many of these Unionists became "Scalawags", a derogatory term for Southern supporters of the Republican Party.[93]
Besides organized military conflict, the border states were beset by guerrilla warfare. In these bitterly divided states, neighbors frequently used the excuse of war to settle personal grudges and took up arms against one another.
Missouri was the scene of over 1,000 engagements between Union and Confederate forces, and uncounted numbers of guerrilla attacks and raids by informal pro-Confederate bands.[94] Western Missouri was the scene of brutal guerrilla warfare during the Civil War. Roving insurgent bands such as Quantrill's Raiders and the men of Bloody Bill Anderson terrorized the countryside, striking both military installations and civilian settlements. Because of the widespread attacks and the protection offered by Confederate sympathizers, Federal leaders issued General Order No. 11 in 1863, and evacuated areas of Jackson, Cass, and Bates counties. They forced the residents out to reduce support for the guerrillas. Union cavalry could sweep through and track down Confederate guerrillas, who no longer had places to hide and people and infrastructure to support them. On short notice, the army forced almost 20,000 people, mostly women, children and the elderly, to leave their homes. Many never returned and the affected counties were economically devastated for years after the end of the war.[95] Families passed along stories of their bitter experiences down through several generations; Harry Truman's grandparents were caught up in the raids, and he would tell of how they were kept in concentration camps.[96]
Some marauding units became organized criminal gangs after the war. In 1882, the bank robber and ex-Confederate guerrilla Jesse James was killed in Saint Joseph. Vigilante groups appeared in remote areas where law enforcement was weak, to deal with the lawlessness left over from the guerrilla warfare phase. For example, the Bald Knobbers were several law-and-order vigilante groups in the Ozarks. In some cases, they too turned to illegal gang activity.[97]
In response to the growing problem of locally organized guerrilla campaigns throughout 1863 and 1864, in June 1864, Maj. Gen. Stephen G. Burbridge was given command over the state of Kentucky. This began an extended period of military control that would last through early 1865, beginning with martial law authorized by President Abraham Lincoln. To pacify Kentucky, Burbridge rigorously suppressed disloyalty and used economic pressure as coercion. His guerrilla policy, which included public execution of four guerrillas for the death of each unarmed Union citizen, caused the most controversy. After a falling out with Governor Thomas E. Bramlette, Burbridge was dismissed in February 1865. Confederates remembered him as the "Butcher of Kentucky".[98]
The Union comprised the free states together with the border states where slavery was still legal in 1861. Kentucky and Missouri each had two state governments, one Unionist and one Confederate, both claiming to be the legitimate government of their state; their Confederate governments never had significant control of either state.
West Virginia separated from Virginia and became part of the Union during the war, on June 20, 1863. Nevada also joined the Union during the war, becoming a state on October 31, 1864.
The Union also controlled a number of territories in April 1861.[99]
The Indian Territory saw its own civil war, as the major tribes held slaves and endorsed the Confederacy.[100]
What is the origin of the name Edith?
derived from the Old English words ēad, meaning 'riches or blessed', and gȳð, meaning 'war'🚨Edith is a female given name, derived from the Old English words ēad, meaning 'riches or blessed', and gȳð, meaning 'war',[1] and is in common usage in this form in English, German, many Scandinavian languages and Dutch. Its French form, also a common name in French, is Édith. Contractions and variations of this name include Ditte, Edie and Edythe.
It was a common first name prior to the 16th century, when it fell out of favour. It became popular again at the beginning of the 19th century, and in 2016 it was ranked at 488th most popular female name in the United States, according to the Social Security online database.[2] It became far less common as a name for children by the late 20th century.
The name Edith has four name days: May 14 in Estonia, October 31 in Sweden, July 5 in Latvia, and September 16 in France.
Who is the current ruler of the Netherlands?
Willem-Alexander
Willem-Alexander (born Willem-Alexander Claus George Ferdinand, 27 April 1967) is the King of the Netherlands, having ascended the throne following his mother's abdication in 2013.
Willem-Alexander was born in Utrecht and is the oldest child of Princess Beatrix and diplomat Claus van Amsberg. He became Prince of Orange as heir apparent upon his mother's accession as Queen on 30 April 1980, and succeeded her following her abdication on 30 April 2013. He went to public primary and secondary schools, served in the Royal Netherlands Navy, and studied history at Leiden University. He married Máxima Zorreguieta Cerruti in 2002 and they have three daughters: Catharina-Amalia, Princess of Orange (born 2003), Princess Alexia (born 2005), and Princess Ariane (born 2007).
Willem-Alexander is interested in sports and international water management issues. Until his accession to the throne, he was a member of the International Olympic Committee (1998–2013),[1] chairman of the Advisory Committee on Water to the Dutch Minister of Infrastructure and the Environment (2004–2013),[2] and chairman of the Secretary-General of the United Nations' Advisory Board on Water and Sanitation (2006–2013).[3][4] At the age of 51, he is currently the second youngest monarch in Europe after Felipe VI of Spain.
Willem-Alexander Claus George Ferdinand was born on 27 April 1967 in the Utrecht University Hospital, now the University Medical Center Utrecht, in Utrecht, Netherlands. He is the first child of Princess Beatrix and Prince Claus,[5] and the first grandchild of Queen Juliana and Prince Bernhard. He was the first male Dutch royal baby since the birth of Prince Alexander in 1851, and the first immediate male heir since Alexander's death in 1884.
From birth, Willem-Alexander has held the titles Prince of the Netherlands (Dutch: Prins der Nederlanden), Prince of Orange-Nassau (Dutch: Prins van Oranje-Nassau), and Jonkheer of Amsberg (Dutch: Jonkheer van Amsberg).[5] He was baptised as a member of the Dutch Reformed Church[6] on 2 September 1967[7] in Saint Jacob's Church in The Hague.[8] His godparents are Prince Bernhard of Lippe-Biesterfeld, Gösta Freiin von dem Bussche-Haddenhausen, Ferdinand von Bismarck, former Prime Minister Jelle Zijlstra, Jonkvrouw Renée Röell, and Queen Margrethe II of Denmark.[7]
He had two younger brothers: Prince Friso (1968–2013) and Prince Constantijn (born in 1969). He lived with his family at the castle Drakensteyn in the hamlet Lage Vuursche near Baarn from his birth until 1981, when they moved to the larger palace Huis ten Bosch in The Hague. His mother Beatrix became Queen of the Netherlands in 1980, after his grandmother Juliana abdicated. He then received the title of Prince of Orange as heir apparent to the throne of the Kingdom of the Netherlands.[5]
Willem-Alexander attended Nieuwe Baarnse Elementary School in Baarn from 1973 to 1979. He went to three different secondary schools: the Baarns Lyceum in Baarn from 1979 to 1981, the Eerste Vrijzinnig Christelijk Lyceum in The Hague from 1981 to 1983, and the United World College of the Atlantic in Wales, UK, from 1983 to 1985, from which he received his International Baccalaureate.[5][9]
After his military service from 1985 to 1987, Willem-Alexander studied history at Leiden University from 1987 onwards and received his MA degree (doctorandus) in 1993.[10][11] His final thesis was on the Dutch response to France's decision under President Charles de Gaulle to leave NATO's integrated command structure.[5]
Willem-Alexander speaks English, Spanish, French and German in addition to his native Dutch.[12]
Between secondary school and his university education, Willem-Alexander performed military service in the Royal Netherlands Navy from August 1985 until January 1987. He received his training at the Royal Netherlands Naval College and the frigates HNLMS Tromp and HNLMS Abraham Crijnssen, where he was an ensign. In 1988 he received additional training at the ship HNLMS Van Kinsbergen and became a lieutenant (junior grade) (wachtofficier).[13]
As a reservist for the Royal Netherlands Navy, Willem-Alexander was promoted to Lieutenant Commander in 1995, Commander in 1997, Captain at Sea in 2001, and Commodore in 2005. As a reservist for the Royal Netherlands Army, he was made a Major (Grenadiers' and Rifles Guard Regiment) in 1995, and was promoted to Lieutenant Colonel in 1997, Colonel in 2001, and Brigadier General in 2005. As a reservist for the Royal Netherlands Air Force, he was made Squadron Leader in 1995 and promoted to Air Commodore in 2005. As a reservist for the Royal Marechaussee, he was made Brigadier General in 2005.[9]
Before his investiture as king in 2013, Willem-Alexander was honorably discharged from the armed forces. The government declared that the head of state cannot be a serving member of the armed forces, since the government itself holds supreme command over the armed forces. As king, Willem-Alexander may choose to wear a military uniform with royal insignia, but not with his former rank insignia.[14]
Since 1985, when he became 18 years old, Willem-Alexander has been a member of the Council of State of the Netherlands. This is the highest council of the Dutch government and is chaired by the head of state (then Queen Beatrix).[15]
King Willem-Alexander is interested in water management and sports issues. He was an honorary member of the World Commission on Water for the 21st century and patron of the Global Water Partnership, a body established by the World Bank, the UN, and the Swedish Ministry of Development. He was appointed as the Chairperson of the United Nations Secretary General's Advisory Board on Water and Sanitation on 12 December 2006.[16]
On 10 October 2010, Willem-Alexander and Mxima went to the Netherlands Antilles' capital, Willemstad, to attend and represent his mother, the Queen, at the Antillean Dissolution ceremony.
He was a patron of the Dutch Olympic Games Committee until 1998 when he was made a member of the International Olympic Committee (IOC). After becoming King, he relinquished his membership and received the Gold Olympic Order at the 125th IOC Session.[17] To celebrate the 100th anniversary of the 1928 Summer Olympics held in Amsterdam, he has expressed support to bid for the 2028 Summer Olympics.[18]
He was a member of the supervisory board of De Nederlandsche Bank (the Dutch central bank), a member of the Advisory Council of ECP (the information society forum for government, business and civil society), patron of Veterans' Day and held several other patronages and posts.[19]
On 28 January 2013, Beatrix announced her intention of abdicating. On the morning of 30 April, Beatrix signed the instrument of abdication at the Moseszaal (Moses Hall) at the Royal Palace of Amsterdam. Later that afternoon, Willem-Alexander was inaugurated as king in front of the joint assembly of the States General in a ceremony held at the Nieuwe Kerk.
As king, Willem-Alexander has weekly meetings with the prime minister and speaks regularly with ministers and state secretaries. He also signs all new Acts of Parliament and royal decrees. He represents the kingdom at home and abroad. At the State Opening of Parliament, he delivers the Speech from the Throne, which announces the plans of the government for the parliamentary year. The Constitution requires that the king appoint, dismiss and swear in all government ministers and state secretaries. As king, he is also the president of the Council of State, an advisory body that reviews proposed legislation. In modern practice, the monarch seldom chairs council meetings.[20]
At his accession at age 46, he was Europe's youngest monarch. On the inauguration of Spain's Felipe VI on 19 June 2014 he became, and remains, Europe's second-youngest monarch. He is also the first male monarch of the Netherlands since the death of his great-great-grandfather William III in 1890. Willem-Alexander was one of four new monarchs to take the throne in 2013 along with Pope Francis, the Emir Tamim bin Hamad of Qatar, and King Philippe of Belgium.
Willem-Alexander is an avid pilot and has said that if he had not been a royal, he would have liked to be an airline pilot so he could fly big planes such as the Boeing 747.[21] During the reign of his mother, he regularly flew the Dutch royal airplane on trips.[22]
In May 2017, Willem-Alexander revealed that he had served as a co-pilot on KLM flights for 21 years, flying KLM Cityhopper's Fokker 70s twice a month, even after his ascension to the throne. With KLM's phased retirement of the Fokker 70, he began training to fly Boeing 737s. Willem-Alexander was rarely recognized while in the KLM uniform and wearing the KLM cap, though a few passengers had recognized his voice, even though he never gave his name and only welcomed passengers on behalf of the captain and crew.[23][21]
Using the name "W. A. van Buren", one of the least-known titles of the House of Orange-Nassau, he participated in the 1986 Frisian Elfstedentocht, a 200-kilometre (120 mi) long-distance ice skating tour.[24] He ran the New York City Marathon under the same pseudonym in 1992.[25] Willem-Alexander completed both events.
On 2 February 2002, he married Máxima Zorreguieta Cerruti at the Nieuwe Kerk in Amsterdam. Máxima is an Argentine woman of Basque, Portuguese and Italian ancestry, who prior to their marriage worked as an investment banker in New York City. The marriage triggered significant controversy due to the role the bride's father, Jorge Zorreguieta, had in the Argentinian military dictatorship. The couple have three daughters.
In an attempt to strike a balance between privacy for the royal family and availability to the press, the Netherlands Government Information Service (RVD) instituted a media code on 21 June 2005 which essentially states that:[26]
During a ski vacation in Argentina, several photographs were taken of the prince and his family during the private part of their holiday, including one by Associated Press staff photographer Natacha Pisarenko, in spite of the media code, and after a photo opportunity had been provided earlier.[27] The Associated Press decided to publish some of the photos, which were subsequently republished by several Dutch media. Willem-Alexander and the RVD jointly filed suit against the Associated Press on 5 August 2009, and the trial started on 14 August at the district court in Amsterdam. On 28 August, the district court ruled in favour of the prince and RVD, citing that the couple has a right to privacy; that the pictures in question add nothing to any public debate; and that they are not of any particular value to society since they are not photographs of his family "at work". Associated Press was sentenced to stop further publication of the photographs, on pain of a €1,000 fine per violation with a €50,000 maximum.[28]
The royal family lived in Villa Eikenhorst on the De Horsten estate in Wassenaar. After the move of Princess Beatrix to the castle of Drakensteyn and a renovation, Willem-Alexander and his family moved to the palace of Huis ten Bosch in The Hague.[29]
Willem-Alexander has a villa near Kranidi, Greece. Former actor Sean Connery has his own house nearby.[30]
On 10 July 2008, the Prince of Orange and Princess Máxima announced that they had invested in a development project on the Mozambican peninsula of Machangulo.[31] The development project was aimed at building an ecologically responsible vacation resort, including a hotel and several luxury vacation houses for investors. The project was to invest heavily in the local economy of the peninsula (building schools and a local clinic) with an eye both towards responsible sustainability and maintaining a local staff.[32] After contacting Mozambican President Armando Guebuza to verify that the Mozambican government had no objections, the couple decided to invest in two villas.[33] In 2009, controversy erupted in parliament and the press about the project and the prince's involvement.[33] Politician Alexander Pechtold questioned the morality of building such a resort in a poor country like Mozambique. After public and parliamentary controversy the royal couple announced that they decided to sell the property in Machangulo once their house was completed.[34] In January 2012, it was confirmed that the villa had been sold.[35]
His style and title, as appearing in preambles, is: "Willem-Alexander, by the Grace of God, King of the Netherlands, Prince of Orange-Nassau, etc. etc. etc." The triple 'etc.' refers to the Dutch monarch's other titles.
Willem-Alexander is the first Dutch king since Willem III, who died in 1890. Willem-Alexander had earlier indicated that when he became king, he would take the name Willem IV,[36] but it was announced on 28 January 2013 that his regnal name would be Willem-Alexander.[37]
Through his father, a member of the House of Amsberg, he is descended from families of the lower German nobility, and through his mother, from several royal German/Dutch families such as the House of Lippe, Mecklenburg-Schwerin, the House of Orange-Nassau, Waldeck and Pyrmont, and the House of Hohenzollern. He is descended from the first King of the Netherlands, William I of the Netherlands, who was also a ruler in Luxembourg and several German states, and all subsequent Dutch monarchs. By his mother, Willem-Alexander is also descended from Paul I of Russia and thus from German princess Catherine the Great. Through his father, he is also descended from several Dutch/Flemish families who left the Low Countries during Spanish rule, such as the Berenbergs. His paternal great-great-grandfather Gabriel von Amsberg (1822–1895), a Major-General of Mecklenburg, was recognized as noble as late as 1891, the family having adopted the "von" in 1795.[52][53]
King Willem-Alexander is a descendant of King George II of Great Britain and, more relevantly for his succession rights, of George II's granddaughter Princess Augusta of Great Britain. Under the British Act of Settlement, King Willem-Alexander temporarily forfeited his (distant) succession rights to the throne of the United Kingdom by marrying a Roman Catholic. This right was restored in 2015 under the Succession to the Crown Act 2013.[54]
He is King of the Netherlands because he represents the most direct descendant of the brother of William the Silent, the leader of the Dutch Independence movement, acceptable to the citizens of the Netherlands.
What is the highest peak of the Himalayas?
Mount Everest🚨Overall, the Himalayan mountain system is the world's highest and is home to 10 of the 14 eight-thousanders, the world's highest peaks, as well as more than 50 peaks with elevations over 7,000 metres (23,000 ft). The Karakoram and Hindu Kush are regarded as separate ranges. The rugged terrain makes few routes through the mountains possible.
27°59′17″N 86°55′31″E (Mount Everest / Sagarmatha / Chomolungma, 8,848 m)
Some routes do pass through the Himalaya.
Who is the captain of the Indian football team?
Sunil Chhetri🚨The India national football team represents India in international football and is controlled by the All India Football Federation. Under the global jurisdiction of FIFA and governed in Asia by the AFC, the team is also part of the South Asian Football Federation. The team, which was once considered one of the best teams in Asia, had its golden era during the 1950s and early 1960s. During this period, under the coaching of Syed Abdul Rahim, India won gold during the 1951 and 1962 Asian Games, while finishing fourth during the 1956 Summer Olympics.
India have never participated in the FIFA World Cup, though the team did qualify for the World Cup in 1950 after all the other nations in their qualification group withdrew. However, India themselves withdrew prior to the tournament beginning. The team has also appeared three times in Asia's top football competition, the AFC Asian Cup. Their best result in the continental tournament occurred in 1964 when the team finished as runners-up. India also participate in the SAFF Championship, the top regional football competition in South Asia. They have won the tournament six times since the tournament began in 1993.
Despite India not reaching the same heights since their golden era, the team has seen a steady resurgence since the beginning of the twenty-first century. Besides the SAFF Championship triumphs, under the guidance of Bob Houghton, India won the restarted Nehru Cup in 2007 and 2009 while also managing to emerge victorious during the 2008 AFC Challenge Cup. The Challenge Cup victory allowed India to once again qualify for the Asian Cup for the first time in 27 years.
Football teams consisting entirely of Indian players started to tour Australia, Japan, Indonesia, and Thailand during the late 1930s.[7] After the success of several Indian football clubs abroad, the All India Football Federation (AIFF) was formed in 1937. The national team played their first match as an independent nation in 1948 in the first round of the 1948 Summer Olympics against France. Using mainly barefooted players, India were defeated 2–1 in London.[7]
In 1950, India managed to qualify for the 1950 FIFA World Cup, which was scheduled to take place in Brazil.[8] This was due to all their opponents during qualifying withdrawing from the pre-tournament qualifiers.[8] However, India themselves withdrew from the World Cup before the tournament was to begin. The All India Football Federation gave various reasons for the team's withdrawal, including travel costs, lack of practice time, and valuing the Olympics more than the World Cup.[8]
Despite the reason given out from the AIFF, many historians and pundits believe India withdrew from the World Cup due to FIFA imposing a rule banning players from playing barefoot.[9][10] However, according to the then captain of India, Sailen Manna, the story of the team not being allowed to play due to wanting to play barefoot was not true and was just an excuse to cover up the real reasons the AIFF decided not to travel to Brazil.[8] Since then, India has not come close to qualifying for another World Cup.[11]
Despite not participating in the World Cup in 1950, the years that followed, from 1951 to 1964, are usually considered to be the "golden era" of Indian football. India, coached by Hyderabad City Police head coach Syed Abdul Rahim, became one of the best teams in Asia.[12] In March 1951, Rahim led India to their first ever triumph during the 1951 Asian Games. Hosted in India, the team defeated Iran 1–0 in the gold medal match to gain their first trophy.[13] Sahu Mewalal scored the winning goal for India in that match.[13] The next year India went back to the Olympics but were once again defeated in the first round, this time by Yugoslavia and by a score of 10–1.[14] Upon returning to India, the AIFF made it mandatory for footballers to wear boots.[7] After taking the defeat in Finland, India participated in various minor tournaments, such as the Colombo Cup, which they won three times from 1953 to 1955.[15]
In 1954, India returned to the Asian Games as defending champions in Manila. Despite their achievement three years prior, India were unable to go past the group stage as the team finished second in Group C during the tournament, two points behind Indonesia.[16] Two years later, during the 1956 Summer Olympics, India went on to achieve what is still considered the team's greatest result. The team finished in fourth place during the Summer Olympics football tournament, losing the bronze-medal match to Bulgaria 3–0.[17] The tournament is also known for Neville D'Souza's hat-trick against Australia in the quarterfinals. D'Souza's hat-trick was the first hat-trick scored by an Asian in Olympic history.[17]
After their good performance during the Summer Olympics, India participated in the 1958 Asian Games in Tokyo. The team once again finished fourth, losing the bronze-medal match to Indonesia 4–1.[18] The next year the team traveled to Malaysia where they took part in the Merdeka Cup and finished as the tournament runners-up.[19]
India began the 1960s with the 1960 AFC Asian Cup qualifiers. Despite the qualifiers for the West Zone being held in Kochi, India finished last in their qualification group and thus failed to qualify for the tournament.[20] Despite the set-back, India went on to win the gold medal during the Asian Games for the second time in 1962. The team defeated South Korea 2–1 to win their second major championship.[21]
Two years later, following their Asian Games triumph, India participated in the 1964 AFC Asian Cup after all the other teams in their qualification group withdrew. Despite their automatic entry into the continental tournament, India managed to finish as the runners-up during the tournament, losing out to the hosts, Israel, by two points. This remains India's best performance in the AFC Asian Cup.[22]
India returned to the Asian Games in 1966. Despite their performance two years prior during the AFC Asian Cup, India could not go beyond the group stage as the team finished third, behind Japan and Iran.[23] Four years later, during the 1970 Asian Games, India came back and took third place during the tournament. The team defeated Japan 1–0 during the bronze-medal match.[24]
In 1974, India's performance in the Asian Games once again sharply declined as they finished the 1974 edition in last place in their group, losing all three matches, scoring two goals, and conceding 14 in the first round.[25] India then showed steady improvement during the 1978 tournament, finishing second in their group of three. The team were then knocked out in the next round, finishing last in their group with three defeats from three matches.[26] The 1982 tournament proved to be better for India as the side managed to qualify for the quarter-finals before losing to Saudi Arabia 1–0.[27]
In 1984, India managed to qualify for the AFC Asian Cup for the first time since their second-place finish in 1964. During the 1984 tournament, India finished in last place in their five-team group in the first round.[28] India's only non-defeat during the tournament came against Iran, a 0–0 draw.[28]
Despite India's decline from a major football power in Asia, the team still managed to assert its dominance as the top team in South Asia. India won the football competition of the South Asian Games in 1985 and again took the gold medal in 1987.[29] The team then began the 1990s by winning the inaugural SAFF Championship in 1993.[30] India ended the 20th century by winning the SAFF Championship again in 1997 and 1999.[30]
India's first competitive matches of the 21st century were the 2002 FIFA World Cup first round qualifiers. Despite a bright start, with a 1–0 win over the United Arab Emirates, a 1–1 draw with Yemen, and two victories over Brunei, including a 5–0 win in Bangalore, India finished a point short of qualification for the next round.[31] In 2003, India took part in the 2003 SAFF Championship. The team qualified for the semi-finals but fell to Bangladesh 2–1.[32]
Later in 2003, India participated in the Afro-Asian Games held in Hyderabad. Under the coaching of Stephen Constantine, India made it to the final of the tournament after defeating Zimbabwe, a team then ranked 85 places above India in the FIFA rankings, 5–3.[33] Despite that major victory, India were defeated 1–0 by Uzbekistan U21 in the gold-medal match.[34] Due to this achievement, Constantine was voted the Asian Football Confederation's Manager of the Month for October 2003. The tournament result also gave India more recognition around the country and around the world.[33]
Constantine was replaced by Syed Nayeemuddin in 2005, but the Indian head coach lasted only a little over a year as India suffered several heavy defeats during the 2007 AFC Asian Cup qualifiers.[35] During this time India were defeated 6–0 by Japan, 3–0 at home by both Saudi Arabia and Yemen, and 7–1 away in Jeddah.[36] Former Malmö and China coach Bob Houghton was brought in as head coach in May 2006.[37]
Under Houghton, India witnessed a marked improvement in their football standing. In August 2007, Houghton won the country the revived Nehru Cup after India defeated Syria 1–0 in the final.[38] Pappachen Pradeep scored the winning goal for India in that match. The next year, Houghton led India during the 2008 AFC Challenge Cup, which was hosted in Hyderabad and Delhi. During the tournament, India breezed through the group stage before defeating Myanmar in the semi-finals. In the final against Tajikistan, India, through a Sunil Chhetri hat-trick, won the match 4–1. The victory not only earned India the championship but also qualified the team for the 2011 AFC Asian Cup, the nation's first Asian Cup appearance in 27 years.[39] In order to prepare for the Asian Cup, Houghton had the team stay together as a squad for eight months from June 2010 until the start of the tournament, meaning the players would not play for their clubs.[40]
India were drawn into Group C for the Asian Cup with Australia, South Korea, and Bahrain.[41] Despite staying together as a team for eight months, India lost all three of their matches during the Asian Cup, including a 4–0 defeat to Australia.[42] Even so, India were praised by fans and pundits for their valiant efforts during the tournament.[42]
After participating in the 2011 AFC Asian Cup, India's quest to qualify for the 2015 edition of the tournament began in February 2011 with the AFC Challenge Cup qualifiers. Bob Houghton decided to change the makeup of the India squad, replacing many of the aging players from the Asian Cup with young players from the AIFF development side in the I-League, Indian Arrows.[43] Even with a young side, India qualified for the AFC Challenge Cup with ease.[44] Despite the good result with the young side, the AIFF decided to terminate Bob Houghton's contract.[45]
After having Dempo coach Armando Colaco as interim head coach, the AIFF signed Savio Medeira as head coach in October 2011.[46] Despite leading India to another SAFF Championship victory, Medeira led India to their worst performance in the AFC Challenge Cup in March 2012. The team lost all three of their group matches, failing to score a single goal during the tournament.[47] After the tournament, Medeira was replaced as head coach by the Dutchman Wim Koevermans.[48] Koevermans' first job as head coach was the 2012 Nehru Cup. India won their third successive Nehru Cup, defeating Cameroon's B side on penalties.[49]
In March 2013, India failed to qualify for the 2014 AFC Challenge Cup and thus also failed to qualify for the 2015 AFC Asian Cup.[50] The team also failed to retain the SAFF Championship, losing 2–0 to Afghanistan in the 2013 final.[51] After more poor results in friendlies, Koevermans resigned as head coach in October 2014.[52]
By March 2015, after not playing any matches, India reached their lowest FIFA ranking position of 173.[53] A couple of months prior, Stephen Constantine had been re-hired as head coach after first leading India more than a decade before.[54] Constantine's first major assignment back as India head coach was the 2018 FIFA World Cup qualifiers. After making it through the first round of qualifiers, India crashed out in the second round, losing seven of their eight matches and thus, once again, failing to qualify for the World Cup.[55]
The following 34 players were called up prior to the 2017 Hero Tri-Nation Series matches, as well as the 2019 AFC Asian Cup qualifier against Macau.[56]
Caps and goals are updated as of 26 August 2017 after the match against Saint Kitts and Nevis.
The following players have also been called up to the India squad within the last twelve months.
For all past match results of the national team, see the team's results page.
India have never participated in a FIFA World Cup.[57] After gaining independence in 1947, India managed to qualify for the World Cup held in 1950. This was due to Myanmar, Indonesia, and the Philippines withdrawing from qualification.[57] However, prior to the start of the tournament, India themselves withdrew due to the expenses required in getting the team to Brazil.[57] Other reasons cited for why India withdrew include FIFA not allowing Indian players to play in the tournament barefoot and the All India Football Federation not considering the World Cup an important tournament compared to the Olympics.[57]
After withdrawing from the 1950 FIFA World Cup, India did not enter the qualifying rounds of the tournament between 1954 and 1982.[58] Since the 1986 qualifiers, with the exception of the 1990 edition of the tournament, the team have participated in qualifying but have yet to qualify for the tournament again.[58]
India have qualified for the AFC Asian Cup three times. The team played their first Asian Cup in 1964. During this tournament India finished as the runners-up, their best major tournament performance yet.[59] Since then India has failed to progress beyond the first round of the Asian Cup with their most recent participation being the 2011 Asian Cup.
Since independence, there have been eighteen different head coaches for the India national team, ten of them foreign. The most successful head coach for India was Syed Abdul Rahim, who led India to gold in both the 1951 and 1962 Asian Games while also achieving a fourth-place finish at the 1956 Summer Olympics.[60] The most successful foreign head coach was Bob Houghton, who coached the side from 2006 to 2011.[61] With Houghton in charge, India won the Nehru Cup twice and the AFC Challenge Cup in 2008, which allowed India to participate in their first AFC Asian Cup in 27 years.[61]
Who is the longest running actor in emmerdale?
Chris Chittell🚨Emmerdale (known as Emmerdale Farm until 1989) is a long-running British soap opera set in Emmerdale (known as Beckindale until 1994), a fictional village in the Yorkshire Dales. Created by Kevin Laffan, Emmerdale Farm was first broadcast on 16 October 1972. Produced by ITV Yorkshire, it has been filmed at their Leeds studio since its inception. The programme has been broadcast in every ITV region.
The series originally appeared during the afternoon until 1978, when it was moved to an early-evening time slot in most regions; London and Anglia followed during the mid-1980s. Until December 1988, Emmerdale took seasonal breaks; since then, it has been broadcast year-round.
Episodes air on ITV weekday evenings at 19:00, with a second Thursday episode at 20:00. The programme began broadcasting in high definition on 10 October 2011. Emmerdale is the United Kingdom's second-longest-running television soap opera (after ITV's Coronation Street), and attracts an average of five to seven million viewers per episode.
October 2012 marked the 40th anniversary of the show. During that month, the show broadcast a live episode to mark the anniversary.
The premise of Emmerdale Farm was similar to the BBC radio soap opera The Archers, focusing on a family, a farm and characters in a nearby village. The programme's farmyard filming was originally modelled on RTÉ's The Riordans, an Irish soap opera which was broadcast from the mid-1960s to the end of the 1970s. The Riordans broke new ground for soap operas by being filmed largely outdoors (on a farm, owned on the programme by Tom and Mary Riordan) rather than in a studio, the usual practice of British and American soap operas. The programme pioneered farmyard location shooting, with farm animals and equipment. During the 1960s and 1970s, outdoor filming of television programmes with outside broadcast units (OBUs) was in its infancy due to higher costs and reliance on the weather. The Riordans' success demonstrated that a soap opera could be filmed largely outdoors, and Yorkshire Television sent people to its set in County Meath to see the programme's production firsthand.[3][4]
Emmerdale has had a large number of characters since it began, with its cast gradually expanding in size. The programme has also had changing residences and businesses for its characters, including a bed-and-breakfast and a factory.
The Miffield estate was the largest employer in the village of Beckindale, 39 miles (63 km) from Bradford and 52 miles (84 km) from Leeds. Lord Miffield leased Emmerdale Farm, on the edge of the village, to the Sugden family during the 1850s in gratitude after Josh Sugden sacrificed his life for the earl's son in the Crimean War. Josh's grandson Joseph married Margaret Oldroyd and their son, Jacob, was born in January 1916. During the 1930s, Jacob Sugden purchased Emmerdale Farm. In 1945 he married Annie Pearson, daughter of farm labourer Sam Pearson. Margaret Sugden died in 1963, and Joseph died the following year.
Jacob Sugden ran the farm into the ground, drinking away its profits. The badly-maintained farm's future looked bleak at his death on 10 October 1972. He was survived by his wife Annie, two sons and a daughter: Jack, the eldest; Peggy and Joe, the youngest of the three. These characters formed the basis of Emmerdale Farm.
Character types on Emmerdale have included "bad boys", such as Cain Dingle, Ross Barton, Carl King, Robert Sugden and Aaron Livesy; "bitches", such as Kim Tate, Charity Tate, Nicola King, Chrissie White, Kelly Windsor and Sadie King; "villains", such as Cameron Murray, Lachlan White, Steph Stokes, Rosemary King, Gordon Livesy and Sally Spode; caring characters, such as Laurel Potts, Emily Kirk, Lisa Dingle, Paddy Kirk and Ruby Haswell; sassy women, such as Chas Dingle, Val Pollard, Viv Hope, Rebecca White, Nicola King, Leyla Harding and Belle Dingle, and comedy characters such as Kerry Wyatt, Bernice Blackstock, David Metcalfe, Val Pollard, Seth Armstrong, Dan Spencer and Jimmy King. The show has had a number of matriarchs, including Diane Sugden, Viv Hope, Lisa Dingle, Annie Sugden and Moira Barton. Older characters in Emmerdale include Edna Birch, Betty Eagleton, Pearl Ladderbanks, Sandy Thomas, Seth Armstrong, Alan Turner, Sam Pearson, Lily Butterfield and Len Reynolds.
The first episode of Emmerdale Farm, aired on 16 October 1972, began with Jacob Sugden's funeral. Jacob upset the family when he left the farm to his eldest son, Jack, who had left home at 18 in 1964 and had not returned. Jack appeared in the opening episode, avoiding the funeral and waiting for the Sugdens at Emmerdale Farm. Over the next few months Jack sold a share of the farm to Annie, Joe, Peggy and his grandfather, Sam Pearson. Emmerdale Farm Ltd was formed when Henry Wilks bought Sam's share of the estate. The first episode, along with the others, has been repeated and released on a variety of media.[5]
Characters introduced in the first episode were:
The show's focus, initially on the farm and the Sugden family, moved to the nearby village of Beckindale. Reflecting this change, on 14 November 1989 its title was changed to Emmerdale. Coinciding with the title change was the introduction of the Tate family. These changes and more exciting storylines and dramatic episodes, such as Pat Sugden's 1986 car crash and the 1988 Crossgill fire, gradually began to improve the soap's popularity under new executive producer Keith Richardson. Richardson produced the programme for 24 years, overseeing its transformation from a minor, daytime, rural drama into a major UK soap opera.[6] The Windsor family arrived in 1993.
By 1993 Emmerdale was beginning its third decade on the air. In December of that year, one particular episode emerged as a major turning point in the show's history. Meant to evoke memories of the Lockerbie disaster, whose fifth anniversary was just nine days prior, the Emmerdale episode written for 30 December attracted its highest-ever audience (over 18 million) by featuring a plane crashing into the village, killing four people.[7] According to Nick Smurthwaite, the episode brought forth not just ratings but "complaints from aghast viewers."[7] Nevertheless, the episode proved to be "brilliant television", as it "allowed the writers to get rid of much dead wood, and reinvent the soap virtually from scratch", which included survivors changing the village name from Beckindale to Emmerdale.[7]
Emmerdale had dramatic storylines for the rest of the 1990s and new long-term characters, such as the Dingle family, were introduced. The Tates became the soap's leading family during the decade, overshadowing the Sugdens and remaining at Home Farm for 16 years. Family members left or died and the last, Zoe, left in 2005. The early and mid-2000s included episodes with a storm (a similar, less-major storyline 10 years after the plane crash), a bus crash, the Kings River explosion, Sarah Sugden's death in a barn fire and the Sugden house fire (set in 2007 by Victoria Sugden, who was seeking the truth about her mother's death). It also saw the introduction of major long-term characters, including the King family and Cain and Charity Dingle (who left before returning in 2009).[8]
In 2009 the longest-tenured character, Jack Sugden, was killed off following the death of actor Clive Hornby (who had played Jack since 1980). Jack's funeral featured the first on-screen appearance in 13 years of Annie Sugden (Sheila Mercier). Early that year, executive producer Keith Richardson was replaced by former series producer Steve November (later replaced by John Whiston). Gavin Blyth became the series producer; after Blyth's death, Stuart Blackburn took over.
Emmerdale celebrated its 40th anniversary on 16 October 2012. On 1 May 2012, it was announced that the show would have its first-ever live episode.[9] On 25 June 2012, it was announced that Tony Prescott, who had directed the 50th-anniversary live episode of Coronation Street in December 2010, would direct the episode.[10] On 23 July it was reported that an ITV2 backstage show, Emmerdale Uncovered: Live, would be broadcast after the live episode.[11] On 14 August, it was announced that the production team was building a new Woolpack set for the live episode. Although Emmerdale's village and interior sets are miles apart, its producers wanted The Woolpack to feature in the live episode.[12] On 31 August, it was announced that Emmerdale had created and filmed a live music festival with performances by Scouting for Girls and The Proclaimers.[13] On 6 September, it was confirmed that the one-hour live episode would include an unexpected death, two weddings and two births.[14]
Emmerdale Live aired on 17 October 2012, in the middle of the 40th anniversary week, with the death revealed to be Carl King's. The story of Carl's death took the show into 2013, when a new series producer replaced Blackburn (who became producer of Coronation Street).
At the beginning of August 2015, Emmerdale introduced a new storyline: "Summer Fate", with the tagline "The choices we make are the paths we take. Who will meet their summer fate?". A promo for the storyline was released on 13 July. A disaster storyline had been rumoured, confirmed by the promo. The disaster was identified on 1 August, two days before the disaster week began, as a helicopter crash. The crash was triggered by an argument between Chrissie and Robert Sugden; Chrissie set Robert's car ablaze, causing exploding gas canisters to collide with a helicopter. The helicopter crashed into the village hall during Debbie Dingle and Pete Barton's wedding reception. Regular characters Ruby Haswell and Val Pollard were killed in the aftermath of the crash. Although Ross Barton was apparently murdered by his brother Pete, it was learned three weeks later that he survived.
Emmerdale has featured a number of families, some defining an era of the show:
The Sugdens and their relatives, the Merricks and the Skilbecks, were at the centre of the show during the series' first two decades in the 1970s and 1980s (the Emmerdale Farm era). The Sugdens, owners of Emmerdale Farm, were its first family. Many of its members, and those of the Merrick and Skilbeck families, have left or been killed off since the mid-1990s.
December 1984 saw the arrival of Caroline Bates; her teenage children, Kathy and Nick, followed in late 1985. Caroline left the show in 1989, returning for guest appearances in 1991, 1993-1994 and 1996. Nick was written out of the show when he was sentenced to ten years in prison in 1997. Kathy and her niece, Alice, remained in the village until late 2001; by then, Kathy had outlived two husbands. Through her, the Bateses are related to two of Emmerdale's central families: the Sugdens (through Jackie Merrick) and the Tates (through Chris Tate).
Sugdens remaining in the village include Jack's widow, Diane; his three children, Andy, Robert and Victoria Sugden; Andy's children Sarah and Jack (the latter born on the show's 40th anniversary), and Robert's ex-wife Chrissie. Other families followed: the middle-class Windsors in 1993 (known as the Hope family after Viv's 2001 remarriage to Bob Hope) and the ne'er-do-well Dingles in 1994.
The Tate, Windsor-Hope and Dingle families predominated during the 1990s and 2000s. The era's storylines included the 1993 plane crash, the 1994 Home Farm siege, the 1998 post-office robbery, the 2000 bus crash, the 2003–04 storm and the 2006 King show-home collapse. By the mid- to late-2000s, the last of the Tates (Zoe, daughter Jean and nephew Joseph) had emigrated to New Zealand. In 2009, Chris Tate's ex-wife Charity and their son Noah returned to the village. Members of the Windsor-Hope family left the village in early 2006, and Viv Hope was killed off in a village fire in February 2011 after nearly 18 years on the show. As of 2015 only Donna Windsor's daughter, April, and the Hope branch of the family (Bob and his children, Carly and twins Cathy and Heathcliff) remain.
The King family arrived in 2004 (as the Tates departed), but many members have been killed off. In 2013, most of the Dingles remained. Their circumstances had changed in their two decades in the village; Chas Dingle owned half of The Woolpack and Marlon was a chef. As of 2014, the Dingles, Bartons and Whites are the central families; the Bartons are a farming family, and the Whites currently own Home Farm.
Over the years, Emmerdale has highlighted a range of social issues, including rape, cancer, miscarriage, dementia, homosexuality, arson, murder, HIV, sexual assault, post-traumatic stress disorder, brain aneurysm, adultery, domestic violence, financial problems, embezzlement, sexual abuse, alcoholism, drug addiction, anorexia, teenage pregnancy, gambling addiction, bereavement, fraud, suicide, mesothelioma, schizophrenia, manslaughter, becoming a parent in later life, sudden infant death syndrome, self-harming, assisted suicide, epilepsy and premature births.
An average Emmerdale episode generally attracts 6–8 million viewers, making it one of Britain's most popular television programmes. During the 1990s, the series had an average of 10–11 million viewers per episode. On 30 December 1993, Emmerdale had its largest audience (18 million) when a plane crashed into the village. On 27 May 1997, 13 million viewers saw Frank Tate die of a heart attack after the return of his wife Kim. On 20 October 1998, 12.5 million viewers saw The Woolpack explode after a fire.
The village storm on 1 January 2004 attracted 11.19 million viewers. The 18 May 2004 episode, in which Jack Sugden was shot by his adopted son Andy, attracted 8.27 million viewers. On 17 March 2005, 9.39 million watched Shelly Williams fall from the Isle of Arran ferry. Zoe Tate left the show after 16 years on 22 September 2005 before 8.58 million viewers, marking her departure by blowing up Home Farm. On 13 July 2006, the Kings River house collapse was seen by 6.90 million viewers. Cain Dingle left on 21 September 2006, before an audience of 8.57 million viewers. On Christmas Day 2006, 7.69 million saw Tom King murdered on his wedding day. Billy Hopwood crashed his truck into a lake on 1 February 2007, attracting 8.15 million viewers. The end of the "Who Killed Tom King?" storyline, on 17 May 2007, had an audience of 8.92 million.
On 14 January 2010, 9.96 million saw Mark Wylde shot dead by his wife Natasha. Natasha's 27 October confession to daughter Maisie attracted an audience of nearly 8 million. On 13 January 2011, 9.15 million saw a fire kill Viv Hope and Terry Woods. The live 40th-anniversary episode on 17 October 2012 drew an audience of 8.83 million. On 16 October 2013, 8.37 million watched Cameron take the occupants of The Woolpack hostage and shoot Alicia. The next day, 9.28 million viewers saw Cameron Murray die.[15]
Location shooting originally took place in the village of Arncliffe in Littondale, a quiet valley in the Yorkshire Dales. The Falcon, the village hotel, served as the fictional Woolpack Inn. When the filming location became public knowledge, production moved to the village of Esholt in 1976, where it remained for 22 years.
Filming returned to Esholt for a one-off episode in 2016, the Ashley Thomas dementia special, which aired in December 2016. The location was used to represent Ashley's onset of dementia to the viewer.
The original Emmerdale Farm buildings are near the village of Leathley, while Creskeld Hall in Arthington serves as Home Farm. The buildings are among the few original filming locations used for the entire series, and have been involved in many storylines.
Construction of a purpose-built set began on the Harewood estate in 1996, and it has been used since 1997. The first scenes filmed on the set (the front of The Woolpack) were broadcast on 17 February 1998. The Harewood set is a replica of Esholt, with minor alterations.
The Harewood houses are timber-framed and stone-faced. The village is built on green-belt land, with its buildings classified as "temporary structures" which must be demolished within ten years unless new planning permission is received. There is no plan to demolish the set, and a new planning application has been drawn up. The set includes a church and churchyard, where the characters who have died on the series are buried.
Butlers Farm is Brookland Farm, a working farm in the nearby village of Eccup. Farmyard and building exteriors are filmed at Brookland, with interior house shots filmed in the studio.
Location filming is also done in the City of Leeds and other West Yorkshire locations; the fictional market town of Hotten is represented by Otley, 10 miles north-west of Leeds. Benton Park School in Rawdon and the primary school in Farnley have also been used for filming. Interiors are primarily filmed at Yorkshire Television's Emmerdale Production Centre in Leeds, next to Yorkshire's Leeds Studios.[16] Since 28 March 2011, HD-capable studios in the ITV Studios building have been used for most of the interior scenes.
Four farms have been featured on Emmerdale:
Emmerdale's first sponsor (from 14 December 1999 to 20 February 2002) was Daz detergent, followed by Heinz Tomato Ketchup and Heinz Salad Cream from May 2003 to May 2005. Reckitt Benckiser took over until 2009, advertising Calgon, Air Wick, Veet, and Lemsip. Tombola Bingo underwrote the show from November 2009 to March 2012, followed by Bet365 Bingo until March 2014. McCain Foods began a two-year, £8 million sponsorship on 7 April 2014.[17]
The thirteen actors who have appeared in the series for 20 years or more are listed in the table below. The longest-tenured actor is Chris Chittell who has played Eric Pollard for 31 years. The longest-tenured actress is Sheila Mercier, who played Annie Sugden for 22 years.
Emmerdale was first broadcast two afternoons a week in 1972, and it later moved to a 19:00 slot. The number of episodes has increased, to its current six half-hour episodes each week. Each episode is filmed two to four weeks before it is broadcast on ITV.
Emmerdale reaches viewers in the Republic of Ireland via UTV Ireland, which broadcasts the series simultaneously with ITV in the UK via a live feed from London; breaking news on ITV would therefore interrupt the broadcast. Emmerdale was broadcast during the day on RTÉ One from 1972 to 2001 before it moved to TV3. RTÉ were several months behind; for many years, they broadcast the show five days a week (instead of ITV's three days a week) and took a break during the summer. When the series moved to a five-night week, RTÉ fell further behind the ITV broadcasts; the gap between RTÉ One's last episode and TV3's first episode was about three months.[18]
The series has appeared in Sweden as Hem till gården ("Home to the Farm") since the 1970s, originally on TV2 and since 1994 on TV4. Two episodes are broadcast on weekdays at 11:35. Emmerdale is the most-watched daytime non-news programme in Sweden, attracting 150,000 to 200,000 viewers daily.[19] Episodes are repeated overnight on TV4 and in prime time on the digital channel TV4 Guld.
The programme appears in Finland on MTV3 in primetime, with two episodes a day broadcast Monday to Friday at 17:55–18:25 and 18:25–18:55, and repeats of each episode the following weekday morning between 9:00 and 11:00. Episodes originally aired in the UK in June and July 2016 were broadcast in Finland in March 2017. Emmerdale attracts an average of 350,000 to 450,000 viewers per episode, and is the most-watched non-Finnish weekday programme on Finnish television.[20]
Emmerdale is broadcast in New Zealand weekdays on ONE, with an hour-long episode Monday to Thursday and a half-hour episode on Friday from 12:30 to 13:00. It is the second-most-watched daytime programme, after the news.[21] Episodes are broadcast a month behind ITV's.
Emmerdale was broadcast in Australia for the first time in July 2006, when UKTV began airing the 2006 series with episode 4288.[22][23] As of April 2016, UKTV episodes were from July 2014, twenty-one months behind the UK airings.
What sports did jackie robinson play besides baseball?
football🚨