Tuesday, December 31, 2019

The 2007-2008 Financial Crisis: Causes, Impacts and the Need for New Regulations

The initial cause of the financial turbulence is attributed to the U.S. sub-prime residential mortgage market. The sustained rise in asset prices, particularly house prices, on the back of excessively accommodative monetary policy and lax lending standards during 2002-2006; increased innovation in new financial instruments; unusually low interest rates, which produced a large rise in mortgage credit to households, particularly low credit quality households; investors' greed for ever higher returns coupled with very minimal down payments; and a heavy dependence on the major global rating agencies together allowed complex investment products to be sold to an ...

These corporations are known as the "monoline" insurers or "monoline" guarantors, and they became another casualty of the financial crisis. Globally, many financial institutions had purchased these new, promising guarantees of debt. But every good item has a bad side, and several of these factors started to emerge alongside one another. Insolvency at one of these institutions could threaten the solvency of many others. When the "monoline" insurers started to fall into insolvency, the market became illiquid. Suddenly, financial institutions were short of cash and, in some cases, insolvent. Among those affected were Goldman Sachs, Merrill Lynch, and Bear Stearns. But at the end of the day, the worst affected by this financial crisis were the mortgage borrowers.

Most of these "monoline" insurers did not have adequate capital to fulfill their guarantee promises. Investors' dependence on the high ratings assigned to these institutions by the major global rating agencies put them in a position where they could experience enormous losses. In order to survive, many banks turned to sovereign wealth funds to obtain new capital. Bad news continued to pour in from all sides. By August 2007 it was clear that the financial market could not solve the subprime crisis on its own and that the problems had spread beyond U.S. borders. Lehman Brothers filed for bankruptcy, and Bear Stearns was acquired by JP Morgan Chase.

Sunday, December 22, 2019

The Five Pillars of Transitional Justice

Patrick Mulvenna, B00700426

Transitional Justice - Truth

In this essay I will write about one of the five pillars of transitional justice. Transitional justice is a set of judicial and non-judicial procedures and laws that have been implemented by various countries in order to deal with legacies of injustices to people's human rights. The core value of transitional justice is the belief in justice. In an era of transition, transitional justice can provide an opportunity for society to address past atrocities and human rights abuses. Its main objective is to establish the rule of law. The pillars of transitional justice are institutional reform, truth, reparations, reconciliation, and prosecution and amnesty.

The pillar I have chosen to write about is truth: something that is true or in accordance with fact or reality. The reason I have chosen truth is that in my example I am going to focus on the Ballymurphy Massacre. I have chosen to write about this particular legacy because I am from the Ballymurphy area and over the years I have watched the process of truth unveiling around this historical issue. Truth-seeking finds its application to mass atrocities in post-conflict situations mainly through Truth and Reconciliation Commissions (TRCs) (Hayner, 1994).

With uncovering truth there are always going to be advantages and disadvantages. One disadvantage is that victims may be afraid to come forward in accusing the police or the military of murder.

Saturday, December 14, 2019

A Man's Best Friend

The people of this time adopted orphaned wolf pups and nursed them alongside small children and babies. Once generations of these tamed wolves started to breed, more domesticated versions followed, eventually giving way to more dog-like attributes. These dogs proved loyal companions in many ways: participating in hunting rituals, creating strong companionship with their owners, and protecting the communities they were raised in. As the population of humans started dispersing around the globe, many varieties of dog breeds followed suit. Depending on the area, many breeds were specialized to perform a specific role. Breeds such as Beagles and Dachshunds were bred specifically for hunting, and breeds such as Australian Shepherds and Border Collies for herding. Whatever the task, all dogs showed infinite amounts of companionship, which entitled them to the phrase "man's best friend." Although today dogs are usually found in homes, they have never lost the love and companionship they share with their owners, which is why so many households in the U.S. own at least one. Dogs create an atmosphere of love, loyalty, and compassion, earning their name and the phrase by which we all know them.

The emotional attachment that dogs have to their owners is unmatched by any other animal. Dogs do not judge their owner or hold a grudge. Canines, unlike other species, forgive and forget quickly, which is evident in their ancestors: wolf packs resolve disagreements instantaneously and move on. Since an owner is all that a dog has, they pick up on nonverbal cues that would be unattainable to a stranger, or even a family member. Humans come in contact with so many people during the day that they have forgotten this useful tool, but their canine hasn't. Each day when they get home, the dog is waiting for them, as happy as the day before to greet them. Dogs help their owners unwind and relieve stress. This resolves issues and tension that humans build up in the struggle of everyday life. Playing and interacting with dogs leads to increased levels of oxytocin, a stress-relieving hormone, while decreasing cortisol, a stress-inducing hormone. "Both pet ownership and social support are significant predictors of survival, independent of the effects of the other psychosocial factors and physiologic status" (Marathons), confirmed heart disease researchers who studied this theory. Socializing with a dog not only reduces the effects of aging, such as a heart attack or stroke, but also enhances the longevity of the owner. An Arizona school principal who has merged learning with canines says, "You can always talk to a dog, and the dog's not going to judge you" (Marathons). This is why many dog owners mourn so dramatically when they lose their canine counterpart: they didn't just lose a pet, but a best friend. The willingness of a dog to just sit and listen, while the owner weeps, laughs, or yells, gives the owner a place to vent, just as a friend or family member would. The canine's compassion is the owner's Achilles' heel. The life of a dog includes many of the things its owner experiences as well, which creates a bond almost unbreakable by any occurrence.

These experiences create a profound relationship with the owner, as strong as with a family member, because of the sheer amount of joy the dog brings to its handler. The dog loves them the same, even after getting laid off, being poor, or having a bad day. Each day when the owner comes home, they know the dog will be there, happy to have them in their life, and that's why the dog is a man's best friend.

There is also a scientific reason why dogs are known as man's best friend. A Hungarian team of scientists recently discovered that dogs are very similar to humans when it comes to vocal recognition. "Crucially, dog brains respond to acoustic cues of emotion in the same way as people," said Attila Andics, the head of the research team. "Our findings suggest that they also use similar brain mechanisms to process social information. This may support the successfulness of vocal communication between the species" (Spencer). This is why dogs not only know when we are mad, but also when we are depressed, sad, or discouraged. Dogs hear the tonal patterns in our voices and react to them. When a person is sad, dogs hear this and come to comfort their owner. Using human and dog brain activity, the researchers tested the subjects with various noises. These included various types of emotional sounds: crying, painful whimpering, playful laughing, and yelling. The results concluded that humans and their canine counterparts have similar voice areas in the brain. Another experiment, by scientists at Emory University in Georgia, shows that dogs do not simply want the food their owners provide; they want the affection and praise. These scientists used fMRI scans to inspect the brains of dogs during several tests. The scientists raised their hands to show the dogs they would be receiving positive reinforcement (a treat). The researchers found that a part of the brain called the caudate nucleus, responsible for positive emotion, was congruent in canines and humans. This strengthens the idea that dogs really do love their owners for more than just the extra incentives they provide; they want the same things that humans want, companionship and loyalty. The experiments show not only that dogs feel genuine love for their owners, but also that the companionship between owner and dog is mutual. The anatomical similarities in the brain scans of dogs and humans show that the way humans feel about dogs is also reciprocated. Dogs are truly man's best friend, and the science behind the phrase supports it.

The lineage of the domesticated canine spans thousands of years of domestication and specialized breeding; however, the dog has proven a loyal companion throughout it all. Dogs are a common presence in everyday households, and every person who owns one knows the impact they have on the lives of the handlers they affect. Dogs pour all of their loyalty, compassion, and effort into their owner, a trait the world would be lucky to attain.

Friday, December 6, 2019

Pathophysiology of Alzheimer's Disease

Question: Discuss the pathophysiology of Alzheimer's disease in relation to the presenting signs and symptoms. Discuss the pharmacological management in relation to symptom relief and ongoing management.

Answer:

Introduction

As per the Alzheimer's Association (2013), AD is an irreversible, progressive neurological disorder that slowly impairs thinking ability and memory, and eventually the capability to perform normal activities. The main characteristics of this disease are the aggregation of amyloid plaques and neurofibrillary tangles in the brain. Another feature is the loss of communication between the neurons that transmit messages within the brain and to different parts of the body. This report discusses the pathophysiology of the disease, based on the symptoms and case history of the patient, Antonio Renaldi, who is suffering from mild to severe short-term memory loss and related problems associated with AD. The report also aims to provide information about the different pharmacological and non-pharmacological treatments suitable for Antonio, as per his symptoms. Although the disease has no cure, research is still ongoing into how to slow the death of neuronal cells. Lastly, suggestions are given regarding the proper management of the behavioural symptoms so as to provide comfort to AD patients; since Antonio is also suffering from social withdrawal, ideas are given for making things easier for him.

Discussion

Alzheimer's disease is a progressive, irreversible neurological disorder that results in impaired memory, loss of thinking skills and cognitive decline (Nelson et al., 2012). The disease normally appears in the mid-60s and worsens if treatment is not provided properly. It is a common type of dementia. Much research has been done on Alzheimer's disease, but the main reasons why it occurs remain largely unknown (Birren et al., 2013).

Diagnosis of the disease

According to the report, the patient has had a CT of the brain. Normally a CT of the brain discloses the brain anatomy. A person with AD would show a loss in brain mass, because the hippocampus would be atrophied. The mini mental assessment taken by the patient also shows a low score. The Mini-Mental State Examination (MMSE), or Folstein test, involves a 30-point questionnaire used to measure cognitive impairment. It is normally used in medicine to screen for dementia, to measure the degree of cognitive impairment, and as an effective way of understanding a patient's response to treatment. A score equal to or greater than 24 indicates normal cognition. The score obtained by Antonio is quite low, which clearly indicates impaired cognition. The case study also says that the patient had a full blood test. A blood test could detect AD before the symptoms appear (Montine et al., 2012). Recent research done by the University of Otago has discovered a marker in the blood that could help screen for Alzheimer's through a single blood test. Recent studies have shown that microRNAs found in the blood and the brain can detect AD with almost 86 percent accuracy (Denk et al., 2015).
Pathophysiology of Alzheimer's disease

As per the information provided in the case study, Antonio Renaldi is suffering from mild to severe short-term memory loss, which has been caused by impaired communication between the neurons, or neuronal death. AD is a very common type of dementia. AD specialists can recognise its histopathological features within the brain. One such feature is the presence of neurofibrillary tangles and amyloid plaques (Jiang et al., 2013). Amyloid protein accumulates outside the neurons, and a modified form of tau protein aggregates inside the nerve cells. AD impairs the proper functioning of synapses and neurons. Trillions of synapses allow nerve impulses to travel through the brain circuits. The accumulation of these proteins prevents communication between neurons, and eventually the cells die. In AD the tau tangles prevent the transfer of nutrients within the neurons and ultimately cause cell death. In certain cases the brain can also show abnormal shrinkage due to cell loss and debris from the dying neurons. The damage occurs first in the hippocampus, the portion of the brain that stores memory, but eventually additional parts are also affected, causing severe short-term memory loss, as seen in the case of Antonio Renaldi. It has been reported that in some cases genetic mutations are also responsible for AD (Heppner, Ransohoff & Becher, 2015). Mutations in the genes expressing amyloid precursor protein, presenilin 1 and presenilin 2 potentially cause AD. Individuals suffering from AD often have difficulty finding words to communicate with people. Hence they detach themselves from social gatherings and often suffer from depression. Patients with AD often suffer from sleeping disorders, so they sometimes wake up early. Antonio Renaldi is seen to have developed all these symptoms, which indicates the occurrence of AD.

There are three stages of AD: preclinical AD, mild cognitive impairment due to AD, and dementia due to AD. The preclinical stage involves mild changes in the brain. Blood tests may show signs of the disease, but the more significant symptoms, like memory loss, do not appear at first. This presymptomatic stage may last for 20 years or more. Patients with mild cognitive impairment have very mild changes in thinking capability that are normally recognisable to the patient's family and friends, although this does not prevent the patient from performing everyday activities.

Treatments of AD

Pharmacological treatment

These are treatments where medications are administered to prevent the illness or treat its signs. The medications that are normally given cannot stop the death of the neurons, though they can delay the process (Salomone et al., 2012). Certain drugs can temporarily increase the levels of particular neurotransmitters. Acetylcholine is a neurotransmitter essential for learning and memory processing; in patients with AD it is decreased. This deficit, along with other presynaptic cholinergic deficits and decreased cholinergic activity, underlies the cholinergic hypothesis of Alzheimer's disease and contributes to cognitive decline and neuronal damage in AD patients. Three cholinesterase inhibitors are prescribed to treat the mild to severe stages of Alzheimer's: donepezil (Aricept) is prescribed for treating all phases of Alzheimer's.
Rivastigmine (Exelon) and galantamine (Razadyne) are prescribed for treating mild to moderate Alzheimer's. For the sleeping disorders found in AD patients like Antonio Renaldi, antidepressants such as nortriptyline and trazodone can be prescribed. Sleeping pills like zolpidem, chloral hydrate and zaleplon, antipsychotics like quetiapine and risperidone, and classical antipsychotics like haloperidol can also be considered, as Antonio has been suffering from sleeping disorders (Doody et al., 2013). The risks of sleep-inducing medicines for elderly people, such as falls and accidents, should be kept in mind. If these medicines are used at all, they should not be continued after a regular sleeping pattern has been established. Non-drug methods should be tried before using medicines, as some medications might cause side effects.

Non-pharmacological therapies

As per the information given in the case study, the person is suffering from social withdrawal and depression. Therefore the health care providers or the family members should focus on the active management of the disease. Despite the fact that AD does not have any definitive treatment, studies have shown that active management of the disease can improve the condition of patients across the various stages of the disease. Active management involves 1) proper application of the treatment procedures, 2) management of coexisting conditions, 3) care and support provided by the health caregivers, and 4) participation in constructive, supportive activities (Burke et al., 2015). Non-pharmacological therapies involve cognitive training and behavioural interventions. Such programmes often delay brain ageing in AD patients. Non-pharmacological interventions are mainly useful for coping with the depression, wandering, sleep disorders, aggression and agitation that Antonio Renaldi is facing. There is research that supports the effectiveness of these non-pharmacological therapies (Horr, Messinger-Rapport & Pillai, 2015). This would in some way improve patients' quality of life, cognitive function, mood, behaviour and performance of daily activities. Additional studies say that remaining socially and mentally active reduces the risk of AD and other dementias. Thus it can be said that proper management of this disease can improve quality of life and extend the life span of AD patients (Leon, Garcia & Marco-Contelles, 2013).

Conclusion

In the above case study, the patient has been suffering from short-term memory loss, depression, sleeping disorder and loss of confidence, and thus withdrawal from society. His age and symptoms indicated dementia. The CT reports of the brain have shown shrinkage due to the loss of neuronal cells. The blood tests conducted also pointed towards Alzheimer's disease. Histopathological findings in AD further involve the formation of neurofibrillary tangles and amyloid plaques that prevent the flow of nerve impulses through the neurons, ultimately resulting in neuronal death. Although there are certain medications that could be used to reduce the problems faced by Antonio, like sleeping disorders and memory loss, emphasis is also given to non-drug treatment, in view of the patient's age.
Henceforth it can be concluded that active management of the disease, namely 1) suitable application of the treatment options, 2) management of coexisting conditions, 3) care and support given by the health care providers and caregivers, and 4) participation in constructive, supportive activities, could at least make the patient's journey through the different stages of the disease a bit easier and happier.

References

Alzheimer's Association. (2013). 2013 Alzheimer's disease facts and figures. Alzheimer's & Dementia, 9(2), 208-245.

Birren, J. E., Cohen, G. D., Sloane, R. B., Lebowitz, B. D., Deutchman, D. E., Wykle, M., & Hooyman, N. R. (Eds.). (2013). Handbook of mental health and aging. Academic Press.

Burke, A., Hall, G. R., Yaari, R., Fleisher, A., Dougherty, J., Young, J., ... & Tariot, P. (2015). Pocket reference to Alzheimer's disease management. Springer Healthcare Limited.

Denk, J., Boelmans, K., Siegismund, C., Lassner, D., Arlt, S., & Jahn, H. (2015). MicroRNA profiling of CSF reveals potential biomarkers to detect Alzheimer's disease. PLoS One, 10(5), e0126423.

Doody, R. S., Raman, R., Farlow, M., Iwatsubo, T., Vellas, B., Joffe, S., ... & Aisen, P. S. (2013). A phase 3 trial of semagacestat for treatment of Alzheimer's disease. New England Journal of Medicine, 369(4), 341-350.

Heppner, F. L., Ransohoff, R. M., & Becher, B. (2015). Immune attack: the role of inflammation in Alzheimer disease. Nature Reviews Neuroscience, 16(6), 358.

Horr, T., Messinger-Rapport, B., & Pillai, J. A. (2015). Systematic review of strengths and limitations of randomized controlled trials for non-pharmacological interventions in mild cognitive impairment: focus on Alzheimer's disease. The Journal of Nutrition, Health & Aging, 19(2), 141.

Jiang, T., Yu, J. T., Tian, Y., & Tan, L. (2013). Epidemiology and etiology of Alzheimer's disease: from genetic to non-genetic factors. Current Alzheimer Research, 10(8), 852-867.

Leon, R., Garcia, A. G., & Marco-Contelles, J. (2013). Recent advances in the multitarget-directed ligands approach for the treatment of Alzheimer's disease. Medicinal Research Reviews, 33(1), 139-189.

Montine, T. J., Phelps, C. H., Beach, T. G., Bigio, E. H., Cairns, N. J., Dickson, D. W., ... & Nelson, P. T. (2012). National Institute on Aging-Alzheimer's Association guidelines for the neuropathologic assessment of Alzheimer's disease: a practical approach. Acta Neuropathologica, 123(1), 1-11.

Nelson, P. T., Alafuzoff, I., Bigio, E. H., Bouras, C., Braak, H., Cairns, N. J., ... & Duyckaerts, C. (2012). Correlation of Alzheimer disease neuropathologic changes with cognitive status: a review of the literature. Journal of Neuropathology & Experimental Neurology, 71(5), 362-381.

Salomone, S., Caraci, F., Leggio, G. M., Fedotova, J., & Drago, F. (2012). New pharmacological strategies for treatment of Alzheimer's disease: focus on disease modifying drugs. British Journal of Clinical Pharmacology, 73(4), 504-517.

Friday, November 29, 2019

Freedom of Speech

It is a right that should be practiced everywhere in the world, yet in some countries it isn't. People must live in fear and be afraid to say anything. Only close friends share their opinions, because they know it won't be overheard by someone who will report it to the authorities. They can't criticize their government when it makes important decisions. Not having freedom of expression is holding them back. People have to pretend that everything being done by the authorities is splendid.

With technology being part of our daily life, expressing yourself has become even easier and lets you be heard by so many people. Most people use social media as a way to connect with friends and colleagues. Websites like Facebook and Twitter are like worldwide public forums where something you say can be heard by hundreds or thousands of people, especially when there's some heated debate going on about a hot topic. Times like these are when you have to be careful what you say. Yes, it is a great right to express yourself freely, but sometimes people forget that what they say can offend others and make them look bad in the long run. Alexandra Wallace is a perfect example: a UCLA student who last year posted a video on YouTube ranting about Asians being disrespectful at the library and using their cellphones. The video became famous within days, and with it came the ridicule. Many people on YouTube made reply videos making fun of her, found her Facebook page and sent her threatening messages, emailed her and even called her phone. Wallace withdrew from UCLA a couple of days later. This is a perfect example of why you should think before you say something on social media.

That brings up the idea of restricting the freedom of expression. What is the point of restricting it? It would be as if we didn't have the freedom of expression at all. If we do place restrictions on it, the First Amendment should come with an asterisk after it, and then you should go to the end of the page and find out what that asterisk means. You will end up reading tons of pages of "Terms and Conditions," like on some credit card application. Freedom of expression should not come with an asterisk after it; otherwise it should be renamed to something else and not include the word "freedom" in it. There is nothing free about it if there are going to be restrictions on what and how you can express yourself. Instead, people should be reminded about something called common sense. There are things that are fine to talk about and things that are frowned upon by society. That is where there are two big differences: when talking about something is frowned upon by the government and when it is frowned upon by society. When the government doesn't want you to talk about something, that is when you can face legal action for what you say. When it is frowned upon by society, you won't face legal action. Yes, you may be made fun of, but you won't go to jail for it. You should use common sense and figure out for yourself whether what you are about to say will turn out for good or bad.

Freedom of Speech

Freedom of speech is the freedom to speak freely without censorship.
The synonymous term freedom of expression is sometimes used to indicate not only freedom of verbal speech but any act of seeking, receiving and imparting information or ideas, regardless of the medium used. In practice, the right to freedom of speech is not absolute in any country, and the right is commonly subject to limitations, such as on hate speech. The right to freedom of speech is recognized as a human right under Article 19 of the Universal Declaration of Human Rights and recognized in international human rights law in the International Covenant on Civil and Political Rights (ICCPR). The ICCPR recognizes the right to freedom of speech as the right to hold opinions without interference: everyone shall have the right to freedom of expression. Furthermore, freedom of speech is recognized in European, inter-American and African regional human rights law. It is different from, and not to be confused with, the concept of freedom of thought.

The right to freedom of speech and expression

Concepts of freedom of speech can be found in early human rights documents, and the modern concept of freedom of speech emerged gradually during the European Enlightenment (Voltaire). England's Bill of Rights 1689 granted freedom of speech in Parliament, and the Declaration of the Rights of Man and of the Citizen, adopted during the French Revolution in 1789, specifically affirmed freedom of speech as an inalienable right. The Declaration provides for freedom of expression in Article 11, which states that: "The free communication of ideas and opinions is one of the most precious of the rights of man. Every citizen may, accordingly, speak, write, and print with freedom, but shall be responsible for such abuses of this freedom as shall be defined by law." Article 19 of the Universal Declaration of Human Rights, adopted in 1948, states that: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers." Today freedom of speech, or the freedom of expression, is recognized in international and regional human rights law. The right is enshrined in Article 19 of the International Covenant on Civil and Political Rights, Article 10 of the European Convention on Human Rights, Article 13 of the American Convention on Human Rights and Article 9 of the African Charter on Human and Peoples' Rights. Based on John Milton's arguments, freedom of speech is understood as a multi-faceted right that includes not only the right to express, or disseminate, information and ideas, but three further distinct aspects:

* the right to seek information and ideas;
* the right to receive information and ideas;
* the right to impart information and ideas.

International, regional and national standards also recognize that freedom of speech, as the freedom of expression, includes any medium, be it oral, written, in print, through the Internet or through art forms. This means that the protection of freedom of speech as a right includes not only the content, but also the means of expression.

Relationship to other rights

The right to freedom of speech and expression is closely related to other rights, and may be limited when conflicting with other rights (see Limitations on freedom of speech). The right to freedom of expression is also related to the right to a fair trial and court proceeding, which may limit access to the search for information or determine the opportunity and means in which freedom of expression is manifested within court proceedings. As a general principle freedom of expression may not limit the right to privacy, as well as the honor and reputation of others. However, greater latitude is given when criticism of public figures is involved. The right to freedom of expression is particularly important for media, which plays a special role as the bearer of the general right to freedom of expression for all. However, freedom of the press does not necessarily enable freedom of speech. Judith Lichtenberg has outlined conditions in which freedom of the press may constrain freedom of speech, for example where the media suppresses information or stifles the diversity of voices inherent in freedom of speech. Lichtenberg argues that freedom of the press is simply a form of property right summed up by the principle "no money, no voice."

Origins and academic freedom

Freedom of speech and expression has a long history that predates modern international human rights instruments. It is thought that ancient Athens' democratic ideology of free speech may have emerged in the late 6th or early 5th century BC. [12] Two of the most cherished values of the Roman Republic were freedom of religion and freedom of speech. In Islamic ethics, freedom of speech was first declared in the Rashidun period by the caliph Umar in the 7th century AD. In the Abbasid Caliphate period, freedom of speech was also declared by al-Hashimi (a cousin of Caliph al-Ma'mun) in a letter to one of the religious opponents he was attempting to convert through reason. According to George Makdisi and Hugh Goddard, the idea of academic freedom in universities was modelled on Islamic custom as practiced in the medieval madrasah system from the 9th century. Islamic influence was certainly discernible in the foundation of the first deliberately planned university in Europe, the University of Naples Federico II, founded by Frederick II, Holy Roman Emperor, in 1224. Starting in medieval times, Muslims began to refer to Manichaeans, apostates, pagans, heretics and those who antagonized Islam as zindiqs, the charge being punishable by death. As of the late 8th century the Abbasid caliphs began to hunt down and exterminate freethinkers in large numbers, putting to death anyone on mere suspicion of being a zindiq.

Freedom of speech, dissent and truth

Before the invention of the printing press, a writing, once created, could only be physically multiplied by the highly laborious and error-prone process of manual copying, and an elaborate system of censorship and control over scribes existed. Printing allowed for multiple exact copies of a work, leading to a more rapid and widespread circulation of ideas and information (see print culture). The origins of copyright law in most European countries lie in efforts by the Roman Catholic Church and governments to regulate and control the output of printers. In 1501 Pope Alexander VI issued a Bill against the unlicensed printing of books, and in 1559 the Index Expurgatorius, or List of Prohibited Books, was issued for the first time. The Index Expurgatorius is the most famous and long-lasting example of "bad books" catalogues issued by the Roman Catholic Church, which assumed responsibility to control thoughts and opinions, and suppressed views that went against its doctrines. The Index Expurgatorius was administered by the Roman Inquisition, but enforced by local government authorities, and went through 300 editions. Amongst others it banned or censored books written by Rene Descartes, Giordano Bruno, Galileo Galilei, David Hume, John Locke, Daniel Defoe, Jean-Jacques Rousseau and Voltaire. While governments and church encouraged printing in many ways, because it allowed for the dissemination of Bibles and government information, works of dissent and criticism could also circulate rapidly. As a consequence, governments established controls over printers across Europe, requiring them to have official licenses to trade and produce books.

The notion that the expression of dissent or subversive views should be tolerated, not censured or punished by law, developed alongside the rise of printing and the press. Areopagitica, published in 1644, was John Milton's response to the Parliament of England's re-introduction of government licensing of printers, hence publishers. [25] Church authorities had previously ensured that Milton's essay on the right to divorce was refused a license for publication. In Areopagitica, published without a license, Milton made an impassioned plea for freedom of expression and toleration of falsehood, stating: "Give me the liberty to know, to utter, and to argue freely according to conscience, above all liberties." Milton's defense of freedom of expression was grounded in a Protestant worldview, and he thought that the English people had the mission to work out the truth of the Reformation, which would lead to the enlightenment of all people. But Milton also articulated the main strands of future discussions about freedom of expression. By defining the scope of freedom of expression and of harmful speech, Milton argued against the principle of pre-censorship and in favor of tolerance for a wide range of views.

As the menace of printing spread, governments established centralised control mechanisms. The French crown repressed printing, and the printer Etienne Dolet was burned at the stake in 1546. In 1557 the British Crown thought to stem the flow of seditious and heretical books by chartering the Stationers' Company. The right to print was limited to the members of that guild, and thirty years later the Star Chamber was chartered to curtail the "greate enormities and abuses" of "dyvers contentyous and disorderlye persons professinge the arte or mystere of pryntinge or selling of books." The right to print was restricted to two universities and to the 21 existing printers in the city of London, which had 53 printing presses. As the British crown took control of type founding in 1637, printers fled to the Netherlands. Confrontation with authority made printers radical and rebellious, with 800 authors, printers and book dealers being incarcerated in the Bastille in Paris before it was stormed in 1789.

A succession of English thinkers developed the idea of a right to freedom of expression, starting with John Milton (1608-74), then John Locke (1632-1704) and culminating in John Stuart Mill (1806-1873). Locke established the individual as the unit of value and the bearer of rights to life, liberty, property and the pursuit of happiness. It was the role of Government to protect these rights, and this belief was first enshrined in the US Constitution, with the First Amendment adding the guarantee that "Congress shall make no law ... abridging the freedom of speech, or of the press." John Stuart Mill argued that human freedom is good and without it there can be no progress in science, law or politics, which according to Mill required free discussion of opinion. Mill's On Liberty, published in 1859, became a classic defence of the right to freedom of expression. Mill argued that truth drives out falsity; therefore the free expression of ideas, true or false, should not be feared. Truth is not stable or fixed, but evolves with time. Mill argued that much of what we once considered true has turned out false. Therefore views should not be prohibited for their apparent falsity. Mill also argued that free discussion is necessary to prevent the "deep slumber of a decided opinion." Discussion would drive the onwards march of truth, and by considering false views the basis of true views could be re-affirmed.

In Evelyn Beatrice Hall's biography of Voltaire, she coined the following phrase to illustrate Voltaire's beliefs: "I disapprove of what you say, but I will defend to the death your right to say it." Hall's quote is frequently cited to describe the principle of freedom of speech. In the 20th century Noam Chomsky stated that: "If you believe in freedom of speech, you believe in freedom of speech for views you don't like. Stalin and Hitler, for example, were dictators in favor of freedom of speech for views they liked only. If you're in favor of freedom of speech, that means you're in favor of freedom of speech precisely for views you despise." Professor Lee Bollinger argues that "the free speech principle involves a special act of carving out one area of social interaction for extraordinary self-restraint, the purpose of which is to develop and demonstrate a social capacity to control feelings evoked by a host of social encounters." Bollinger argues that tolerance is a desirable value, if not essential. However, critics argue that society should be concerned by those who directly deny or advocate, for example, genocide.

Democracy

The notion of freedom of expression is intimately linked to political debate and the concept of democracy. The norms on limiting freedom of expression mean that public debate may not be completely suppressed even in times of emergency. One of the most notable proponents of the link between freedom of speech and democracy is Alexander Meiklejohn. He argues that the concept of democracy is that of self-government by the people. For such a system to work, an informed electorate is necessary. In order to be appropriately knowledgeable, there must be no constraints on the free flow of information and ideas. According to Meiklejohn, democracy will not be true to its essential ideal if those in power are able to manipulate the electorate by withholding information and stifling criticism. Meiklejohn acknowledges that the desire to manipulate opinion can stem from the motive of seeking to benefit society. However, he argues, choosing manipulation negates, in its means, the democratic ideal. Eric Barendt has called this defence of free speech on the grounds of democracy "probably the most attractive and certainly the most fashionable free speech theory in modern Western democracies." Thomas I. Emerson expanded on this defence when he argued that freedom of speech helps to provide a balance between stability and change. Freedom of speech acts as a safety valve to let off steam when people might otherwise be bent on revolution. He argues that "The principle of open discussion is a method of achieving a more adaptable and at the same time more stable community, of maintaining the precarious balance between healthy cleavage and necessary consensus." Emerson furthermore maintains that "Opposition serves a vital social function in offsetting or ameliorating (the) normal process of bureaucratic decay." Research undertaken by the Worldwide Governance Indicators project at the World Bank indicates that freedom of speech, and the process of accountability that follows it, have a significant impact on the quality of governance of a country. "Voice and Accountability" within a country, defined as the extent to which a country's citizens are able to participate in selecting their government, as well as freedom of expression, freedom of association, and free media, is one of the six dimensions of governance that the Worldwide Governance Indicators measure for more than 200 countries.

Social interaction and community

Richard Moon has developed the argument that the value of freedom of speech and freedom of expression lies with social interactions. Moon writes that by communicating an individual forms relationships and associations with others: family, friends, co-workers, church congregation, and countrymen. By entering into discussion with others, an individual participates in the development of knowledge and in the direction of the community.

Limitations

According to the Freedom Forum Organization, legal systems and society at large recognize limits on the freedom of speech, particularly when freedom of speech conflicts with other values or rights. Limitations to freedom of speech may follow the harm principle or the offense principle, for example in the case of pornography or hate speech. Limitations to freedom of speech may occur through legal sanction or social disapprobation, or both. In On Liberty (1859) John Stuart Mill argued that "... there ought to exist the fullest liberty of professing and discussing, as a matter of ethical conviction, any doctrine, however immoral it may be considered." Mill argues that the fullest liberty of expression is required to push arguments to their logical limits, rather than the limits of social embarrassment. However, Mill also introduced what is known as the harm principle, in placing the following limitation on free expression: "the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others."

In 1985 Joel Feinberg introduced what is known as the offense principle, arguing that Mill's harm principle does not provide sufficient protection against the wrongful behaviours of others. Feinberg wrote: "It is always a good reason in support of a proposed criminal prohibition that it would probably be an effective way of preventing serious offense (as opposed to injury or harm) to persons other than the actor, and that it is probably a necessary means to that end." Hence Feinberg argues that the harm principle sets the bar too high and that some forms of expression can be legitimately prohibited by law because they are very offensive. But, as offending someone is less serious than harming someone, the penalties imposed should be higher for causing harm. In contrast, Mill does not support legal penalties unless they are based on the harm principle. Because the degree to which people may take offense varies, or may be the result of unjustified prejudice, Feinberg suggests that a number of factors need to be taken into account when applying the offense principle, including: the extent, duration and social value of the speech, the ease with which it can be avoided, the motives of the speaker, the number of people offended, the intensity of the offense, and the general interest of the community at large.

The Internet and Information Society

Jo Glanville, editor of the Index on Censorship, states that "the Internet has been a revolution for censorship as much as for free speech." International, national and regional standards recognise that freedom of speech, as one form of freedom of expression, applies to any medium, including the Internet. [8] The Communications Decency Act (CDA) of 1996 was the first major attempt by the United States Congress to regulate pornographic material on the Internet. In 1997, in the landmark cyberlaw case of Reno v. ACLU, the U.S. Supreme Court partially overturned the law. Judge Stewart R. Dalzell, one of the three federal judges who in June 1996 declared parts of the CDA unconstitutional, stated the following in his opinion: "The Internet is a far more speech-enhancing medium than print, the village green, or the mails. Because it would necessarily affect the Internet itself, the CDA would necessarily reduce the speech available for adults on the medium. This is a constitutionally intolerable result. Some of the dialogue on the Internet surely tests the limits of conventional discourse. Speech on the Internet can be unfiltered, unpolished, and unconventional, even emotionally charged, sexually explicit, and vulgar, in a word, indecent in many communities. But we should expect such speech to occur in a medium in which citizens from all walks of life have a voice. We should also protect the autonomy that such a medium confers to ordinary people as well as media magnates. [...] My analysis does not deprive the Government of all means of protecting children from the dangers of Internet communication."

Monday, November 25, 2019

The Only Place

By Mark Nichol

In spoken English, even the most careful speaker may casually misplace the modifier only in a sentence, as in "The meeting will only be held if the ordinance passes." The intended meaning here is that whether the ordinance passes determines whether the meeting is to be held. Literally, however, the sentence indicates that the holding is the only action that will pertain to the meeting if the ordinance passes. (The meeting will not be recorded or reported on, for example, but if the ordinance fails, perhaps these and other things will occur in relation to the meeting.) As the parenthesized sentence before this one demonstrates, the meaning changes depending on the positioning of only: In the original sentence, the meeting is not conditional on the passage of the ordinance, though that is exactly what the speaker or writer means. Therefore, to accurately convey the meaning of the sentence, the word only should appear immediately before the word, phrase, or clause it modifies.

Because a speaker can provide stress to the sentence, a listener is unlikely to misunderstand it, but a writer has no such advantage. Ambiguity is not inevitable when misplacing the modifier in written expression, but the careful writer makes the effort to produce a sentence that is syntactically sound. Here is another sentence best expressed with only positioned correctly: "Are we certain that participants only reside in those neighborhoods?" The significant point appears to be that the participants reside in those places but do not have any other connection to them. The intended meaning, however, is that the "we" are uncertain whether the participants live only in the specified locations and not in other places. This meaning is best conveyed by writing, "Are we certain participants reside only in those neighborhoods?"

Note the difference in meaning when inserting the modifier only in various positions in the sentence "Smith said he could only confirm that nine demonstrators were taken into custody."

1. Only Smith said he could confirm that nine demonstrators were taken into custody. (No one else made the claim that Smith could confirm the information.)
2. Smith only said he could confirm that nine demonstrators were taken into custody. (Smith didn't say anything else.)
3. Smith said only he could confirm that nine demonstrators were taken into custody. (According to Smith, he is the only one who could confirm the information.)
4. Smith said he only could confirm that nine demonstrators were taken into custody. (Smith could not do anything else but potentially confirm the information.)
5. Smith said he could only confirm that nine demonstrators were taken into custody. (Smith could confirm the information but do nothing else about or to it.)
6. Smith said he could confirm only that nine demonstrators were taken into custody. (Smith could confirm nothing else but the information he provided.)
7. Smith said he could confirm that only nine demonstrators were taken into custody. (Smith was able to specify that nine demonstrators, not another quantity, were arrested, with the implication that other demonstrators were not arrested.)
8. Smith said he could confirm that nine only demonstrators were taken into custody. (This sentence seems to indicate, albeit ungrammatically, that the people arrested were only demonstrators and had no other occupation or pastime.)
9. Smith said he could confirm that nine demonstrators only were taken into custody. (The number of demonstrators arrested was only nine; unlike version number 7, this sentence does not necessarily imply additional demonstrators.)
10. Smith said he could confirm that nine demonstrators were only taken into custody. (The demonstrators were taken into custody, but nothing else pertaining to custody occurred to them.)
11. Smith said he could confirm that nine demonstrators were taken only into custody. (The demonstrators were not taken anywhere except into custody.)
12. Smith said he could confirm that nine demonstrators were taken into only custody. (The demonstrators were taken into a type of custody known as "only custody"; like version number 8, this sentence is not grammatically correct.)
13. Smith said he could confirm that nine demonstrators were taken into custody only. (Nothing else was done to the demonstrators besides taking them into custody.)

The usual placement error for this type of sentence is "Smith said he could only confirm that nine demonstrators were taken into custody," where only erroneously precedes the pertinent verb, suggesting that confirmation is all that could be done. However, the point that the stated item of information, and only that item, could be confirmed is correctly stated thus: "Smith said he could confirm only that nine demonstrators were taken into custody." (Therefore, version 6 above is the only version suitable to express the intended meaning.)

Thursday, November 21, 2019

Questions and Answers Assignment Example | Topics and Well Written Essays - 250 words

Questions and Answers - Assignment Example

The misuse of hard liquor is itself a social harm, with additional economic costs as well as increased health consequences in the long term (Mathurin & Deltenre, 2009). For parents sending their teenagers off to college, the survey results are but a sobering reminder that not all activities in college are confined to slide rules and long hours at the library. Notably, such negative publicity informs public opinion, whether true or false, on the type of policies operationalized in such institutions. The study deviates a little from the survey rankings and so seeks to determine whether there is any relationship between a student's gender and binge drinking in such institutions of higher learning. To get the answers sought, the study will sample opinions of students on whether such negative partying has any relationship with one's gender. The study will utilize a sample size of 100 students, with each selected using simple random sampling
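As a rough illustration of the sampling step described above, here is a minimal Python sketch of simple random sampling. The roster of student IDs, the function name, and the seed are invented for this example and are not part of the assignment.

import random

def simple_random_sample(roster, n=100, seed=None):
    # Every student on the roster has an equal chance of being selected,
    # and selection is without replacement (the defining property of
    # simple random sampling).
    rng = random.Random(seed)
    return rng.sample(roster, n)

# Hypothetical enrollment roster of 5,000 student IDs.
roster = [f"student_{i:04d}" for i in range(1, 5001)]
sample = simple_random_sample(roster, n=100, seed=42)
print(len(sample), sample[:3])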

Wednesday, November 20, 2019

Older Adulthood Essay Example | Topics and Well Written Essays - 250 words

Older Adulthood - Essay Example

This means that some diseases and health defects associated with old age can be prevented or at least reduced if proper care is taken. I will advocate routine medical checkups for myself and my friends so that these many complications are avoided in future. For example, it has come to my attention that avoiding exposure to environmental toxins and other hazards, and getting faster access to healthcare, can prevent these complications. For a successful transition into old age, I will henceforth advise those entering old age to accept the change, be sociable, and also draw on spiritual faith. Older adults require religious support and spiritual nourishment to enable them to cope, as I have just realized. Another way of coping is to keep old and retired people busy through volunteering. This enables them to overcome the depression associated with old age, widowhood, or caring for a sick partner. Having family members visit the old is refreshing to them (Kail and Cavanaugh 519). Old people are sometimes neglected or abused. This is not good, as they require as little stress as possible. I have a changed opinion about old age, and will from now on emphasize love and care for older people. From this class, I have learnt better ways of caring for and helping older people. I have also gained an insight that can enable me to advise old people toward a successful transition into old age. Plus, it has changed my view on old age. Old is

Monday, November 18, 2019

Access and Availability of dental care to Paediatric special needs Research Proposal

Access and Availability of dental care to Paediatric special needs patients in Saudi Arabia - Research Proposal Example

The general oral hygiene of these children was also found to be very poor (Al-Banyan, Echeverri, Narendran and Keene, 2000, p.43). This study had recommended that an oral health program be implemented through the educational institutions (Al-Banyan, Echeverri, Narendran and Keene, 2000, p.43). Further studies to identify the specific factors leading to the problem were also suggested by this study (Al-Banyan, Echeverri, Narendran and Keene, 2000, p.43). Al-Malik and Rehbini (2006) have also pointed to the high level of caries in children in Saudi Arabia as compared to children in other countries and called for immediate intervention through school intervention programs. Especially when it comes to special needs patients such as children with autism, the negative behavior of such children towards treatment, or similar problems in the case of other special needs groups, will limit their access to and the availability of dental care and treatment (Murshid, 2005). It is these existing research highlights, along with the high prevalence of oral problems witnessed while working with special needs children, that prompted this researcher to explore this area of study further. Though the research works mentioned above have shown that there is a high level of oral health problems in children with special needs in Saudi Arabia, none of them has shed light on the extent to which this problem gets addressed. It is not revealed whether these children have access to, and availability of, appropriate and efficient dental care in the present context in Saudi Arabia. Only by assessing the existing lacuna can further steps be taken to address the oral health needs of these special needs children. Hence this researcher is of the belief that this study on the access and availability of dental care to special needs children in Saudi Arabia can add valuable

Saturday, November 16, 2019

Implementation of New Computer Network

Here we are going to implement a new computer network for this company, in which 25 employees work. Suppose you want to build a computer network, one that has the potential to grow to global proportions and to support applications as diverse as teleconferencing, video-on-demand, electronic commerce, distributed computing, and digital libraries. What available technologies would serve as the underlying building blocks, and what kind of software architecture would you design to integrate these building blocks into an effective communication service? Answering this question is the overriding goal of this book — to describe the available building materials and then to show how they can be used to construct a network from the ground up.

Before we can understand how to design a computer network, we should first agree on exactly what a computer network is. At one time, the term network meant the set of serial lines used to attach dumb terminals to mainframe computers. To some, the term implies the voice telephone network. To others, the only interesting network is the cable network used to disseminate video signals. The main thing these networks have in common is that they are specialized to handle one particular kind of data (keystrokes, voice, or video) and they typically connect to special-purpose devices (terminals, hand receivers, and television sets). What distinguishes a computer network from these other types of networks? Probably the most important characteristic of a computer network is its generality. Computer networks are built primarily from general-purpose programmable hardware, and they are not optimized for a particular application like making phone calls or delivering television signals. Instead, they are able to carry many different types of data, and they support a wide, and ever-growing, range of applications. This chapter looks at some typical applications of computer networks and discusses the requirements that a network designer who wishes to support such applications must be aware of.

Once we understand the requirements, how do we proceed? Fortunately, we will not be building the first network. Others, most notably the community of researchers responsible for the Internet, have gone before us. We will use the wealth of experience generated from the Internet to guide our design. This experience is embodied in a network architecture that identifies the available hardware and software components and shows how they can be arranged to form a complete network system. To start us on the road toward understanding how to build a network, this chapter does four things. First, it explores the requirements that different applications and different communities of people (such as network users and network operators) place on the network. Second, it introduces the idea of a network architecture, which lays the foundation for the rest of the book. Third, it introduces some of the key elements in the implementation of computer networks. Finally, it identifies the key metrics that are used to evaluate the performance of computer networks.

1.1 APPLICATIONS

Most people know the Internet through its applications: the World Wide Web, email, streaming audio and video, chat rooms, and music (file) sharing. The Web, for example, presents an intuitively simple interface. Users view pages full of textual and graphical objects, click on objects that they want to learn more about, and a corresponding new page appears. Most people are also aware that just under the covers, each selectable object on a page is bound to an identifier for the next page to be viewed. This identifier, called a Uniform Resource Locator (URL), is used to provide a way of identifying all the possible pages that can be viewed from your web browser. For example, http://www.cs.princeton.edu/~llp/index.html is the URL for a page providing information about one of this book's authors: the string http indicates that the HyperText Transfer Protocol (HTTP) should be used to download the page, www.cs.princeton.edu is the name of the machine that serves the page, and /~llp/index.html uniquely identifies Larry's home page at this site. What most Web users are not aware of, however, is that by clicking on just one such URL, as many as 17 messages may be exchanged over the Internet, and this assumes the page itself is small enough to fit in a single message. This number includes up to six messages to translate the server name (www.cs.princeton.edu) into its Internet address (128.112.136.35), three messages to set up a Transmission Control Protocol (TCP) connection between your browser and this server, four messages for your browser to send the HTTP get request and the server to respond with the requested page (and for each side to acknowledge receipt of that message), and four messages to tear down the TCP connection. Of course, this does not include the millions of messages exchanged by Internet nodes throughout the day, just to let each other know that they exist and are ready to serve web pages, translate names to addresses, and forward messages toward their ultimate destination.

Another widespread application of the Internet is the delivery of streaming audio and video. While an entire video file could first be fetched from a remote machine and then played on the local machine, similar to the process of downloading and displaying a web page, this would entail waiting for the last second of the video file to be delivered before starting to look at it. Streaming video implies that the sender and the receiver are, respectively, the source and the sink for the video stream. That is, the source generates a video stream (perhaps using a video capture card), sends it across the Internet in messages, and the sink displays the stream as it arrives. There are a variety of different classes of video applications. One class of video application is video-on-demand, which reads a pre-existing movie from disk and transmits it over the network. Another kind of application is videoconferencing, which is in some ways the more challenging (and, for networking people, interesting) case because it has very tight timing constraints. Just as when using the telephone, the interactions among the participants must be timely. When a person at one end gestures, then that action must be displayed at the other end as quickly as possible. Too much delay makes the system unusable. Contrast this with video-on-demand where, if it takes several seconds from the time the user starts the video until the first image is displayed, the service is still deemed satisfactory. Also, interactive video usually implies that video is flowing in both directions, while a video-on-demand application is most likely sending video in only one direction. One pioneering example of a videoconferencing tool, developed in the early and mid-1990s, is vic. vic is actually one of a suite of conferencing tools designed at Lawrence Berkeley Laboratory and UC Berkeley. The others include a whiteboard application (wb) that allows users to send sketches and slides to each other, a visual audio tool called vat, and a session directory (sdr) that is used to create and advertise videoconferences. All these tools run on Unix—hence their lowercase names—and are freely available on the Internet. Many similar tools are available for other operating systems. It is interesting to note that while video over the Internet is still considered to be in its relative infancy at the time of this writing (2006), the tools to support video over IP have existed for well over a decade. Although they are just two examples, downloading pages from the Web and participating in a videoconference demonstrate the diversity of applications that can be built on top of the Internet, and hint at the complexity of the Internet's design. Starting from the beginning, and addressing one problem at a time, the rest of this book explains how to build a network that supports such a wide range of applications. Chapter 9 concludes the book by revisiting these two specific applications, as well as several others that have become popular on today's Internet.

1.2 REQUIREMENTS

We have just established an ambitious goal for ourselves: to understand how to build a computer network from the ground up. Our approach to accomplishing this goal will be to start from first principles, and then ask the kinds of questions we would naturally ask if building an actual network. At each step, we will use today's protocols to illustrate various design choices available to us, but we will not accept these existing artifacts as gospel. Instead, we will be asking (and answering) the question of why networks are designed the way they are. While it is tempting to settle for just understanding the way it's done today, it is important to recognize the underlying concepts because networks are constantly changing as the technology evolves and new applications are invented. It is our experience that once you understand the fundamental ideas, any new protocol that you are confronted with will be relatively easy to digest. The first step is to identify the set of constraints and requirements that influence network design. Before getting started, however, it is important to understand that the expectations you have of a network depend on your perspective: An application programmer would list the services that his application needs, for example, a guarantee that each message the application sends will be delivered without error within a certain amount of time. A network designer would list the properties of a cost-effective design, for example, that network resources are efficiently utilized and fairly allocated to different users. A network provider would list the characteristics of a system that is easy to administer and manage, for example, in which faults can be easily isolated and where it is easy to account for usage. This section attempts to distill these different perspectives into a high-level introduction to the major considerations that drive network design, and in doing so, identifies the challenges addressed throughout the rest of this book.

1.2.1 Connectivity

Starting with the obvious, a network must provide connectivity among a set of computers. Sometimes it is enough to build a limited network that connects only a few select machines. In fact, for reasons of privacy and security, many private (corporate) networks have the explicit goal of limiting the set of machines that are connected. In contrast, other networks (of which the Internet is the prime example) are designed to grow in a way that allows them the potential to connect all the computers in the world. A system that is designed to support growth to an arbitrarily large size is said to scale. Using the Internet as a model, this book addresses the challenge of scalability.

Links, Nodes, and Clouds

Network connectivity occurs at many different levels. At the lowest level, a network can consist of two or more computers directly connected by some physical medium, such as a coaxial cable or an optical fiber. We call such a physical medium a link, and we often refer to the computers it connects as nodes. (Sometimes a node is a more specialized piece of hardware rather than a computer, but we overlook that distinction for the purposes of this discussion.) Physical links are sometimes limited to a pair of nodes (such a link is said to be point-to-point), while in other cases, more than two nodes may share a single physical link (such a link is said to be multiple-access). Whether a given link supports point-to-point or multiple-access connectivity depends on how the node is attached to the link. It is also the case that multiple-access links are often limited in size, in terms of both the geographical distance they can cover and the number of nodes they can connect. If computer networks were limited to situations in which all nodes are directly connected to each other over a common physical medium, then networks would either be very limited in the number of computers they could connect, or the number of wires coming out of the back of each node would quickly become both unmanageable and very expensive. Fortunately, connectivity between two nodes does not necessarily imply a direct physical connection between them—indirect connectivity may be achieved among a set of cooperating nodes. Consider the following two examples of how a collection of computers can be indirectly connected. The first is a set of nodes, each of which is attached to one or more point-to-point links. Those nodes that are attached to at least two links run software that forwards data received on one link out on another. If organized in a systematic way, these forwarding nodes form a switched network. There are numerous types of switched networks, of which the two most common are circuit-switched and packet-switched. The former is most notably employed by the telephone system, while the latter is used for the overwhelming majority of computer networks and will be the focus of this book. The important feature of packet-switched networks is that the nodes in such a network send discrete blocks of data to each other. Think of these blocks of data as corresponding to some piece of application data such as a file, a piece of email, or an image.
We call each block of data either a packet or a message, and for now we use these terms interchangeably; we discuss the reason they are not always the same in Section 1.2.2. Packet-switched networks typically use a strategy called store-and-forward. As the name suggests, each node in a store-and-forward network first receives a complete packet over some link, stores the packet in its internal memory, and then forwards the complete packet to the next node. In contrast, a circuit-switched network first establishes a dedicated circuit across a sequence of links and then allows the source node to send a stream of bits across this circuit to a destination node. The major reason for using packet switching rather than circuit switching in a computer network is efficiency, discussed in the next subsection.

The cloud icon distinguishes between the nodes on the inside that implement the network (they are commonly called switches, and their primary function is to store and forward packets) and the nodes on the outside of the cloud that use the network (they are commonly called hosts, and they support users and run application programs). Also note that the cloud is one of the most important icons of computer networking. In general, we use a cloud to denote any type of network, whether it is a single point-to-point link, a multiple-access link, or a switched network. Thus, whenever you see a cloud used in a figure, you can think of it as a placeholder for any of the networking technologies covered in this book.

In the second example, a set of independent networks (clouds) are interconnected to form an internetwork, or internet for short. We adopt the Internet's convention of referring to a generic internetwork of networks as a lowercase i internet, and the currently operational TCP/IP Internet as the capital I Internet. A node that is connected to two or more networks is commonly called a router or gateway, and it plays much the same role as a switch—it forwards messages from one network to another. Note that an internet can itself be viewed as another kind of network, which means that an internet can be built from an interconnection of internets. Thus, we can recursively build arbitrarily large networks by interconnecting clouds to form larger clouds.

Just because a set of hosts are directly or indirectly connected to each other does not mean that we have succeeded in providing host-to-host connectivity. The final requirement is that each node must be able to state which of the other nodes on the network it wants to communicate with. This is done by assigning an address to each node. An address is a byte string that identifies a node; that is, the network can use a node's address to distinguish it from the other nodes connected to the network. When a source node wants the network to deliver a message to a certain destination node, it specifies the address of the destination node. If the sending and receiving nodes are not directly connected, then the switches and routers of the network use this address to decide how to forward the message toward the destination. The process of determining systematically how to forward messages toward the destination node based on its address is called routing. This brief introduction to addressing and routing has presumed that the source node wants to send a message to a single destination node (unicast).
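The store-and-forward and addressing ideas just described can be made concrete with a short sketch. This is a simplified illustration, not code from the text; the names Packet, Link, Switch, and the forwarding-table layout are invented for this example. A switch receives a complete packet on its inbound link, buffers it, looks the destination address up in a forwarding table, and sends the packet out on the chosen link.

import queue
from dataclasses import dataclass

@dataclass
class Packet:
    dest: str        # destination address (a byte string in the text; a str here)
    payload: bytes

class Link:
    """A point-to-point link modeled as a simple FIFO queue of packets."""
    def __init__(self):
        self._q = queue.Queue()

    def send(self, packet):
        self._q.put(packet)

    def receive(self):
        return self._q.get()

class Switch:
    """Store-and-forward switch: receive a whole packet, buffer it, then
    forward it on the outgoing link chosen by the destination address."""
    def __init__(self, inbound, forwarding_table):
        self.inbound = inbound                     # link packets arrive on
        self.forwarding_table = forwarding_table   # address -> outgoing Link

    def step(self):
        packet = self.inbound.receive()                  # 1. receive the complete packet
        buffered = packet                                # 2. store it in internal memory
        out_link = self.forwarding_table[buffered.dest]  # 3. routing decision by address
        out_link.send(buffered)                          # 4. forward toward the destination

# Usage: hosts R1 and R2 are reachable through a single switch.
to_switch, to_r1, to_r2 = Link(), Link(), Link()
switch = Switch(to_switch, {"R1": to_r1, "R2": to_r2})
to_switch.send(Packet(dest="R2", payload=b"hello"))
switch.step()
print(to_r2.receive().payload)   # b'hello'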
While this is the most common scenario, it is also possible that the source node might want to broadcast a message to all the nodes on the network. Or a source node might want to send a message to some subset of the other nodes, but not all of them, a situation called multicast. Thus, in addition to node-specific addresses, another requirement of a network is that it supports multicast and broadcast addresses. The main idea to take away from this discussion is that we can define a network recursively as consisting of two or more nodes connected by a physical link, or as two or more networks connected by a node. In other words, a network can be constructed from a nesting of networks, where at the bottom level, the network is implemented by some physical medium. One of the key challenges in providing network connectivity is to define an address for each node that is reachable on the network (including support for broadcast and multicast connectivity), and to be able to use this address to route messages toward the appropriate destination node(s).

1.2.2 Cost-Effective Resource Sharing

As stated above, this book focuses on packet-switched networks. This section explains the key requirement of computer networks—efficiency—that leads us to packet switching as the strategy of choice. Given a collection of nodes indirectly connected by a nesting of networks, it is possible for any pair of hosts to send messages to each other across a sequence of links and nodes. Of course, we want to do more than support just one pair of communicating hosts—we want to provide all pairs of hosts with the ability to exchange messages. The question, then, is how do all the hosts that want to communicate share the network, especially if they want to use it at the same time? And, as if that problem isn't hard enough, how do several hosts share the same link when they all want to use it at the same time?

To understand how hosts share a network, we need to introduce a fundamental concept, multiplexing, which means that a system resource is shared among multiple users. At an intuitive level, multiplexing can be explained by analogy to a timesharing computer system, where a single physical CPU is shared (multiplexed) among multiple jobs, each of which believes it has its own private processor. Similarly, data being sent by multiple users can be multiplexed over the physical links that make up a network. To see how this might work, consider a simple network in which the three hosts on the left side of the network (senders S1–S3) are sending data to the three hosts on the right (receivers R1–R3) by sharing a switched network that contains only one physical link. (For simplicity, assume that host S1 is sending data to host R1, and so on.) In this situation, three flows of data—corresponding to the three pairs of hosts—are multiplexed onto a single physical link by switch 1 and then demultiplexed back into separate flows by switch 2. Note that we are being intentionally vague about exactly what a flow of data corresponds to. For the purposes of this discussion, assume that each host on the left has a large supply of data that it wants to send to its counterpart on the right.

There are several different methods for multiplexing multiple flows onto one physical link. One common method is synchronous time-division multiplexing (STDM). The idea of STDM is to divide time into equal-sized quanta and, in a round-robin fashion, give each flow a chance to send its data over the physical link. In other words, during time quantum 1, data from S1 to R1 is transmitted; during time quantum 2, data from S2 to R2 is transmitted; in quantum 3, S3 sends data to R3. At this point, the first flow (S1 to R1) gets to go again, and the process repeats. Another method is frequency-division multiplexing (FDM). The idea of FDM is to transmit each flow over the physical link at a different frequency, much the same way that the signals for different TV stations are transmitted at a different frequency on a physical cable TV link.

Although simple to understand, both STDM and FDM are limited in two ways. First, if one of the flows (host pairs) does not have any data to send, its share of the physical link—that is, its time quantum or its frequency—remains idle, even if one of the other flows has data to transmit. For example, S3 had to wait its turn behind S1 and S2 in the previous paragraph, even if S1 and S2 had nothing to send. For computer communication, the amount of time that a link is idle can be very large—for example, consider the amount of time you spend reading a web page (leaving the link idle) compared to the time you spend fetching the page. Second, both STDM and FDM are limited to situations in which the maximum number of flows is fixed and known ahead of time. It is not practical to resize the quantum or to add additional quanta in the case of STDM or to add new frequencies in the case of FDM.

The form of multiplexing that we make most use of in this book is called statistical multiplexing. Although the name is not all that helpful for understanding the concept, statistical multiplexing is really quite simple, with two key ideas. First, it is like STDM in that the physical link is shared over time—first data from one flow is transmitted over the physical link, then data from another flow is transmitted, and so on. Unlike STDM, however, data is transmitted from each flow on demand rather than during a predetermined time slot. Thus, if only one flow has data to send, it gets to transmit that data without waiting for its quantum to come around and thus without having to watch the quanta assigned to the other flows go by unused. It is this avoidance of idle time that gives packet switching its efficiency.

As defined so far, however, statistical multiplexing has no mechanism to ensure that all the flows eventually get their turn to transmit over the physical link. That is, once a flow begins sending data, we need some way to limit the transmission, so that the other flows can have a turn. To account for this need, statistical multiplexing defines an upper bound on the size of the block of data that each flow is permitted to transmit at a given time. This limited-size block of data is typically referred to as a packet, to distinguish it from the arbitrarily large message that an application program might want to transmit. Because a packet-switched network limits the maximum size of packets, a host may not be able to send a complete message in one packet. The source may need to fragment the message into several packets, with the receiver reassembling the packets back into the original message. In other words, each flow sends a sequence of packets over the physical link, with a decision made on a packet-by-packet basis as to which flow's packet to send next. Notice that if only one flow has data to send, then it can send a sequence of packets back-to-back. However, should more than one of the flows have data to send, then their packets are interleaved on the link.
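A minimal Python sketch of the two ideas just described (fragmentation into limited-size packets, and transmission on demand) follows. The packet size limit and the round-robin service order over non-empty flows are choices made for this illustration, not values prescribed by the text.

from collections import deque

MAX_PACKET = 4   # upper bound on packet size, chosen arbitrarily for the sketch

def fragment(message: bytes):
    # Split an arbitrarily large message into packets no larger than MAX_PACKET;
    # the receiver would reassemble them in order.
    return [message[i:i + MAX_PACKET] for i in range(0, len(message), MAX_PACKET)]

def statistically_multiplex(flows):
    # Interleave packets from the flows that currently have data to send.
    # Unlike STDM, a flow with nothing queued is simply skipped, so the link
    # never sits idle while another flow is waiting.
    queues = [deque(fragment(message)) for message in flows]
    on_link = []
    while any(queues):
        for q in queues:
            if q:                         # transmit on demand, not in a fixed slot
                on_link.append(q.popleft())
    return on_link

# Three senders share one link; the third has nothing to send.
print(statistically_multiplex([b"AAAAAAAA", b"BBBB", b""]))
# -> [b'AAAA', b'BBBB', b'AAAA']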
A switch in such a network multiplexes packets from multiple sources onto a single shared link. The decision as to which packet to send next on a shared link can be made in a number of different ways. For example, in a network consisting of switches interconnected by links, the decision would be made by the switch that transmits packets onto the shared link. (As we will see later, not all packet-switched networks actually involve switches, and they may use other mechanisms to determine whose packet goes onto the link next.) Each switch in a packet-switched network makes this decision independently, on a packet-by-packet basis. One of the issues that faces a network designer is how to make this decision in a fair manner. For example, a switch could be designed to service packets on a first-in-first-out (FIFO) basis. Another approach would be to transmit the packets from each of the different flows that are currently sending data through the switch in a round-robin manner. This might be done to ensure that certain flows receive a particular share of the link's bandwidth, or that they never have their packets delayed in the switch for more than a certain length of time. A network that attempts to allocate bandwidth to particular flows is sometimes said to support quality of service (QoS), a topic that we return to in Chapter 6. Also, notice that since the switch has to multiplex three incoming packet streams onto one outgoing link, it is possible that the switch will receive packets faster than the shared link can accommodate. In this case, the switch is forced to buffer these packets in its memory. Should a switch receive packets faster than it can send them for an extended period of time, then the switch will eventually run out of buffer space, and some packets will have to be dropped. When a switch is operating in this state, it is said to be congested. The bottom line is that statistical multiplexing defines a cost-effective way for multiple users (e.g., host-to-host flows of data) to share network resources (links and nodes) in a fine-grained manner. It defines the packet as the granularity with which the links of the network are allocated to different flows, with each switch able to schedule the use of the physical links it is connected to on a per-packet basis. Fairly allocating link capacity to different flows and dealing with congestion when it occurs are the key challenges of statistical multiplexing.

1.2.3 Support for Common Services

While the previous section outlined the challenges involved in providing cost-effective connectivity among a group of hosts, it is overly simplistic to view a computer network as simply delivering packets among a collection of computers. It is more accurate to think of a network as providing the means for a set of application processes that are distributed over those computers to communicate. In other words, the next requirement of a computer network is that the application programs running on the hosts connected to the network must be able to communicate in a meaningful way. When two application programs need to communicate with each other, there are a lot of complicated things that need to happen beyond simply sending a message from one host to another. One option would be for application designers to build all that complicated functionality into each application program. However, since many applications need common services, it is much more logical to implement those common services once and then to let the application designer build the application using those services. The challenge for a network designer is to identify the right set of common services. The goal is to hide the complexity of the network from the application without overly constraining the application designer. Intuitively, we view the network as providing logical channels over which application-level processes can communicate with each other; each channel provides the set of services required by that application. In other words, just as we use a cloud to abstractly represent connectivity among a set of computers, we now think of a channel as connecting one process to another. Imagine a pair of application-level processes communicating over a logical channel that is, in turn, implemented on top of a cloud that connects a set of hosts. We can think of the channel as being like a pipe connecting two applications, so that a sending application can put data in one end and expect that data to be delivered by the network to the application at the other end of the pipe. The challenge is to recognize what functionality the channels should provide to application programs. For example, does the application require a guarantee that messages sent over the channel are delivered, or is it acceptable if some messages fail to arrive? Is it necessary that messages arrive at the recipient process in the same order in which they are sent, or does the recipient not care about the order in which messages arrive? Does the network need to ensure that no third parties are able to eavesdrop on the channel, or is privacy not a concern? In general, a network provides a variety of different types of channels, with each application selecting the type that best meets its needs. The rest of this section illustrates the thinking involved in defining useful channels.

Identifying Common Communication Patterns

Designing abstract channels involves first understanding the communication needs of a representative collection of applications, then extracting their common communication requirements, and finally incorporating the functionality that meets these requirements in the network.
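To make the idea of a channel concrete, here is a small hypothetical sketch in Python. The class names and the shortcut of an in-memory queue are assumptions of this illustration, not an interface defined by the text: a channel is treated as a process-to-process pipe, and different channel types commit to different service guarantees such as reliable, in-order delivery.

from abc import ABC, abstractmethod
from collections import deque

class Channel(ABC):
    """A logical pipe between two application processes; subclasses decide
    which guarantees (delivery, ordering, privacy) the channel provides."""

    @abstractmethod
    def send(self, message: bytes) -> None: ...

    @abstractmethod
    def receive(self) -> bytes: ...

class ReliableOrderedChannel(Channel):
    """A channel that delivers every message exactly once, in the order sent,
    which is the kind of guarantee a file-transfer application would request.
    A real implementation would use sequence numbers, acknowledgments, and
    retransmissions; an in-memory queue stands in for all of that here."""

    def __init__(self):
        self._in_transit = deque()

    def send(self, message: bytes) -> None:
        self._in_transit.append(message)

    def receive(self) -> bytes:
        return self._in_transit.popleft()

# Usage: the sending process writes into one end, the receiver reads the other.
channel = ReliableOrderedChannel()
channel.send(b"request.html")
print(channel.receive())   # b'request.html'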
Answering this question is the overriding goal of — to describe the available building materials and then to show how they can be used to construct a network from the ground up. Before we can understand how to design a computer network, we should first agree on exactly what a computer network is. At one time, the term network meant the set of serial lines used to attach dumb terminals to mainframe computers. To some, the term implies the voice telephone network. To others, the only interesting network is the cable network used to disseminate video signals. The main thing these networks have in common is that they are specialized to handle one particular kind of data (keystrokes, voice, or video) and they typically connect to special-purpose devices (terminals, hand receivers, and television sets). What distinguishes a computer network from these other types of networks? Probably the most important characteristic of a computer network is its generality. Computer networks are built primarily from general-purpose programmable hardware, and they are not optimized for a particular application like making phone calls or delivering television signals. Instead, they are able to carry many different types of data, and they support a wide, and ever-growing, range of applications. This chapter looks at some typical applications of computer networks and discusses the requirements that a network designer who wishes to support such applications must be aware of. Once we understand the requirements, how do we proceed? Fortunately, we will not be building the first network. Others, most notably the community of researchers responsible for the Internet, have gone before us. We will use the wealth of experience generated from the Internet to guide our design. This experience is embodied in a network architecture that identifies the available hardware and software components and shows how they can be arranged to form a complete network system. To start us on the road toward understanding how to build a network, this chapter does four things. First, it explores the requirements that different applications and different communities of people (such as network users and network operators) place on the network. Second, it introduces the idea of a network architecture, which lays the foundation for the rest of the book. Third, it introduces some of the key elements in the implementation of computer networks. Finally, it identifies the key metrics that are used to evaluate the performance of computer networks. 1.1 APPLICATIONS Most people know the Internet through its applications: the World Wide Web, email, streaming audio and video, chat rooms, and music (file) sharing. The Web, for example, presents an intuitively simple interface. Users view pages full of textual and graphical objects, click on objects that they want to learn more about, and a corresponding new page appears. Most people are also aware that just under the covers, each selectable object on a page is bound to an identifier for the next page to be viewed. This identifier, called a Uniform Resource Locator (URL), is used to provide a way of identifying all the possible pages that can be viewed from your web browser. 
For example, http://www.cs.princeton.edu/~llp/index.html is the URL for a page providing information about one of this books authors: the string http indicates that the HyperText Transfer Protocol (HTTP) should be used to download the page, www.cs.princeton.edu is the name of the machine that serves the page, and /~llp/index.html uniquely identifies Larrys home page at this site. What most Web users are not aware of, however, is that by clicking on just one such URL, as many as 17 messages may be exchanged over the Internet, and this assumes the page itself is small enough to fit in a single message. This number includes up to six messages to translate the server name (www.cs.princeton.edu) into its Internet address (128.112.136.35), three messages to set up a Transmission Control Protocol (TCP) connection between your browser and this server, four messages for your browser to send the HTTP get request and the server to respond with the requested page (and for each side to acknowledge receipt of that message), and four messages to tear down the TCP connection. Of course, this does not include the millions of messages exchanged by Internet nodes throughout the day, just to let each other know that they exist and are ready to serve web pages, translate names to addresses, and forward messages toward their ultim ate destination. Another widespread application of the Internet is the delivery of streaming audio and video. While an entire video file could first be fetched from a remote machine and then played on the local machine, similar to the process of downloading and displaying a web page, this would entail waiting for the last second of the video file to be delivered before starting to look at it. Streaming video implies that the sender and the receiver are, respectively, the source and the sink for the video stream. That is, the source generates a video stream (perhaps using a video capture card), sends it across the Internet in messages, and the sink displays the stream as it arrives. There are a variety of different classes of video applications. One class of video application is video-on-demand, which reads a pre-existing movie from disk and transmits it over the network. Another kind of application is videoconferencing, which is in some ways the more challenging (and, for networking people, interesting) case because it has very tight timing constraints. Just as when using the telephone, the interactions among the participants must be timely. When a person at one end gestures, then that action must be displayed at the other end as quickly as possible. Too much delay makes the system unusable. Contrast this with video-on-demand where, if it takes several seconds from the time the user starts the video until the first image is displayed, the service is still deemed satisfactory. Also, interactive video usually implies that video is flowing in both directions, while a video-on-demand application is most likely sending video in only one direction. One pioneering example of a videoconferencing tool, developed in the early and mid-1990s, is vic. shows the control panel for a vic session. vic is actually one of a suite of conferencing tools designed at Lawrence Berkeley Laboratory and UC Berkeley. The others include a whiteboard application (wb) that allows users to send sketches and slides to each other, a visual audio tool called vat, and a session directory (sdr) that is used to create and advertise videoconferences. 
All these tools run on Unix—hence their lowercase names—and are freely available on the Internet. Many similar tools are available for other operating systems. It is interesting to note that while video over the Internet is still considered to be in its relative infancy at the time of this writing (2006), that the tools to support video over IP have existed for well over a decade. Although they are just two examples, downloading pages from the Web and participating in a videoconference demonstrate the diversity of applications that can be built on top of the Internet, and hint at the complexity of the Internets design. Starting from the beginning, and addressing one problem at time, the rest of this book explains how to build a network that supports such a wide range of applications. Chapter 9 concludes the book by revisiting these two specific applications, as well as several others that have become popular on todays Internet. 1.2 REQUIREMENTS We have just established an ambitious goal for ourselves: to understand how to build a computer network from the ground up. Our approach to accomplishing this goal will be to start from first principles, and then ask the kinds of questions we would naturally ask if building an actual network. At each step, we will use todays protocols to illustrate various design choices available to us, but we will not accept these existing artifacts as gospel. Instead, we will be asking (and answering) the question of why networks are designed the way they are. While it is tempting to settle for just understanding the way its done today, it is important to recognize the underlying concepts because networks are constantly changing as the technology evolves and new applications are invented. It is our experience that once you understand the fundamental ideas, any new protocol that you are confronted with will be relatively easy to digest. The first step is to identify the set of constraints and requirements that influence network design. Before getting started, however, it is important to understand that the expectations you have of a network depend on your perspective: An application programmer would list the services that his application needs, for example, a guarantee that each message the application sends will be delivered without error within a certain amount of time. A network designer would list the properties of a cost-effective design, for example, that network resources are efficiently utilized and fairly allocated to different users. A network provider would list the characteristics of a system that is easy to administer and manage, for example, in which faults can be easily isolated and whereitiseasytoaccountfor usage. This section attempts to distill these different perspectives into a high-level introduction to the major considerations that drive network design, and in doing so, identifies the challenges addressed throughout the rest of this book. 1.2.1 Connectivity Starting with the obvious, a network must provide connectivity among a set of computers. Sometimes it is enough to build a limited network that connects only a few select machines. In fact, for reasons of privacy and security, many private (corporate) networks have the explicit goal of limiting the set of machines that are connected. In contrast, other networks (of which the Internet is the prime example) are designed to grow in a way that allows them the potential to connect all the computers in the world. 
A system that is designed to support growth to an arbitrarily large size is said to scale. Using the Internet as a model, this book addresses the challenge of scalability. Links, Nodes, and Clouds Network connectivity occurs at many different levels. At the lowest level, a network can consist of two or more computers directly connected by some physical medium, such as a coaxial cable or an optical fiber. We call such a physical medium a link,and we often refer to the computers it connects as nodes. (Sometimes a node is a more specialized piece of hardware rather than a computer, but we overlook that distinction for the purposes of this discussion.) As illustrated in, physical links are sometimes limited to a pair of nodes (such a link is said to be point-to-point), while in other cases, more than two nodes may share a single physical link (such a link is said to be multiple-access). Whether a given link supports point-to-point or multiple-access connectivity depends on how the node is attached to the link. It is also the case that multiple-access links are often limited in size, in terms of both the geographical distance they can cover and the number of nodes they can connect. If computer networks were limited to situations in which all nodes are directly connected to each other over a common physical medium, then networks would either be very limited in the number of computers they could connect, or the number of wires coming out of the back of each node would quickly become both unmanageable and very expensive. Fortunately, connectivity between two nodes does not necessarily imply a direct physical connection between them—indirect connectivity may be achieved among a set of cooperating nodes. Consider the following two examples of how a collection of computers can be indirectly connected. shows a set of nodes, each of which is attached to one or more point- to-point links. Those nodes that are attached to at least two links run software that forwards data received on one link out on another. If organized in a systematic way, these forwarding nodes form a switched network. There are numerous types of switched networks, of which the two most common are circuit-switched and packet-switched. The former is most notably employed by the telephone system, while the latter is used for the overwhelming majority of computer networks and will be the focus of this book. The important feature of packet-switched networks is that the nodes in such a network send discrete blocks of data to each other. Think of these blocks of data as corresponding to some piece of application data such as a file, a piece of email, or an image. We call each block of data either a packet or a message, and for now we use these terms interchangeably; we discuss the reason they are not always the same in Section 1.2.2. Packet-switched networks typically use a strategy called store-and-forward. As the name suggests, each node in a store-and-forward network first receives a complete packet over some link, stores the packet in its internal memory, and then forwards the complete packet to the next node. In contrast, a circuit-switched network first establishes a dedicated circuit across a sequence of links and then allows the source node to send a stream of bits across this circuit to a destination node. The major reason for using packet switching rather than circuit switching in a computer network is efficiency, discussed in the next subsection. 
The cloud in distinguishes between the nodes on the inside that implement the network (they are commonly called switches, and their primary function is to store and forward packets) and the nodes on the outside of the cloud that use the network (they are commonly called hosts, and they support users and run application programs). Also note that the cloud in is one of the most important icons of computer networking. In general, we use a cloud to denote any type of network, whether it is a single point-to-point link, a multiple-access link, or a switched network. Thus, whenever you see a cloud used in a figure, you can think of it as a placeholder for any of the networking technologies covered in this book. A second way in which a set of computers can be indirectly connected is shown in . In this situation, a set of independent networks (clouds) are interconnected to form an internetwork, or internet for short. We adopt the Internets convention of referring to a generic internetwork of networks as a lowercase i internet, and the currently operational TCP/IP Internet as the capital I Internet. A node that is connected to two or more networks is commonly called a router or gateway, and it plays much the same role as a switch—it forwards messages from one network to another. Note that an internet can itself be viewed as another kind of network, which means that an internet can be built from an interconnection of internets. Thus, we can recursively build arbitrarily large networks by interconnecting clouds to form larger clouds. Just because a set of hosts are directly or indirectly connected to each other does not mean that we have succeeded in providing host-to-host connectivity. The final requirement is that each node must be able to state which of the other nodes on the network it wants to communicate with. This is done by assigning an address to each node. An address is a byte string that identifies a node; that is, the network can use a nodes address to distinguish it from the other nodes connected to the network. When a source node wants the network to deliver a message to a certain destination node, it specifies the address of the destination node. If the sending and receiving nodes are not directly connected, then the switches and routers of the network use this address to decide how to forward the message toward the destination. The process of determining systematically how to forward messages toward the destination node based on its address is called routing. This brief introduction to addressing and routing has presumed that the source node wants to send a message to a single destination node (unicast). While this is the most common scenario, it is also possible that the source node might want to broadcast a message to all the nodes on the network. Or a source node might want to send a message to some subset of the other nodes, but not all of them, a situation called multicast. Thus, in addition to node-specific addresses, another requirement of a network is that it supports multicast and broadcast addresses. The main idea to take away from this discussion is that we can define a network recursively as consisting of two or more nodes connected by a physical link, or as two or more networks connected by a node. In other words, a network can be constructed from a nesting of networks, where at the bottom level, the network is implemented by some physical medium. 
One of the key challenges in providing network connectivity, then, is to define an address for each node that is reachable on the network (including support for broadcast and multicast connectivity), and to be able to use this address to route messages toward the appropriate destination node(s).

1.2.2 Cost-Effective Resource Sharing

As stated above, this book focuses on packet-switched networks. This section explains the key requirement of computer networks—efficiency—that leads us to packet switching as the strategy of choice.

Given a collection of nodes indirectly connected by a nesting of networks, it is possible for any pair of hosts to send messages to each other across a sequence of links and nodes. Of course, we want to do more than support just one pair of communicating hosts—we want to provide all pairs of hosts with the ability to exchange messages. The question, then, is how do all the hosts that want to communicate share the network, especially if they want to use it at the same time? And, as if that problem isn't hard enough, how do several hosts share the same link when they all want to use it at the same time?

To understand how hosts share a network, we need to introduce a fundamental concept, multiplexing, which means that a system resource is shared among multiple users. At an intuitive level, multiplexing can be explained by analogy to a timesharing computer system, where a single physical CPU is shared (multiplexed) among multiple jobs, each of which believes it has its own private processor. Similarly, data being sent by multiple users can be multiplexed over the physical links that make up a network.

To see how this might work, consider a simple network in which the three hosts on the left side of the network (senders S1–S3) are sending data to the three hosts on the right (receivers R1–R3) by sharing a switched network that contains only one physical link. (For simplicity, assume that host S1 is sending data to host R1, and so on.) In this situation, three flows of data—corresponding to the three pairs of hosts—are multiplexed onto a single physical link by switch 1 and then demultiplexed back into separate flows by switch 2. Note that we are being intentionally vague about exactly what a flow of data corresponds to. For the purposes of this discussion, assume that each host on the left has a large supply of data that it wants to send to its counterpart on the right.

There are several different methods for multiplexing multiple flows onto one physical link. One common method is synchronous time-division multiplexing (STDM). The idea of STDM is to divide time into equal-sized quanta and, in a round-robin fashion, give each flow a chance to send its data over the physical link. In other words, during time quantum 1, data from S1 to R1 is transmitted; during quantum 2, data from S2 to R2 is transmitted; and during quantum 3, S3 sends data to R3. At this point, the first flow (S1 to R1) gets to go again, and the process repeats. Another method is frequency-division multiplexing (FDM). The idea of FDM is to transmit each flow over the physical link at a different frequency, much the same way that the signals for different TV stations are transmitted at different frequencies on a physical cable TV link. Although simple to understand, both STDM and FDM are limited in two ways.
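A few lines of code make the rigidity of STDM easy to see. The flows and their queues below are hypothetical, and the sketch models only the scheduling order, not any real transmission; the point is simply that every flow is offered its quantum in strict rotation whether or not it has anything to send.

    # Synchronous time-division multiplexing (STDM), sketched as a fixed
    # round-robin over three hypothetical flows. An empty queue still
    # consumes its quantum, leaving the link idle for that slot.
    flows = {
        "S1->R1": ["p1", "p2"],
        "S2->R2": [],            # nothing to send, but its turn is not skipped
        "S3->R3": ["q1"],
    }

    def stdm_schedule(flows, rounds):
        """Yield what the shared link carries during each successive quantum."""
        order = list(flows)
        for quantum in range(rounds * len(order)):
            flow = order[quantum % len(order)]
            queue = flows[flow]
            yield flow, (queue.pop(0) if queue else None)   # None = idle quantum

    for flow, data in stdm_schedule(flows, rounds=2):
        print(flow, data if data is not None else "(link idle)")

In this run the link sits idle for three of the six quanta, and during the first of those idle quanta S1 still had a packet waiting; that waste is the first limitation described next.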
First, if one of the flows (host pairs) does not have any data to send, its share of the physical link—that is, its time quantum or its frequency—remains idle, even if one of the other flows has data to transmit. For example, S3 had to wait its turn behind S1 and S2 in the previous paragraph, even if S1 and S2 had nothing to send. For computer communication, the amount of time that a link is idle can be very large—for example, consider the amount of time you spend reading a web page (leaving the link idle) compared to the time you spend fetching the page. Second, both STDM and FDM are limited to situations in which the maximum number of flows is fixed and known ahead of time. It is not practical to resize the quantum or add additional quanta in the case of STDM, or to add new frequencies in the case of FDM.

The form of multiplexing that we make most use of in this book is called statistical multiplexing. Although the name is not all that helpful for understanding the concept, statistical multiplexing is really quite simple, with two key ideas. First, it is like STDM in that the physical link is shared over time—first data from one flow is transmitted over the physical link, then data from another flow is transmitted, and so on. Unlike STDM, however, data is transmitted from each flow on demand rather than during a predetermined time slot. Thus, if only one flow has data to send, it gets to transmit that data without waiting for its quantum to come around and thus without having to watch the quanta assigned to the other flows go by unused. It is this avoidance of idle time that gives packet switching its efficiency.

As defined so far, however, statistical multiplexing has no mechanism to ensure that all the flows eventually get their turn to transmit over the physical link. That is, once a flow begins sending data, we need some way to limit the transmission so that the other flows can have a turn. To account for this need, statistical multiplexing defines an upper bound on the size of the block of data that each flow is permitted to transmit at a given time. This limited-size block of data is typically referred to as a packet, to distinguish it from the arbitrarily large message that an application program might want to transmit. Because a packet-switched network limits the maximum size of packets, a host may not be able to send a complete message in one packet. The source may need to fragment the message into several packets, with the receiver reassembling the packets back into the original message.

In other words, each flow sends a sequence of packets over the physical link, with a decision made on a packet-by-packet basis as to which flow's packet to send next. Notice that if only one flow has data to send, then it can send a sequence of packets back-to-back. However, should more than one of the flows have data to send, then their packets are interleaved on the link; picture a switch multiplexing packets from multiple sources onto a single shared link. The decision as to which packet to send next on a shared link can be made in a number of different ways. For example, in a network consisting of switches interconnected by links, such as the one just described, the decision would be made by the switch that transmits packets onto the shared link. (As we will see later, not all packet-switched networks actually involve switches, and they may use other mechanisms to determine whose packet goes onto the link next.)
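The two ideas behind statistical multiplexing, a bound on how much any flow may send at once and transmission on demand, can be sketched as follows. The 1,000-byte packet limit, the flow names, and the interleaving policy are arbitrary choices made for this illustration, not values drawn from any particular network.

    # Statistical multiplexing, sketched: messages are fragmented into packets
    # no larger than MAX_PACKET bytes, and the link is given to whichever flow
    # currently has a packet ready rather than to a fixed time slot.
    from collections import deque

    MAX_PACKET = 1000   # arbitrary upper bound chosen for this example

    def fragment(message: bytes) -> list:
        """Split an arbitrarily large message into bounded-size packets."""
        return [message[i:i + MAX_PACKET] for i in range(0, len(message), MAX_PACKET)]

    def reassemble(packets: list) -> bytes:
        """The receiver's job: concatenate the fragments back into the message."""
        return b"".join(packets)

    # Two flows with different amounts of data, plus one silent flow.
    queues = {
        "S1->R1": deque(fragment(b"A" * 2500)),   # becomes 3 packets
        "S2->R2": deque(fragment(b"B" * 1000)),   # becomes 1 packet
        "S3->R3": deque(),                        # consumes no link time at all
    }

    def statistical_mux(queues):
        """Interleave packets, packet by packet, from whichever flows have data."""
        while any(queues.values()):
            for flow, q in queues.items():
                if q:                      # silent flows are simply skipped
                    yield flow, q.popleft()

    for flow, packet in statistical_mux(queues):
        print(flow, len(packet), "bytes")

    assert reassemble(fragment(b"A" * 2500)) == b"A" * 2500

Note that the interleaving inside statistical_mux is only one of many possible policies; choosing which flow's packet goes next is exactly the scheduling decision discussed below.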
Each switch in a packet-switched network makes this decision independently, on a packet-by-packet basis. One of the issues that faces a network designer is how to make this decision in a fair manner. For example, a switch could be designed to service packets on a first-in, first-out (FIFO) basis. Another approach would be to transmit the packets from each of the different flows that are currently sending data through the switch in a round-robin manner. This might be done to ensure that certain flows receive a particular share of the link's bandwidth, or that they never have their packets delayed in the switch for more than a certain length of time. A network that attempts to allocate bandwidth to particular flows is sometimes said to support quality of service (QoS), a topic that we return to in Chapter 6.

Also notice that since the switch in our example has to multiplex three incoming packet streams onto one outgoing link, it is possible that the switch will receive packets faster than the shared link can accommodate. In this case, the switch is forced to buffer these packets in its memory. Should a switch receive packets faster than it can send them for an extended period of time, the switch will eventually run out of buffer space, and some packets will have to be dropped. When a switch is operating in this state, it is said to be congested.

The bottom line is that statistical multiplexing defines a cost-effective way for multiple users (e.g., host-to-host flows of data) to share network resources (links and nodes) in a fine-grained manner. It defines the packet as the granularity with which the links of the network are allocated to different flows, with each switch able to schedule the use of the physical links it is connected to on a per-packet basis. Fairly allocating link capacity to different flows and dealing with congestion when it occurs are the key challenges of statistical multiplexing.
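The buffering and drop behaviour just described can be sketched as a bounded FIFO queue. The eight-packet capacity is an arbitrary number chosen for the example; a real switch measures its buffers in bytes and typically applies more sophisticated queue management than simply discarding whatever arrives while the buffer is full.

    # A FIFO output port with a finite buffer: packets that arrive while the
    # buffer is full are dropped, which is how congestion manifests at a switch.
    from collections import deque

    class OutputPort:
        def __init__(self, capacity: int = 8):   # 8 packets: arbitrary for the example
            self.buffer = deque()
            self.capacity = capacity
            self.dropped = 0

        def enqueue(self, packet) -> bool:
            """Accept a packet if there is buffer space; otherwise drop it."""
            if len(self.buffer) < self.capacity:
                self.buffer.append(packet)
                return True
            self.dropped += 1                     # the port is congested
            return False

        def transmit(self):
            """Send the packet at the head of the queue (first-in, first-out)."""
            return self.buffer.popleft() if self.buffer else None

    # Packets arrive from several inputs faster than the one output link drains them.
    port = OutputPort(capacity=8)
    for i in range(12):
        port.enqueue(f"packet-{i}")
    print("buffered:", len(port.buffer), "dropped:", port.dropped)   # buffered: 8 dropped: 4

Replacing the single queue with one queue per flow, and having transmit() choose among them, is the essence of the round-robin and QoS-oriented disciplines mentioned above.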
1.2.3 Support for Common Services

While the previous section outlined the challenges involved in providing cost-effective connectivity among a group of hosts, it is overly simplistic to view a computer network as simply delivering packets among a collection of computers. It is more accurate to think of a network as providing the means for a set of application processes that are distributed over those computers to communicate. In other words, the next requirement of a computer network is that the application programs running on the hosts connected to the network must be able to communicate in a meaningful way.

When two application programs need to communicate with each other, a lot of complicated things need to happen beyond simply sending a message from one host to another. One option would be for application designers to build all of that complicated functionality into each application program. However, since many applications need common services, it is much more logical to implement those common services once and then let the application designer build the application using those services. The challenge for a network designer is to identify the right set of common services. The goal is to hide the complexity of the network from the application without overly constraining the application designer.

Intuitively, we view the network as providing logical channels over which application-level processes can communicate with each other; each channel provides the set of services required by that application. In other words, just as we use a cloud to abstractly represent connectivity among a set of computers, we now think of a channel as connecting one process to another. Picture a pair of application-level processes communicating over a logical channel that is, in turn, implemented on top of a cloud that connects a set of hosts. We can think of the channel as being like a pipe connecting two applications, so that a sending application can put data in one end and expect that data to be delivered by the network to the application at the other end of the pipe.

The challenge is to recognize what functionality the channels should provide to application programs. For example, does the application require a guarantee that messages sent over the channel are delivered, or is it acceptable if some messages fail to arrive? Is it necessary that messages arrive at the recipient process in the same order in which they are sent, or does the recipient not care about the order in which messages arrive? Does the network need to ensure that no third parties are able to eavesdrop on the channel, or is privacy not a concern? In general, a network provides a variety of different types of channels, with each application selecting the type that best meets its needs. The rest of this section illustrates the thinking involved in defining useful channels.

Identifying Common Communication Patterns

Designing abstract channels involves first understanding the communication needs of a representative collection of applications, then extracting their common communication requirements, and finally incorporating the functionality that meets these requirements in the network. One of the earliest applications supported on any network