Thursday, October 31, 2019

Accounting ds Coursework Example | Topics and Well Written Essays - 500 words

Accounting ds - Coursework Example A number of businesses use computerized systems to handle each step of their accounting process. Companies usually improve their AISs to remain competitive in the industry and to comply with the Sarbanes-Oxley Act of 2002 (Simkin, 2014). There are three types of accounting information systems: manual systems, legacy systems, and modern integrated IT systems. An organization's choice of system depends on its size, its business needs, the type of business, and how sophisticated the business is (Simkin, 2014). A well and carefully designed AIS usually makes a business run smoothly on a daily basis, whereas a poorly designed one hampers its operations. As in the cases of Lehman Brothers and WorldCom, the data in an AIS can be used to uncover the story of what actually went wrong. A successful business normally has an efficient, accurate, and well-maintained accounting information system. Q2 A company purchased a cash register on January 1 for $5,400. The register has a useful life of 10 years and a salvage value of $400. What would be the depreciation expense for the second year of its useful life using the double-declining-balance method? (A worked calculation is sketched below.) Firstly, we record the ending balance from the bank statement. Secondly, we prepare a detailed list of all deposits in transit and add the two items together. Thirdly, we prepare a detailed list of all outstanding checks, that is, checks written or sent but not yet cleared. We then correct any errors before taking the difference between this total and the total outstanding checks to get the adjusted bank balance. Fourthly, we adjust the general ledger balance by adding any interest received, subtracting NSF checks, correcting any errors, and subtracting any service charges to get the adjusted general ledger balance. Finally, we compare the adjusted general ledger balance to the adjusted bank balance; the two should agree. Q4 A company
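As a check on Q2, the double-declining-balance figure can be computed directly: the annual rate is 2/10 = 20% of the opening book value, giving $1,080 in year 1 and $864 in year 2. The short Python sketch below is illustrative only and is not part of the original coursework; the function name ddb_schedule is my own.

```python
def ddb_schedule(cost, salvage, life_years):
    """Return a list of annual double-declining-balance depreciation expenses.

    The rate is 2 / life, applied each year to the opening book value;
    an asset is never depreciated below its salvage value.
    """
    rate = 2.0 / life_years
    book_value = cost
    expenses = []
    for _ in range(life_years):
        expense = min(book_value * rate, book_value - salvage)
        book_value -= expense
        expenses.append(expense)
    return expenses

# Q2: $5,400 cost, $400 salvage, 10-year useful life.
schedule = ddb_schedule(5400, 400, 10)
print(schedule[0])  # 1080.0 -> year 1: 5,400 * 20%
print(schedule[1])  # 864.0  -> year 2: (5,400 - 1,080) * 20%
```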

Tuesday, October 29, 2019

Cia Research Paper Essay Example for Free

Cia Research Paper Essay An account of the work of the CIA, discussing in some detail the nature of the relationship between the intelligence-gatherer and the policy-maker. Since the 1970s the CIA has provided intelligence to Congress as well as to the executive, so that it now finds itself in a remarkable position, involuntarily poised nearly equidistant between them. It has not, however, abused this freedom of action, probably unique among world intelligence agencies, so as to cook intelligence. Robert M. Gates, a career intelligence officer, is Deputy Director of Central Intelligence; he served on the National Security Council staff from the spring of 1974 until December 1979. Over the years, public views of the Central Intelligence Agency and its role in American foreign policy have been shaped primarily by movies, television, novels, newspapers, books by journalists, headlines growing out of congressional inquiries, exposés by former intelligence officers, and essays by experts who either have never served in American intelligence, or have served and still not understood its role. The CIA is said to be an invisible government, yet it is the most visible, most externally scrutinized and most publicized intelligence service in the world. While the CIA sometimes is able to publicly refute allegations and criticism, usually it must remain silent. The result is a contradictory melange of images of the CIA and very little understanding of its real role in American government. Because of a general lack of understanding of the CIA's role, a significant controversy such as the Iran-contra affair periodically brings to the surface broad questions of the proper relationship between the intelligence service and policymakers. It raises questions of whether the CIA slants or cooks its intelligence analysis to support covert actions or policy, and of the degree to which policymakers (or their staffs) selectively use and abuse intelligence to persuade superiors, Congress or the public. Beyond this, recent developments, such as the massive daily flow of intelligence information to Congress, have complicated the CIA's relationships with the rest of the executive branch in ways not at all understood by most observers, including those most directly involved. These questions and issues merit scrutiny. II The CIA's role in the foreign policy process is threefold. First, the CIA is responsible for the collection and analysis of intelligence and its distribution to policymakers, principally to the president, the National Security Council (NSC) and the Departments of State and Defense, although in recent years many other departments and agencies have become major users of intelligence as well. This is a well-known area, and I will address it only summarily.
About CIA
The Central Intelligence Agency was created in 1947 with the signing of the National Security Act by President Harry S. Truman. The act also created a Director of Central Intelligence (DCI) to serve as head of the United States intelligence community; act as the principal adviser to the President for intelligence matters related to the national security; and serve as head of the Central Intelligence Agency. The Intelligence Reform and Terrorism Prevention Act of 2004 amended the National Security Act to provide for a Director of National Intelligence who would assume some of the roles formerly fulfilled by the DCI, with a separate Director of the Central Intelligence Agency.
The Director of the Central Intelligence Agency serves as the head of the Central Intelligence Agency and reports to the Director of National Intelligence. The CIA Director's responsibilities include:
• Collecting intelligence through human sources and by other appropriate means, except that he shall have no police, subpoena, or law enforcement powers or internal security functions;
• Correlating and evaluating intelligence related to the national security and providing appropriate dissemination of such intelligence;
• Providing overall direction for and coordination of the collection of national intelligence outside the United States through human sources by elements of the Intelligence Community authorized to undertake such collection and, in coordination with other departments, agencies, or elements of the United States Government which are authorized to undertake such collection, ensuring that the most effective use is made of resources and that appropriate account is taken of the risks to the United States and those involved in such collection; and
• Performing such other functions and duties related to intelligence affecting the national security as the President or the Director of National Intelligence may direct.
The function of the Central Intelligence Agency is to assist the Director of the Central Intelligence Agency in carrying out the responsibilities outlined above. To accomplish its mission, the CIA engages in research, development, and deployment of high-leverage technology for intelligence purposes. As a separate agency, the CIA serves as an independent source of analysis on topics of concern and also works closely with the other organizations in the Intelligence Community to ensure that the intelligence consumer, whether Washington policymaker or battlefield commander, receives the best intelligence possible. As changing global realities have reordered the national security agenda, the CIA has met these challenges by:
• Creating special, multidisciplinary centers to address such high-priority issues as nonproliferation, counterterrorism, counterintelligence, international organized crime and narcotics trafficking, environment, and arms control intelligence.
• Forging stronger partnerships between the several intelligence collection disciplines and all-source analysis.
• Taking an active part in Intelligence Community analytical efforts and producing all-source analysis on the full range of topics that affect national security.
• Contributing to the effectiveness of the overall Intelligence Community by managing services of common concern in imagery analysis and open-source collection and participating in partnerships with other intelligence agencies in the areas of research and development and technical collection.
By emphasizing adaptability in its approach to intelligence collection, the CIA can tailor its support to key intelligence consumers and help them meet their needs as they face the issues of the post-Cold War world.
History of the CIA
The United States has carried out intelligence activities since the days of George Washington, but only since World War II have they been coordinated on a government-wide basis. President Franklin D. Roosevelt appointed New York lawyer and war hero William J. Donovan to become first the Coordinator of Information and then, after the US entered World War II, head of the Office of Strategic Services (OSS) in 1942.
The OSS – the forerunner to the CIA – had a mandate to collect and analyze strategic information. After World War II, however, the OSS was abolished along with many other war agencies and its functions were transferred to the State and War Departments. It did not take long before President Truman recognized the need for a postwar, centralized intelligence organization. To create a fully functional intelligence office, Truman signed the National Security Act of 1947 establishing the CIA. The National Security Act charged the CIA with coordinating the nation's intelligence activities and correlating, evaluating and disseminating intelligence affecting national security. On December 17, 2004, President George W. Bush signed the Intelligence Reform and Terrorism Prevention Act, which restructured the Intelligence Community by abolishing the positions of Director of Central Intelligence (DCI) and Deputy Director of Central Intelligence (DDCI) and creating the position of Director of the Central Intelligence Agency (D/CIA).

Sunday, October 27, 2019

Analysis of Indian Food in the UK Food Industry

Analysis of Indian Food in the UK Food Industry EXECUTIVE SUMMARY Eating out in the UK has become a haute gastronomical adventure with lip-smacking results. Curry houses are a British institution, as much a part of the national fabric as the local pub. Surprisingly, there are more Indian restaurants in London than in Delhi, the capital of India (Hemisphere Magazine, 2005). The study was aimed at discovering the various problems that beset the industry in the UK. The dissertation weaves through various problem scenarios and the search for their solutions. The three main problems discovered through face-to-face interviews were: the problem of retaining customers through service quality; the problem of retaining customers due to a limited workforce; and the problem of promotion policy (advertising and sales promotion). For these problems two theories of hospitality marketing were chosen. These two theories, the theory of service quality and promotion policy in the restaurant industry, were taken in conjunction with the fieldwork analysis of the restaurants in London. The problems were then discussed in parallel with the theories. The discussion gave rise to some hypothetical situations which were tested in further research. The methodology used in the study was selected after careful consideration of the research question and the limitations. Using the appropriate research tools, an in-depth study was carried out and it was found that the three problems were not isolated from one another; rather, they were closely connected. The concept of service quality was found to be largely missing from the philosophies of the restaurateurs. In a nutshell, nearly all the problems seem to stem from deficiencies in service quality. At this point, however, it should be noted that no single problem is the main culprit, nor is any particular solution a panacea for all ills. It is with this in mind that this study should be viewed.
CHAPTER 1 INTRODUCTION
For the purposes of this research, the term Indian food covers food from the Indian, Bengali and Pakistani traditions. The market includes sales through restaurants, pubs and takeaways; ready meals (both frozen and chilled); and sauces, pastes, accompaniments and curry powder. The introductory part of this research covers the present scenario. The largest ethnic minority group in Britain are Indians (approx 10,000,000 people) (Crown, 2004), with over 40% of them (approx 800,000) living in the capital, London, where they make up 6% of the city's total population (LFC, 2004). These facts justify the existence of over 1000 Indian restaurants in the UK and 4000 in London and the South East (Grove International, 2004). The survival of these curry houses is a blessing for the true Indian food connoisseur. But recently the Indian food industry in the UK has undergone some major structural changes. The opening of giant restaurants in the capital such as the Cinnamon Club (Westminster), Tamarind (Queen Street) and Zaika (Kensington High Street) over the past couple of years has attracted the interest of many professional reviewers, including the Time Out Guide, the Evening Standard, the Daily Telegraph and the Financial Times. The various reviews they have given to the acclaimed Indian restaurants in London speak of their varied interests (Iqbal Wahab, 2004). 'Indian food is a £3.2 billion industry in Britain, accounting for two-thirds of all eating out' (Geraldine Bedell, May 2004).
This modern, evolved Indian cuisine in London has sparkled since Tamarind and Zaika, two Indian restaurants in London, were awarded Michelin stars. The famous dish 'Chicken Tikka Masala' is now an authentic English national dish (Robin Cook, 2004). All these facts make this industry big, and at the same time they raise many prospects and problems.
Importance of Indian Restaurants
In the last half-century, curry has become more traditionally English than the English breakfast. Some fitting facts in this milieu are: According to Mintel reports, Indian restaurants are a £1,733 million industry in Britain, which is more than two-thirds of the total food industry in Britain (Appendix 1). In an exclusive consumer survey commissioned by Mintel, 42% of the respondents stated that Indian/Bengali/Pakistani food was among the types of food that they most enjoyed, up from 38% in 1999. Indian food is most popular with 25-54-year-olds and, in contrast to Chinese food, shows a strong upmarket bias (Mintel, 05/2004). It is one of the biggest industries in Britain, employing over 60,000 people (menu2menu, 2005). There are over 8500 Indian restaurants in the UK and 3500 in London alone (Grove International, 2004). Indian restaurants are the major players in Britain's ethnic cuisines, overshadowing Chinese outlets, which number around 7400 (Mintel, 2005). Indian restaurants serve 2.5 million Brits every week. Besides, David Beckham celebrated after scoring the goal that qualified England for the World Cup at Manchester's Shimla Pinks, with his favorite chicken korma, and Madonna, more and more the Anglophile, has apparently taken to ordering curry takeout by taxi from the Noor Jahan restaurant near her London home in Westbourne Grove (Guardian, 2004, Issue 2). Every high street has its Star of India or Taj Mahal. Surprisingly, twice as much Indian food is sold in Britain as fish and chips (Economist, 1999), and McDonald's has had to adapt its British menus to include "curry and spice". These ubiquitous curry houses are coming up in the world. They are no longer consigned to the ranks of post-pub grub, and there has been gradual growth in the Indian restaurant market since 1999 (Appendix 1). Also, the fact that Indian restaurants have a strong influence on the retail sector is undeniable. They have provided most of the recipes and are the sole benchmark of authenticity for products such as Indian ready meals, sauces, pastes and accompaniments.
UK Food Industry
The food industry in the UK has undergone dramatic change over the last few decades, a phenomenon which has been named the consumption revolution [Ritson, C. and R. Hutchins (1991)]. Fragmentation of demand has been coupled with concentration in supply, so that the majority of food expenditure is now channeled through five major supermarket groups [Waterson, M. J. (1995)]. This has posed threats to the small agrifood producer, who is typically unable to meet the volume and consistency of supply requirements of the large retailers. However, opportunities have also arisen: many small producers have successfully targeted niche markets, often through direct marketing or distribution through independent outlets. Their offerings commonly carry the typical characteristics of niche products, in that they possess added value, are differentiated from competitive offerings and charge a premium price. With such characteristics it is possible for small producers to succeed within a highly competitive environment [Phillips, M. (1994)].
However, recent opportunities have also arisen in the food multiples sector, as supermarket groups show an increasing interest in stocking specialty and value-added food products. This interest stems in part from a desire to improve product range and enhance consumer choice. However, it could also be viewed as a response to public criticism of the negative social and environmental effects of concentration in food distribution: in particular, the development of centralized distribution systems which militate against the use of smaller, local suppliers by food multiple chains. Some supermarket groups in the UK are now attempting to improve links with such suppliers by, for example, devolving decision-making power to store managers, improving purchasing technology and creating opportunities for buyers and producers to meet and discuss one another's needs [Carter and Shaw (1993)]. There was a Greek community in Greek Street, London as long ago as 1677, so Greek cuisine is not exactly new to Britain. The influx of Cypriots started in the 1920s and 1930s and they began opening restaurants after the Second World War. Greek Cypriots tended to settle in Hackney, Palmers Green, Islington and Haringey, and Turkish Cypriots in Stoke Newington. Greek Cypriots appeared in Soho in the 1930s, then Camden Town after the war, and then Fulham by the mid 1960s. The main influx of Turkish Cypriots was in the 1960s, and by 1971 the Greek Cypriot community had turned its attention to Wood Green, Palmers Green and Turnpike Lane. Only around one third of the 550 or so Greek restaurants in Britain are in London, most of these being in North and West London. Some 40% of the 150 or so Turkish restaurants are in the capital, with a heavy concentration in North London. Turkish cuisine is also well represented in Scotland. One of the earliest Greek restaurants was not in London at all but Georges in St Michael Street, Southampton in 1940, slightly pre-dated by The White Tower in London's West End in 1939. Kalamaras in London W2 opened in 1966 and remains popular today. The most successful of the Turkish restaurants at present is the Efes Group, which started in London but is now in several locations throughout the country.
Aims and objectives of the research
The mechanisms of globalization have made the world a 'smaller' place and, while this has helped to introduce various cuisines to new regions, it has subsequently resulted in the development of 'fusion' foods, which has implications for the Indian restaurant market. The image of men behaving badly, gulping down super-hot curries with several pints of lager, is long gone. Today, a trip out for a curry is a posh affair, with some of the country's top chefs cooking up sophisticated dishes of complexity and variety (LFC, 2004). With this growing fashion of globalization, there is a huge threat to Indian restaurants, which are traditionally managed by family members. According to the Economist: 'But once trends become clichés they have a way of nose-diving. Open the pages of the "Good Curry Guide", and you will discover that all is not well. According to the guide, last year there were at least 300 closures of Indian restaurants in Britain, compared with just over a hundred openings. Indian restaurants, while still the biggest players in the industry, are losing market share to eastern cuisines such as Thai and Japanese food.'
(Economist, 2005) The main aim of the research is to assess the major issues that determine the performance and efficiency of Indian foods/restaurants in the UK. The objectives are: to assess service quality and supply chain management; and to assess consumer perception of Indian foods and the relevant marketing mix needed to exploit the opportunities.
Rationale
Indian cuisine, which westerners commonly call 'curry', has been highly popularized by the Indian restaurants in the UK. These restaurants, which are generally owned by Indians, reflect the specialties of every region of India. The spread of curry beyond its home in the sub-continent is inextricably linked to the presence of the British Raj in India. Army personnel and civil servants acquired a taste for spicy food whilst in India and brought their newly found dishes home. Since then, spicy Indian dishes have been highly liked by people in the UK. London is a hub of Indian foods and restaurants. With the growing area of specialization and people trying new and creative things in their restaurants in London, the problems have started increasing: problems not only of the external environment, such as increasing competition, strict food and health policies or inflation, but also internal problems relating to marketing strategies, sourcing of raw materials or inefficient management. This study will explore SCM issues with reference to market fragility and market access; purchasing power; purchasing decisions and relationships; understanding of customer needs; barriers and frustrations; and strengths and successes. This report is premised on the belief that supply chains are important for maximizing efficiency. But supply chains are far more important than that: the management of supply chains increasingly influences the nature, scale and participation in enterprise development and sustainability. In other words, supply chains are re-structuring the lines of business development in knowledge-based economies. This study will further highlight consumer perception and the marketing mix.
CHAPTER 2 Literature Review
2.0 Chapter Overview As Indian restaurants are a part of the hospitality industry, this chapter contains literature from the subject of marketing in the hospitality industry. Two main theories are used to analyze the three main problems stated in the previous chapters: service quality and supply chain, and promotion policy (advertising and sales promotion). The two theories are then analyzed in light of the problems. A relationship is developed between the industry and the theories by researching the trends. These theories are then used for drawing conclusions and recommendations in later chapters. For the reader, this chapter will be the basis for understanding the ongoing trends in the Indian restaurant industry. 2.1 Introduction to Hospitality Marketing in Restaurants Nowadays marketing isn't simply another function of business; rather, it is a philosophy, a way of thinking and a way of organizing your business and your mind. The customer is the king (Iverson, 1989). According to Kotler (2000, Ch. 1), satisfying the customer is a priority in most businesses. But not all customers can be satisfied. There has to be a proper selection of customers which enables the restaurant to meet its objectives. In the restaurant industry, many people confuse marketing with advertising and sales promotion.
It is not uncommon to hear restaurant managers say that they do not believe in marketing, when they actually mean that they are disappointed with the impact of their advertising. In reality, selling and advertising are only two marketing functions, and often not the most important. As Kotler says in his book Marketing for Hospitality and Tourism (1996, Chapter 1), advertising and sales are components of the promotional element of the marketing mix. Other marketing mix elements include product, price and distribution. Marketing also includes research, information systems and planning. 'The aim of marketing is to make selling superfluous. The aim is to know and understand customers so well that the product or service fits them and sells itself.' (Drucker, 1973, pp. 64-65) The only way selling and promoting will be effective is if we first define customer targets and needs and then prepare an easily accessible and available value package. The purpose of a business is to create and maintain profitable customers. Customers are attracted and retained when their needs are met. Not only do they return to the same restaurants but they also talk favorably to others about their satisfaction. Customer satisfaction leading to profit is the central goal of hospitality marketing (Kotler, Bowen and Makens, 1996, Chapter 1). Fewer repeat customers and bad word of mouth are the legacy of the manager who puts profits above customer satisfaction. A successful manager will consider profits only as the result of running a business well, rather than as its sole purpose. So in this service-based industry (Indian restaurants), the entrance of corporate giants with mesmerizing marketing skills has increased the importance of marketing within the industry. Now let us see how far these hospitality marketing stunts can save the appalling scene in the industry. 2.2 Service Quality Daryl Wyckoff has defined service quality as follows: 'Quality is the degree of excellence intended, and the control of variability in achieving that excellence, in meeting customers' requirements.' (Wyckoff, 1984, p. 81) This definition of quality is, however, not accurate, as experts say 'Quality is whatever the customer says it is, and the quality of a particular product or service is whatever the customer perceives it to be' (Powers, 2000, p. 179). So the main emphasis is on the customer and perceived quality. A more professional way of looking at quality is by conceptualizing it broadly along two critical dimensions, i.e. technical quality and interpersonal quality. Technical quality is generally the minimum expected from a hospitality operation (Did things go right? Was the food hot?) (Powers, 1997). This dimension of quality is relatively objective in nature and is thus measurable. Interpersonal quality is a comparatively difficult dimension (Was the waiter friendly? Did the service staff go out of their way to be helpful? Did the customer feel welcome or out of place?). As Gronroos (1980) points out, 'Even when an excellent solution is achieved, the firm may be unsuccessful, if the excellence in technical quality is counteracted by badly managed buyer-seller interactions.' And vice versa: all the charm in the world will not make up for bad food or a lost reservation. So each dimension is critical. 2.3 Concept of Building Customer Satisfaction through Quality The fundamental strategic decision to be taken by Indian food providers at the outset is whether the service system should be standardized/routine or customized.
In the former, more importance is given to technical quality, the operation goes by the book, and little importance is paid to employees' discretion, while the latter gives importance to both qualities and more discretion is given to the employee. A customized system of service is recommended to the restaurants, as consumers go to the restaurant that they believe offers the highest customer-delivered value or customer satisfaction, i.e. the difference between total customer value and total customer cost:
* The customer derives value from the core products, the service delivery system and the restaurant's image.
* The costs to the customer include money, time, energy and psychic costs.
Quality is made up of two components, viz. technical and interpersonal. Managers must keep in mind that in the end the customer's perceptions of the delivered quality are what matter. Customers assess delivered services against their expectations. If the perceived service meets expectations, they view the service as good quality. If the perceived service falls short of expectations, they view the service as poor. Expectations are formed by past experiences with the restaurant, word of mouth, and the restaurant's external communication and publicity. A widely used model of service quality is known as the five-gap model. This model defines service quality as meeting customer expectations. The principle behind the formation of this model was to discover the expectations of the customer, which is possibly the most critical step in delivering service quality. The model is closely linked to marketing since it is customer based. The model has five gaps:
Gap 1: Consumer expectations versus management perception
Gap 2: Management perception versus service quality specifications
Gap 3: Service quality specifications versus service delivery
Gap 4: Service delivery versus external communications
Gap 5: Expected service versus perceived service
The detailed study of this five-gap model is outside the scope of this research. But the question is whether this aspect can solve the issue; can it benefit the industry? The answer is discussed in Chapter 4. 2.4 Supply Chain The most important aspect of increasing service quality performance is supply chain integration. Effective supply chain management can:
* cut down the total cost significantly;
* increase productivity and performance;
* improve time and labour economy;
* differentiate service quality;
* provide optimum speed and comfort in quality service delivery.
In other words, it provides better economies of scale and competitive advantage. The Value Chain (Source: Johnson and Scholes, 2004). The value chain will be discussed in the context of the supply chain management issues. These elements of a brand are illustrated in Figure 1. It has long been recognized that products have meanings for consumers beyond providing mere functional utility. Symbolic consumption was recognized by Veblen (1899) in his Theory of the Leisure Class and termed conspicuous consumption. Noth (1988) quotes Karl Marx and his metaphor of 'the language of commodities' in which 'the linen conveys its thoughts' (p. 175), while Barthes (1964) discussed a semiotic threshold, with the semiotic existing above the 'utilitarian or functional aspects' of objects. Given the symbolic usage of brands, it is no surprise that semiotics, as the study of signs in society, is increasingly being used in understanding consumer behavior.
Initially used to facilitate understanding of the consumption behavior surrounding cultural products such as film and other works of art (Holbrook and Grayson, 1986) and fashion (Barthes, 1983), its widespread usage to interpret symbolic consumption in all aspects of consumer behavior is anticipated (Mick, 1986). The theory behind this research technique is that brand equity is built on consumers' perception of the emotional benefits, or brand affinity, combined with physical or concrete benefits, that is, the performance delivered by the product or service offered. The technique attempts to evaluate each of these two aspects in detail, providing a clear understanding of its importance for the category under investigation as well as for the brands in that category. During the development of this technique we identified and coded the emotional factors that repeatedly appeared in all markets in the study, allowing us to conclude that they are valid for virtually any product or service category when the subject is brand equity evaluation. These aspects can be classified into three groups: brand authority, the level of identification that the user or consumer has with its positioning, and the level of social approval it offers to its user or consumer. Authority might be defined by the brand's heritage or long-standing reputation and leadership, by the trust or confidence it inspires in consumers, and by aspects associated with innovation or technological development as perceived by consumers. Thus all the branding theories lead to consumer perception. 3.2 Consumer Perceptions of Foods Investigation and analysis of food purchase and consumption is well documented within the discipline of consumer behavior. Studies in this area tend to stress the complexity of factors which drive food-related tastes and preferences, and some authors have proposed models which attempt to categorize and integrate these factors and so offer insights into the formation of food preferences and choices. Shepherd, R. (1989) provides a review of such models, from Yudkin, J. (1956), which lists physical, social and physiological factors, to Booth and Shepherd (1988), which summarizes the processes influencing, and resulting from, food acceptance, and lists factors relating to the food, the individual and the environment. However, none of these models incorporates a consideration of the role of place in food, and consumer perceptions of this attribute. It may be noted that, by their very nature, food products have a land-based geographical origin (Bérard, L. and P. Marchenay, 1995), which would suggest that people readily make strong associations between certain foods and geographical locations. On the other hand, the process of delocalization of the food system in the twentieth century, as described by Montanari (1994), has weakened the traditional territorial and symbolic links between foods and places. The inference is that the concept of Indianity in foods may no longer be important or attractive to the modern food consumer, who is faced with such a wide array of exotic and international products all year round. Thus it may be that in the mind of the consumer, specific names, production methods or presentational forms of particular foods are no longer associated with the geographic areas from which they originate.
An opposing view is taken by Driver (1983), however, who describes a resurgence of interest in traditional Indian dishes in the UK, which perhaps reflects the symbolic importance that particular foods have in our lives and culture. These debates highlight the need for empirical investigation of people's perceptions and understandings of Indianity in food. Linked to this debate over the perceived meaning of Indianity in foods is the concept of authenticity. If Indian foods are linked in some way to origins and tradition, it implies that producers of Indian foods are involved in providing and communicating intangible attributes of heritage, tradition and authenticity in their product offerings. These require careful management, particularly in view of authors such as MacCannell (1989), Hughes (1995) and Urry (1995), who, in relation primarily to tourist experiences, point out the difficulty in defining what is authentic, and in communicating this to an increasingly sophisticated and diverse audience of consumers. In relation to Indian foods, information is needed on consumer perceptions of appropriate attributes of products, which attributes are the most attractive, and why.
CHAPTER 4 METHODOLOGY
A - RESEARCH PHILOSOPHY AND APPROACH In the previous chapters, the author has outlined the research aim and objectives and examined the relevant literature. However, the successful completion of any study is heavily dependent on the choice of an appropriate research method and approach. Moreover, an appropriate research methodology provides guidance for the development and evaluation of the study. With the appropriate methodology the author can justify the achievement of the objectives. Research process: The research process adopted is based on an exploratory approach, but prior to that it is necessary to set out the methodological framework. The recognized exponents in this field are Hussey and Hussey (1997), Zikmund (2000), Saunders et al (1997, 2000) and others, who have presented different methodological frameworks from which researchers can conduct their research. Most of these frameworks follow a similar central theme. The author has adapted the methodological framework illustrated below to fulfill the research aim and objectives. It was chosen as it supports the author's research design and process. Furthermore, the methodology has been designed so that data is collected and interpreted; the findings and analysis, with conclusions and recommendations at the end, follow this.
METHODOLOGY Research aim The main aim of the research is to assess the major issues that determine the performance and efficiency of Indian foods/restaurants in the UK. Research objectives: to assess service quality and supply chain management; and to assess consumer perception of Indian foods and the relevant marketing mix needed to exploit the opportunities. Research Philosophy Easterby-Smith et al (1993) state three reasons why it is useful to establish the research philosophy before collecting data:
* to clarify the research design (the method by which data is collected and analyzed), taking a holistic view of the overall configuration;
* to help recognize which designs will work and which will not;
* to help identify and create a research design that adapts the research approach to the required research aim and objectives.
There are two main research philosophies in the existing literature: positivism and phenomenology.
â€Å"They are different, if not mutually exclusive, views about the way in which knowledge is developed and judged as being acceptable. They have an important part to play in business and management research†. (Saunders et al, 2005, p 83) The positivistic philosophy which â€Å"seeks the facts or causes of social phenomena†(Hussey Hussey,1998) is more objective, analytical and structured and the researcher is independent of the subject. (Remenyi et al., 1998:33). In addition, the quantitative data should be collected and statistical analyzed when test the certain theories.(Saunders et al, 2005, Hussey Hussey,1998) On the other hand phenomenological philosophy which â€Å"understanding human behavior from the participants own frame of reference† (Hussey Hussey, 1998) is more subjective and the researcher is dependent on their mind. Qualitative method can be used such as a case study. It is important that which philosophy is better for my project. Saunders et al. (2005) state that no philosophy is better than others so choosing philosophy depends on the research question. Having considered the aims of this research project, I will choose phenomenological philosophy because this research question is â€Å"How the Supply Chain helps the Indian Food Industry in UK in achieving efficiency and the significance of Consumer perception to the marketing mix†. The research will be qualitative. In order to answer the research question, I would do case study on Chinese and UK textile and clothing firms and collect data by using interviews. Research Approach Inductive or Deductive Research Undoubtedly the research approach is very important for the project. There are two research approaches, which is the deductive approach and the inductive approach. As mentioned in Saunders et al (2000), the major differences between the deductive and inductive approaches to research are as follows: Deduction emphasis Induction emphasis Scientific principles Gaming an understanding of the meaning humans attach to events The need to explain cause and effect relationship between variables A close understanding of the research context The collection of quantitative data The collection of qualitative data The application of controls to ensure clarity of definition and highly structured A more flexible structure to permit changes of research emphasis as the research progress Researchers independence of what is being researched A realization that the researchers is a part of research progress The necessity to select sample of sufficient size in order to generalize conclusion Less concerned with the need to generalize Deductive approach aims to develop a theory and or hypothesis and design a research strategy to test it. Deductive approach is a rigid methodology, which not permits alternative explanation. It emphasizes on scientific principles and moving from theory to data. It is a highly structured approach and need more operationalisation of concepts to ensure definition. Oppositely inductive approach is which the researcher would collect data and develop a theory as a result of data analysis. It is an alternative approach and theory building followed data collection. In addition, it is the better way to study the small sample because of concerning with the context in which the events are taking place. (Saunders et al, 2005, p 85) Easterby-Smith et al. (2004) state that if the researcher have interested in understanding why something happening the inductive approach is more appropriate. 
Having considered the aims of this research project, it seems that the inductive approach is more suitable. Firstly, according to Saunders et al (2005), the inductive approach is closely related to phenomenology. Secondly, although there are many authors who have contributed to theories about international branding but not specifi

Friday, October 25, 2019

Reminiscencia de la infancia: el caso de un escritor de los siglos XX y :: Foreign Language Spanish Essays

Reminiscencia de la infancia: el caso de un escritor de los siglos XX y. The first narrative fiction of Medardo Fraile, one of the masters of the Golden Age of the contemporary Spanish short story, emerged at the age of five. The early age of his writing leads us to investigate the events that accompanied his childhood and that may have awakened in him that need to create. Reading his narrative work alongside his biography, as well as some of his numerous articles, confirms the effect that the absence of his mother, who died months before that first story appeared, had on the child Medardo Fraile. In the article 'Crónica de mí mismo y alrededores' the writer tells us: 'Until I was five, my life was conditioned by my mother's illness; she died at thirty-three of a rheumatic heart condition when I was five' (70). In his novel Autobiografía we find this same event transformed into fiction: 'On reaching the doorway he broke free and bounded up the stairs. The door was ajar. He pushed it open and rushed into the bedroom to kiss his mother. He opened the door and saw the empty room, the balcony thrown wide open and, in a corner, a pile of wool. Someone took him to the dining room, while the kiss he had been so eager to give, saved up for so many days, knotted itself, incredulous, in his body, in the emptiness, in the air.' (236-37) The feelings of absence and solitude produced in the very young Medardo Fraile by his mother's death decisively influenced the development of his craft, first in Spain and later, from 1964 onward, in the United Kingdom, where he now lives. In 'El interés del Psicoanálisis para la Estética' (1913) Freud reminds us that there is a connection between childhood impressions and the destinies of the artist and his works, understood as reactions to such impulses. The death of Medardo Fraile's mother constitutes a crucial, though still early, moment in the development of his writing, in that awakening of his creative mind. In 'Más de cien cuentos en busca de su autor' the writer describes that initial moment of narrative fiction: 'The first story I remember (and if I remember it, there must be a reason) I strung together in Madrid, orally, at the age of five, on a bench in Calle Princesa. My mother had died months earlier and I was living in our house with my father, who was almost always away, and my godmother. That day I left school needing a handkerchief, I don't know why.'

Thursday, October 24, 2019

Employment and Unemployment in the 1930s

The Great Depression is to economics what the Big Bang is to physics. As an event, the Depression is largely synonymous with the birth of modern macroeconomics, and it continues to haunt successive generations of economists. With respect to labor and labor markets, the facts to be explained evidently include wage rigidity, persistently high unemployment rates, and long-term joblessness. Traditionally, aggregate time series have provided the econometric grist for distinguishing explanations of the Great Depression. Recent research on labor markets in the 1930s, however, has shifted attention from aggregate to disaggregate time series and towards microeconomic evidence. This shift in focus is motivated by two factors. First, disaggregated data provide many more degrees of freedom than the decade or so of annual observations associated with the Depression, and thus may prove helpful in distinguishing macroeconomic explanations. Second, disaggregation has revealed aspects of economic behavior hidden in the time series but which may be essential to their proper interpretation and, in any case, are worthy of study in their own right. Although the substantive findings of recent research are too new to judge their permanent significance, I believe that the shift towards disaggregated analysis is an important contribution. The paper begins by reviewing the conventional statistics of the United States labor market during the Great Depression and the paradigms to explain them. It then turns to recent studies of employment and unemployment using disaggregated data of various types. The paper concludes with discussions of research on other aspects of labor markets in the 1930s and of a promising source of microdata for future work. My analysis is confined to research on the United States; those interested in an international perspective on labor markets might begin with Eichengreen and Hatton's chapter in their edited volume, Interwar Unemployment in International Perspective, and the various country studies in that volume. I begin by reviewing two standard series of unemployment rates, Stanley Lebergott's and Michael Darby's, and an index of real hourly earnings in manufacturing compiled by the Bureau of Labor Statistics (BLS). The difference between Lebergott's and Darby's series, which is examined later in the paper, concerns the treatment of persons with so-called "work relief" jobs. For Lebergott, persons on work relief are unemployed, while Darby counts them as employed. Between 1929 and 1933 the unemployment rate increased by over 20 percentage points, according to the Lebergott series, or by 17 percentage points, according to Darby's series. For the remainder of the decade, the unemployment rate stayed in, or hovered around, double digits. On the eve of America's entry into World War Two, between 9.5 and 14.6 percent of the labor force was out of work, depending on how unemployment is measured. In addition to high levels of unemployment, the 1930s witnessed the emergence of widespread and persistent long-term unemployment (unemployment durations longer than one year) as a serious policy problem. According to a Massachusetts state census taken in 1934, fully 63 percent of unemployed persons had been unemployed for a year or more. Similar amounts of long-term unemployment were observed in Philadelphia in 1936 and 1937. Given these patterns of unemployment, the behavior of real wages has proven most puzzling. Between 1929 and 1940 annual changes in real wages and unemployment were positively correlated.
Real wages rose by 16 percent between 1929 and 1932, while the unemployment rate ballooned from 3 to 23 percent. Real wages remained high throughout the rest of the decade, although unemployment never dipped below 9 percent, no matter how it is measured. From this information, the central questions appear to be: Why did unemployment remain persistently high throughout the decade? How can unemployment rates in excess of 10 to 20 percent be reconciled with the behavior of real wages, which were stable or increasing? One way of answering these questions is to devise aggregative models consistent with the time series, and I briefly review these attempts later in the paper. Before doing so, however, it is important to stress that the aggregate statistics are far from perfect. No government agency in the 1930s routinely collected labor force information analogous to that provided by today's Current Population Survey. The unemployment rates just discussed are constructs, the differences between intercensal estimates of labor force participation rates and employment-to-population ratios. Because unemployment is measured as a residual, relatively small changes in the labor force or employment counts can markedly affect the estimated unemployment rate. The dispute between Darby and his critics over the labor force classification of persons on work relief is a manifestation of this problem. Although some progress has been made on measurement issues, there is little doubt that further refinements to the aggregate unemployment series would be beneficial. Stanley Lebergott has critically examined the reliability of the BLS wage series from the 1930s. The BLS series drew upon a fixed group of manufacturing establishments reporting for at least two successive months. Lebergott notes several biases arising from this sampling method. Workers who were laid off, he claims, were less productive and had lower wages than average. Firms that went out of business were smaller, on average, than firms that survived, and tended to have lower average wages. In addition, the BLS oversampled large firms, and Lebergott suspects that large firms were more adept at selectively laying off lower-productivity labor; more willing to deskill, that is, reassign able employees to less-skilled jobs; and more likely to give able employees longer work periods. A rough calculation suggests that accounting for these biases would produce an aggregate decline in nominal wages between 1929 and 1932 as much as 48 percent larger than that measured by the BLS series. Although the details of Lebergott's calculation are open to scrutiny, the research discussed elsewhere in the paper suggests that he is correct about the existence of biases in the BLS wage series. For much of the period since World War Two, most economists blamed persistent unemployment on wage rigidity. The demand for labor was a downward-sloping function of the real wage, but since nominal wages were insufficiently flexible downward, the labor market in the 1930s was persistently in disequilibrium. Labor supply exceeded labor demand, with mass unemployment the unfortunate consequence. Had wages been more flexible, this viewpoint holds, employment would have been restored and Depression averted. The frontal attack on the conventional wisdom came from Robert E. Lucas and Leonard Rapping. The original Lucas-Rapping set-up continued to view current labor demand as a negative function of the current real wage.
Current labor supply was a positive function of the real wage and the expected real interest rate, but a negative function of the expected future wage. If workers expect higher real wages in the future or a lower real interest rate, current labor supply would be depressed, employment would fall, unemployment rise, and real wages increase. Lucas and Rapping offer an unemployment equation, relating the unemployment rate to actual versus anticipated nominal wages, and actual versus anticipated price levels. Al Rees argued that the Lucas-Rapping model was unable to account for the persistence of high unemployment coincident with stable or rising real wages. Lucas and Rapping conceded defeat for the period 1933 to 1941, but claimed victory for 1929 to 1933. As Ben Bernanke pointed out, however, their victory rests largely on the belief that expected real interest rates fell between 1929 and 1933, while "ex post, real interest rates in 1930-33 were the highest of the century". Because nominal interest rates fell sharply between 1929 and 1933, whether expected real rates fell hinges on whether deflation, which turned out to be considerable, was unanticipated. Recent research by Steven Cecchetti suggests that the deflation was, at least in part, anticipated, which appears to undercut Lucas and Rapping's reply. In a controversial paper aimed at rehabilitating the Lucas-Rapping model, Michael Darby redefined the unemployment rate to exclude persons who held work relief jobs with the Works Progress and Work Projects Administrations (the WPA) or other federal and state agencies. The convention of the era, followed by Lebergott, was to count persons on work relief as unemployed. According to Darby, however, persons with work relief jobs were "employed" by the government: "From the Keynesian viewpoint, labor voluntarily employed on contracyclical ... government projects should certainly be counted as employed. On the search approach to unemployment, a person who accepts a job and withdraws voluntarily from the activity of search is clearly employed." The exclusion of persons on work relief drastically lowers the aggregate unemployment rate after 1935. In addition to modifying the definition of unemployment, Darby also redefined the real wage to be the average annual earnings of full-time employees in all industries. With these changes, the fit of the Lucas-Rapping unemployment equation is improved, even for 1934 to 1941. However, Jonathan Kesselman and N. E. Savin later showed that the improved fit was largely the consequence of Darby's modified real wage series, not the revised unemployment rate. Thus, for the purpose of empirically testing the Lucas-Rapping model, the classification of WPA workers as employed or unemployed is not crucial. Returning to the questions posed above, New Deal legislation has frequently been blamed for the persistence of high unemployment and the perverse behavior of real wages. In this regard, perhaps the most important piece of legislation was the National Industrial Recovery Act (NIRA) of 1933. The National Recovery Administration (NRA), created by the NIRA, established guidelines that raised nominal wages and prices, and encouraged higher levels of employment through reductions in the length of the workweek (worksharing). An influential study by Michael Weinstein econometrically analyzed the impact of the NIRA on wages.
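The unemployment equation mentioned in the Lucas-Rapping discussion above can be written schematically as follows. This is a stylized rendering of the relationship as described in the text (unemployment as a function of actual versus anticipated nominal wages and prices), with hypothetical coefficients, not the authors' exact published specification.

```latex
u_t = \beta_0
    + \beta_1\left(\ln W_t - \ln W_t^{*}\right)
    + \beta_2\left(\ln P_t - \ln P_t^{*}\right)
    + \varepsilon_t
```

Here u_t is the unemployment rate, W_t and P_t are the actual nominal wage and price level, the starred terms are their anticipated counterparts, and epsilon_t is an error term. On the intertemporal-substitution reading summarized above, unemployment rises when actual wages and prices fall short of their anticipated levels, because workers treat the shortfall as temporary and withhold labor.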
Using aggregate monthly data on hourly earnings in manufacturing, Weinstein showed that the NIRA raised nominal wages directly through its wage codes and indirectly by raising prices. The total impact was such that "[i]n the absence of the NIRA, average hourly earnings in manufacturing would have been less than thirty-five cents by May 1935 instead of its actual level of almost sixty cents (assuming unemployment to have been unaltered)". It is questionable, however, whether the NIRA really had this large an impact on wages. Weinstein measured the direct effect of the codes by comparing monthly wage changes during the NIRA period (1933-35) with wage changes during the recovery phase (1921-23) of the post-World War One recession (1920-21), holding constant the level of unemployment and changes in wholesale prices. Data from the intervening years (1924-1932) or after the NIRA period were excluded from his regression analysis (p. 52). In addition, Weinstein's regression specification precludes the possibility that reductions in weekly hours (worksharing), some of which occurred independently of the NIRA, had a positive effect on hourly earnings. A recent paper using data from the full sample period and allowing for the effect of worksharing found a positive but much smaller impact of the NIRA on wages (see the discussion of Bernanke's work later in the paper). Various developments in neo-Keynesian macroeconomics have recently filtered into the discussion. Martin Baily emphasizes the role of implicit contracts in the context of various legal and institutional changes during the 1930s. Firms did not aggressively cut wages when unemployment was high early in the 1930s because such a policy would hurt worker morale and the firm's reputation, incentives that were later reinforced by New Deal legislation. Efficiency wages have been invoked in a provocative article by Richard Jensen. Beginning sometime after the turn of the century, large firms slowly began to adopt bureaucratic methods of labor relations. Policies were "designed to identify and keep the more efficient workers, and to encourage other workers to emulate them." Efficiency wages were one such device, which presumably contributed to stickiness in wages. The trend towards bureaucratic methods accelerated in the 1930s. According to Jensen, firms surviving the initial downturn used the opportunity to lay off their least productive workers, but a portion of the initial decline in employment occurred among firms that went out of business. Thus, when expansion occurred, firms had their pick of workers who had been laid off. Personnel departments used past wage histories as a signal, and higher-wage workers were a better risk. Those with few occupational skills, the elderly (who were expensive to retrain) and the poorly educated faced enormous difficulties in finding work. After 1935 the "reserve army" of long-term unemployed did not exert much downward pressure on nominal wages because employers simply did not view the long-term unemployed as substitutes for the employed at virtually any wage. A novel feature of Jensen's argument is its integration of microeconomic evidence on the characteristics of the unemployed with macroeconomic evidence on wage rigidity. Other circumstantial evidence is in its favor, too. Productivity growth was surprisingly strong after 1932, despite severe weakness in capital investment and a slowdown in innovative activity.
The rhetoric of the era, that "higher wages and better treatment of labor would improve labor productivity", may be the correct explanation. If the reserve army hypothesis were true, the wages of unskilled workers, who were disproportionately unemployed, should have fallen relative to the wages of skilled and educated workers, but there is no indication that wage differentials were wider overall in the 1930s than in the 1920s. It remains an open question, however, whether the use of efficiency wages was as widespread as Jensen alleges, and whether efficiency wages can account empirically for the evolution of productivity growth in the 1930s.

In brief, the macro studies have not settled the debate over the proper interpretation of the aggregate statistics. This state of affairs has much to do with the (supreme) difficulty of building a consensus macro model of the depression economy. But it is also a consequence of the level of aggregation at which empirical work has been conducted. The problem is partly one of sample size, and partly a reflection of the inadequacies of discussing these issues using the paradigm of a representative agent. This being the case, I turn next to disaggregated studies of employment and unemployment.

In a conventional short-run aggregate production function, the labor input is defined to be total person-hours. For the postwar period, temporal variation in person-hours is overwhelmingly due to fluctuations in employment. However, for the interwar period, variations in the length of the workweek account for nearly half of the monthly variance in the labor input. Declines in weekly hours were deep, prolonged, and widespread in the 1930s. The behavior of real hourly earnings, however, may not have been independent of changes in weekly hours. This insight motivates Ben Bernanke's analysis of employment, hours, and earnings in eight pre-World War Two manufacturing industries. The (industry-specific) supply of labor is described by an earnings function, which gives the minimum weekly earnings required for a worker to supply a given number of hours per week. In Bernanke's formulation, the earnings function is convex in hours and also discontinuous at zero hours (the discontinuity reflects fixed costs of working or switching industries); a stylized version is sketched below. Production depends separately on the number of workers and weekly hours, and on nonlabor inputs. Firms are not indifferent "between receiving one hour of work from eight different workers and receiving eight hours from one worker." A reduction in product demand causes the firm to cut back employment and hours per week. The reduction in hours means more leisure for workers, but less pay per week. Eventually, as weekly hours are reduced beyond a certain point, hourly earnings rise. Further reductions in hours cannot be matched one for one by reductions in weekly earnings. But, when hourly earnings increase, the real wage then appears to be countercyclical. To test the model, Bernanke uses monthly, industry-level data compiled by the National Industrial Conference Board covering the period 1923 to 1939. The specification of the earnings function (describing the supply of labor) incorporates a partial adjustment of wages to prices, while the labor demand equation incorporates partial adjustment of current demand to desired demand.
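The earnings function just described can be sketched as follows (an illustrative reconstruction from the verbal account above, not Bernanke's exact functional form; the notation is mine):

    E(0) = 0;   E(h) = F + c(h)  for h > 0,  with c'(h) > 0 and c''(h) > 0

where E(h) is the minimum weekly earnings needed to elicit h hours per week from a worker, F captures the fixed costs of working or of switching industries (the source of the discontinuity at zero hours), and the convexity of c means that once weekly hours have been cut far enough, further cuts in hours cannot be matched one for one by cuts in weekly pay, so implied hourly earnings E(h)/h begin to rise even as weekly earnings fall.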
Except in one industry (leather), the industry demand for workers falls as real product wages rise; industry demands for weekly hours fall as the marginal cost to the firm of varying weekly hours rises; and industry labor supply is a positive function of weekly earnings and weekly hours. The model is used to argue that the NIRA lowered weekly hours and raised weekly earnings and employment, although the effects were modest. In six of the industries (the exceptions were shoes and lumber), increased union influence after 1935 (measured with a proxy variable of days idled by strikes) raised weekly earnings by 10 percent or more. Simulations revealed that allowing for full adjustment of nominal wages to prices resulted in a poor description of the behavior of real wages, but no deterioration in the model's ability to explain employment and hours variation. Whatever the importance of sticky nominal wages in explaining real wage behavior, the phenomenon "may not have had great allocative significance" for employment.

In a related paper, Bernanke and Martin Parkinson use an expanded version of the NICB data set to explore the possibility that "short-run increasing returns to labor", or procyclical labor productivity, characterized co-movements in output and employment in the 1930s. Using their expanded data set, Bernanke and Parkinson estimate regressions of the change in output on the change in labor input, now defined to be total person-hours (a stylized version of this regression is written out below). The coefficient on the change in the labor input is the key parameter; if it exceeds unity, then short-run increasing returns to labor are present. Bernanke and Parkinson find that short-run increasing returns to labor characterized all but two of the industries under study (petroleum and leather). The estimates of the labor coefficient are essentially unchanged if the sample is restricted to just the 1930s. Further, a high degree of correlation (r = 0.9) appears between interwar and postwar estimates of short-run increasing returns to labor for a matched sample of industries. Thus, the procyclical nature of labor productivity appears to be an accepted fact for both the interwar and postwar periods.

One explanation of procyclical productivity, favored by real business cycle theorists, emphasizes technology shocks. Booms are periods in which technological change is unusually brisk, and labor supply increases to take advantage of the higher wages induced by temporary gains in productivity (caused by the outward shift in production functions). In Bernanke and Parkinson's view, however, the high correlation between the pre- and post-war estimates of short-run increasing returns to labor poses a serious problem for the technological shocks explanation. The high correlation implies that the "real shocks hitting individual industrial production functions in the interwar period accounted for about the same percentage of employment variation in each industry as genuine technological shocks hitting industrial production functions in the post-war period". However, technological change per se during the Depression was concentrated in a few industries and was modest overall. Further, while real shocks (for example, bank failures, the New Deal, international political instability) occurred, their effects on employment were felt through shifts in aggregate demand, not through shifts in industry production functions.
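The Bernanke-Parkinson regression described above can be summarized schematically (the notation is mine, and the sketch omits whatever additional controls and estimation details appear in their paper):

    \Delta \ln Q_{it} = \alpha_i + \beta_i \, \Delta \ln L_{it} + \varepsilon_{it}

where Q_{it} is industry i's output in month t and L_{it} is its labor input measured as total person-hours. An estimated \beta_i greater than one is the signature of short-run increasing returns to labor, and the r = 0.9 correlation cited above is between the interwar and postwar estimates of \beta_i for the matched industries.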
Other leading explanations of procyclical productivity are true increasing returns or, popular among Keynesians, the theory of labor hoarding during economic downturns. Having ruled out technology shocks, Bernanke and Parkinson attempt to distinguish between true increasing returns and labor hoarding. They devise two tests, both of which involve restrictions excluding proxies for labor utilization from their regressions of industry output. If true increasing returns were present, the observed labor input captures all the relevant information about variations in output over the cycle. But if labor hoarding were occurring, the rate of labor utilization, holding employment constant, should account for output variation. Their results are mixed, but are mildly in favor of labor hoarding.

Although Bernanke's modeling effort is of independent interest, the substantive value of his and Parkinson's research is enhanced considerably by disaggregation to the industry level. It is obvious from their work that industries in the 1930s did not respond identically to decreases in output demand. However, further disaggregation to the firm level can produce additional insights. Bernanke and Parkinson assume that movements in industry aggregates reflect the behavior of a representative firm. But, according to Lebergott (1989), much of the initial decline in output and employment occurred among firms that exited. Firms that left, and new entrants, however, were not identical to firms that survived. These points are well illustrated in Timothy Bresnahan and Daniel Raff's study of the American motor vehicle industry. Their database consists of manuscript census returns of motor vehicle plants in 1929, 1931, 1933, and 1935. By linking the manuscript returns from year to year, Bresnahan and Raff have created a panel dataset capable of identifying plants that exited, surviving plants, and new plants.

Plants that exited between 1929 and 1933 had lower wages and lower labor productivity than plants that survived. Between 1933 and 1935 average wages at exiting plants and new plants were slightly higher than at surviving plants. Output per worker was still relatively greater at surviving plants than at new entrants, but the gap was smaller than between 1929 and 1933. Roughly a third of the decline in the industry's employment between 1929 and the trough in 1933 occurred in plant closures. The vast majority of these plant closures were permanent. The shakeout of inefficient firms after 1929 ameliorated the decline in average labor productivity in the industry. Although industry productivity did decline, productivity in 1933 would have been still lower if all plants had continued to operate. During the initial recovery phase (1933-35) about 40 percent of the increase in employment occurred in new plants. Surviving plants were more likely to use mass-production techniques; the same was true of new entrants. Mass-production plants differed sharply from their predecessors (custom production plants) in the skill mix of their workforces and in labor relations. In the motor vehicle industry, the early years of the Depression were an "evolutionary event", permanently altering the technology of the representative firm. While the representative firm paradigm apparently fails for motor vehicles, it may not for other industries. Some preliminary work by Amy Bertin, Bresnahan, and Raff on another industry, blast furnaces, is revealing on this point.
Blast furnaces were subject to increasing returns and the market for the product (molten iron) was highly localized. For this industry, reductions in output during a cyclical trough are reasonably described by a representative firm, since "localized competition prevented efficient reallocation of output across plants" and therefore the compositional effects occurring in the auto industry did not happen. These analyses of firm-level data have two important implications for studies of employment in the 1930s. First, aggregate demand shocks could very well have changed average technological practice through the process of exit and entry at the firm level. Thus Bernanke and Parkinson's rejection of the technological shocks explanation of short-run increasing returns, which is based in part on their belief that aggregate demand shocks did not alter industry production functions, may be premature. Second, the empirical adequacy of the representative firm paradigm is apparently industry-specific, depending on industry structure, the nature of product demand, and initial (that is, pre-Depression) heterogeneity in firm sizes and costs. Such "phenomena are invisible in industry data," and can only be recovered from firm-level records, such as the census manuscripts.

Analyses of industry and firm-level data are one way to explore heterogeneity in labor utilization. Geography is another. A focus on national or even industry aggregates obscures the substantial spatial variation in bust and recovery that characterized the 1930s. Two recent studies show how spatial variation suggests new puzzles about the persistence of the Depression as well as providing additional degrees of freedom for discriminating between macroeconomic models.

State-level variation in employment is the subject of an important article by John Wallis. Using data collected by the Bureau of Labor Statistics, Wallis has constructed annual indices of manufacturing and nonmanufacturing employment for states from 1930 to 1940. Wallis' indices reveal that declines in employment between 1930 and 1933 were steepest in the East North Central and Mountain states; employment actually rose in the South Atlantic states, however, once an adjustment is made for industry mix. The South also did comparatively well during the recovery phase of the Depression (1933-1940). Wallis tests whether the southern advantage during the recovery phase might reflect lower levels of unionization and a lower proportion of employment affected by the passage of the Social Security Act (1935), but controlling for percent unionized and percent in covered employment in a regression of employment growth does not eliminate the regional gap. "What comes through clearly," according to Wallis, "is that the [employment] effects of the Depression varied considerably throughout the nation," and a convincing explanation of the South/non-South difference remains an open question.

Curtis Simon and Clark Nardinelli exploit variation across cities to put forth a particular interpretation of the economic downturn in the early 1930s. Specifically, they study the empirical relationship between "industrial diversity" and city-level unemployment rates before and after World War Two. Industrial diversity is measured by a city-specific Herfindahl index of industry employment shares. The higher the value of the index, the greater is the concentration of employment in a small number of industries.
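For concreteness, the Herfindahl index just mentioned takes the standard textbook form (stated here because the sign convention matters for the results that follow):

    H_c = \sum_i s_{ic}^{2}

where s_{ic} is industry i's share of city c's employment. A city with all of its employment in a single industry has H_c = 1, while a city whose employment is spread evenly across n industries has H_c = 1/n, so higher values of the index mean more concentrated, and therefore less diverse, employment.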
Using data from the 1930 federal census and the 1931 Special Census of Unemployment, Simon and Nardinelli show that unemployment rates and the industrial diversity index were positively correlated across cities at the beginning of the Depression. Analysis of similar census data for the post-World War Two period reveals a negative correlation between city unemployment rates and industrial diversity. Simon and Nardinelli explain this finding as the outcome of two competing effects. In normal economic circumstances, a city with a more diverse range of industries should have a lower unemployment rate (the "portfolio" effect), because industry-specific demand shocks will not be perfectly correlated across industries and some laid-off workers will find ready employment in expanding industries. The portfolio effect may fail, however, during a large aggregate demand shock (the early 1930s) if firms and workers are poorly informed, misperceiving the shock to be industry-specific rather than a general reduction in demand. Firms in industrially diverse cities announce selective layoffs rather than reduce wages, because they believe that across-the-board wage cuts would cause too many workers to quit (workers in industrially diverse cities think they can easily find a job in another industry elsewhere in the same city), thus hurting production. Firms in industrially specialized cities, however, are more likely to cut wages than employment because they believe lower wages "would induce relatively fewer quits" than in industrially diverse cities. Thus, Simon and Nardinelli conclude, wages in the early 1930s were more rigid in industrially diverse cities, producing the positive correlation between industrial diversity and unemployment. Improvements in the quantity, quality, and timeliness of economic information, they conjecture, have caused the portfolio effect to dominate after World War Two, producing the postwar negative correlation. Although one can question the historical relevance of Simon and Nardinelli's model, and the specifics of their empirical analysis, their paper is successful in demonstrating the potential value of spatial data in unraveling the sources of the economic downturn early in the Depression.

Postwar macroeconomics has tended to proceed as if aggregate unemployment rates applied to a representative worker, with a certain percentage of that worker's time not being used. As a result, disaggregated evidence on unemployment has been slighted. Such evidence, however, can provide a richer picture of who was unemployed in the 1930s, a better understanding of the relationship between unemployment and work relief, and further insights into macroeconomic explanations of unemployment. To date, the source that has received the most attention is the public use tape of the 1940 census, a large, random sample of the population in 1940. The 1940 census is a remarkable historical document. It was the first American census to inquire about educational attainment, wage and salary income, and weeks worked in the previous year; and the first to use the "labor force week" concept in soliciting information about labor force status. Eight labor force categories are reported, including whether persons held work relief jobs during the census week (March 24-30, 1940).
For persons who were unemployed or who held a work relief job at the time of the census, the number of weeks of unemployment since the person last held a private or nonemergency government job of one month or longer was recorded. The questions on weeks worked and earnings in 1939 did not treat work relief jobs differently from other jobs. That is, earnings from, and time spent on, work relief are included in the totals. I have used the 1940 census sample to study the characteristics of unemployed workers and of persons on work relief, and the relationship between work relief and various aspects of unemployment.

It is clear from the census tape that unemployed persons who were not on work relief were far from a random sample of the labor force. For example, the unemployed were typically younger, or older, than the average employed worker (unemployment followed a U-shaped pattern with respect to age); the unemployed were more often nonwhite; and they were less educated and had fewer skills than employed persons, as measured by occupation. Such differences tended to be starkest for the long-term unemployed (those with unemployment durations longer than a year); thus, for example, the long-term unemployed had even less schooling than the average unemployed worker. Although the WPA drew its workers from the ranks of the unemployed, the characteristics of WPA workers did not merely replicate those of other unemployed persons. For example, single men, the foreign-born, high school graduates, urban residents, and persons living in the Northeast were underrepresented among WPA workers, compared with the rest of the unemployed.

Perhaps the most salient difference, however, concerns the duration of unemployment. Among those on work relief in 1940, roughly twice as many had been without a non-relief job for a year or longer as had unemployed persons not on work relief. The fact that the long-term unemployed were concentrated disproportionately on work relief raises an obvious question. Did the long-term unemployed find work relief jobs after being unemployed for a long time, or did they remain with the WPA for a long time? The answer appears to be mostly the latter. Among nonfarm males ages 14 to 64 on work relief in March 1940 and reporting 65 weeks of unemployment (that is, the first quarter of 1940 and all of 1939), close to half worked 39 weeks or more in 1939. Given the census conventions, they had to have been working more or less full time for the WPA (the arithmetic is spelled out below). For reasons that are not fully clear, the incentives were such that a significant fraction of persons who got on work relief stayed on. One possible explanation is that some persons on work relief preferred the WPA, given prevailing wages, perhaps because their relief jobs were more stable than the non-relief jobs (if any) available to them. Or, as one WPA worker put it: "Why do we want to hold onto these [relief] jobs? ... [W]e know all the time about persons ... just managing to scrape along ... My advice, Buddy, is better not take too much of a chance. Know a good thing when you got it." Alternatively, working for the WPA may have stigmatized individuals, making them less desirable to non-relief employers the longer they stayed on work relief. Whatever the explanation, the continuous nature of WPA employment makes it difficult to believe that the WPA did not reduce, in the aggregate, the amount of job search by the unemployed in the late 1930s.
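The arithmetic behind that inference is simple (a back-of-the-envelope reading of the figures reported above): 65 weeks counted back from the census week covers roughly the 13 weeks of the first quarter of 1940 plus all 52 weeks of 1939 (13 + 52 = 65), so a respondent in this group held no private or nonemergency government job at any point in 1939. If the same respondent nonetheless reported 39 or more weeks worked in 1939, those weeks, about three-quarters of the year (39/52 is roughly 0.75), can only have been weeks on work relief, which is why such men must have been working more or less full time for the WPA.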
In addition to the duration of unemployment experienced by individuals, the availability of work relief may have dampened the increase in labor supply of secondary workers in households in which the household head was unemployed, the so-called "added worker" effect. Specifically, wives of unemployed men not on work relief were much more likely to participate in the labor force than wives of men who were employed at non-relief jobs. But wives of men who worked for the WPA were far less likely to participate in the labor force than wives of otherwise employed men. The relative impacts were such that, in the aggregate, no added worker effect can be observed as long as persons on work relief are counted among the unemployed.

Although my primary goal in analyzing the 1940 census sample was to illuminate features of unemployment obscured by the aggregate time series, the results bear on several macroeconomic issues. First, the heterogeneous nature of unemployment implies that a representative agent view of aggregate unemployment cannot be maintained for the late 1930s. Whether the view can be maintained for the earlier part of the Depression is not certain, but the evidence presented in Jensen's work and my own suggests that it cannot. Because the evolution of the characteristics of the unemployed over the 1930s bears on the plausibility of various macroeconomic explanations of unemployment (Jensen's use of efficiency wage theory, for example), further research is clearly desirable. Second, the heterogeneous nature of unemployment is consistent with Lebergott's claim that aggregate BLS wage series for the 1930s are contaminated by selection bias, because the characteristics that affected the likelihood of being employed (for example, education) also affected a person's wage. Again, a clearer understanding of the magnitude and direction of bias requires further work on how the characteristics of the employed and unemployed changed as the Depression progressed. Third, macroeconomic analyses of the persistence of high unemployment should not ignore the effects of the WPA — and, more generally, those of other federal relief policies — on the economic behavior of the unemployed. In particular, if work relief was preferred to job search by some unemployed workers, the WPA may have displaced some growth in private sector employment that would have occurred in its absence. An estimate of the size of this displacement effect can be inferred from a recent paper by John Wallis and Daniel Benjamin. Wallis and Benjamin estimate a model of labor supply, labor demand, and per capita relief budgets using panel data for states from 1933 to 1939. Their coefficients imply that elimination of the WPA starting in 1937 would have increased private sector employment by 2.9 percent by 1940, which corresponds to about 49 percent of persons on work relief in that year. Displacement was not one-for-one, but may not have been negligible.

My discussion thus far has emphasized the value of disaggregated evidence in understanding certain key features of labor markets in the 1930s — the behavior of wages, employment and unemployment — because these are of greatest general interest to economists today. I would be remiss, however, if I did not mention other aspects of labor markets examined in recent work. What follows is a brief, personal selection from a much larger literature.
The Great Depression left its mark on racial and gender differences. From 1890 to 1930 the incomes of black men increased slightly relative to the incomes of white men, but the trend in relative incomes reversed direction in the 1930s. Migration to the North, a major avenue of economic advancement for Southern blacks, slowed appreciably. There is little doubt that, if the Depression had not happened, the relative economic status of blacks would have been higher on the eve of World War Two. Labor force participation by married women was hampered by "marriage bars", implicit or explicit regulations which allowed firms to dismiss single women upon marriage or which prohibited the hiring of married women. Although marriage bars existed before the 1930s, their use spread during the Depression, possibly because social norms dictated that married men were more deserving of scarce jobs than married women.

Although they have not received as much attention from economists, some of the more interesting effects of the Depression were demographic or life-cycle in nature. Marriage rates fell sharply in the early 1930s, and fertility rates remained low throughout the decade. An influential study by the sociologist Glen Elder, Jr. traced the subsequent work and life histories of a sample of individuals growing up in Oakland, California in the 1930s. Children from working-class households whose parents suffered from prolonged unemployment during the Depression had lower educational attainment and less occupational mobility than their peers who were not so deprived. Similar findings were reported by Stephan Thernstrom in his study of the occupational mobility of Boston men.

The Great Depression was the premier macroeconomic event of the twentieth century, and I am not suggesting we abandon macroeconomic analysis of it. I am suggesting, however, that an exclusive focus on aggregate labor statistics runs two risks: the facts derived may be artifacts, and much of what may be interesting about labor market behavior in the 1930s is rendered invisible. The people and firms whose experiences make up the aggregates deserve to be studied in their diversity, not as representative agents. I have mentioned census microdata, such as the public use sample of the 1940 census or the manufacturing census manuscripts collected by Bresnahan and Raff, in this survey. In closing, I would like to highlight another source that could be examined in future work. The source is the "Study of Consumer Purchases in the United States" conducted by the BLS in 1935-36. Approximately 300,000 households, chosen from a larger random sample of 700,000, supplied basic survey data on income and housing, with 20 percent furnishing additional information. The detail is staggering: labor supply and income of all family members, from all sources (on a quarterly basis); personal characteristics (for example, occupation, age, race); family composition; housing characteristics; and a long list of durable and non-durable consumption expenditures (the 20 percent sample). Because the purpose of the study was to provide budget weights to update the CPI, only families in "normal" economic circumstances were included (this is the basis for the reduction in sample size from 700,000 to 300,000). Thus, for example, persons whose wages were very low or who experienced persistent unemployment are unlikely to be included in the 1935-36 study.
A pilot sample, drawn from the original survey forms (stored at the National Archives) and containing the responses of 6,000 urban households, is available in machine-readable format from the Inter-University Consortium for Political and Social Research at the University of Michigan (ICPSR Study 8908).

Robert A. Margo, Vanderbilt University

Tuesday, October 22, 2019

Through a close analysis of The Crying Game, Essay Example

[...] 'woman', 'masculinity' and 'femininity'. The following analysis seeks to demonstrate how Butler's ideas pervade Jordan's film, which is – it should be noted – a much more complex film than a mere study of gender issues. First, however, a definition of the 'performativity' of gender must be attempted so as to establish a conceptual framework for the remainder of the discussion.

Judith Butler's theory of gender should be interpreted within the broader social and political context of feminist theory that came in two distinct 'waves' during the 1960s and the 1970s. After securing the necessary political achievements gained by the advances of the first wave, the second, more radicalised wave of feminism sought to challenge historical notions of man and woman in western society, "which maintains male dominance by co-opting women and suppressing the feminine. These arguments link dominant western forms of rationality with male power and control over women and nature, which is associated with violence, oppression and destruction." [1] Therefore, while Butler's views are undoubtedly radical, they should also be read within this dominant feminist climate of deep-seated change that characterised the second half of the twentieth century in the West, which sought to deliberately create divisions between heterosexual men and heterosexual women in order to further the feminist cause. This is also the reason behind the alliance between radical feminism and the gay and lesbian communities, which was forged at this time and which is directly relevant to the performativity of gender as seen in The Crying Game.

Butler's views depart from the feminist norm with regard to the way in which she formulates the idea of having to 'perform' the parts of man and woman in contemporary society. In this sense, she sees both masculinity and femininity as being manufactured by culture, and she plants the idea that if this culture were structured along less visibly male-female lines, then the two genders would behave in a discernibly different manner. This is the idea which is used in The Crying Game, to which attention must now be turned.

The Crying Game is a film that is as much about the Troubles and the IRA as it is a film about transgender issues. The plot concerns the nucleus of a small group of Irish terrorists who kidnap a British soldier (Forest Whitaker) for the purpose of exchanging him in order to secure the release of imprisoned IRA operatives in UK jails. The group is led by Maguire (Adrian Dunbar) and also contains Jude (Miranda Richardson) and Fergus (Stephen Rea). It is the character of Fergus who becomes the chief focus of the film as first he finds himself unable to kill the British soldier, Jody, and later he sets out to discover the dead man's lover, Dil (Jaye Davidson), to whom he finds himself instantly attracted. This burgeoning relationship between Fergus and Dil is fraught with tension, as Fergus feels tortured by guilt over the death of Jody (although Fergus lets him go, the soldier is still accidentally killed by a British armoured vehicle).
This tension is an indispensable cinematic precursor to the film's central plot twist, which comes as a major surprise to the viewing audience. Before moving towards a critical appraisal of the revelation that occurs within the relationship of Dil and Fergus, mention must be made of the way in which Neil Jordan manages to exploit traditional notions of woman in film. By picking an androgynous-looking actor to play Dil, the director tricks the audience into believing that a traditional heterosexual relationship between a man and a woman is about to take place – a relationship rendered tragic by the loss both characters have already suffered. This coupling, in film history, has normally seen the man seducing the woman, who acts as the aesthetically beautiful centrepiece of the action. "In the synthetic brothel of the cinema, where the merchandise may be eyed endlessly but never purchased, the tension between the beauty of the woman, which is admirable, and the denial of the sexuality which is the source of that beauty but is also immoral, reaches a perfect impasse." [2] Therefore, when it slowly transpires that Dil is not yet another example of cinematic female beauty but is in fact a man, the sense of shock is all the more marked.

As with Butler's thought on the performativity of gender, Jordan stops short of stating this development as a fact; instead, it is left open to conjecture as a philosophical question: does Dil's biology mean that he is a man no matter what, or does the fact that he has assumed a female role mean that he has transgressed the gender divide to become a woman in the cultural sense? This is a central line of enquiry in radical feminist ideology and one that has no direct answer. For instance, although traditionalists would argue that no one can ever reverse the gender of their birth, progressives would likewise state that gender is a construct of society and that both males and females should be freely able to choose not only their gender but also their sexuality. This is a direct descendant of Judith Butler's Gender Trouble, where the author argues the case that men and women both perform the roles of masculine and feminine without ever questioning their validity in this way. "Gender is ... a construction that regularly conceals its genesis; the tacit collective agreement to perform, produce and sustain discrete and polar genders as cultural fictions is obscured by the credibility of those productions – and the punishments that attend not agreeing to believe in them." [3]

Fergus' response to the realisation that Dil is a cross-dresser is typically male and typical of society's general horror at such transgressions of gender and sexuality. His first response is to punch Dil in the face and retract his previous statements of affection. He exits the scene, leaving Dil lying bloodied on the floor. Fergus' disgust is mirrored in the shock felt by the contemporary cinema audience, which was manifested in mass protests from Christian and conservative communities when the film was released both in the UK and abroad. The director makes sure not to over- or under-dramatise the revelation of Dil's transgression of gender, preferring instead to let the remainder of the plot play out against the background of the shock of the ongoing relationship between the two main characters.
With the spectre of the IRA unexpectedly re-appearing towards the end of the film, the audience is transported away from the notion of the performativity of gender to see how Fergus is able to rise above his initial feeling of disgust to save Dil from prison after the shooting of Fergus' old comrade, Jude. Interestingly, Dil is compelled to murder Jude when it transpires that she had enjoyed a sexual relationship with Jody while the soldier was in her captivity. Therefore, there is no doubt that – after all that has transpired – Dil still identifies herself as a woman and is directly challenged by the more obviously feminine Jude.

At this point, mention must be made of the difference between Butler's notion of the performativity of gender and the sort of transgender constructs encapsulated in drag and cross-dressing. "In the majority of the works that have followed in Butler's wake, drag (as the parodic enactment of gender) is represented as something one can choose to do: the implication is that one can be whatever type of gender one wants to be, and can perform gender in whatever way one fancies. This is what you might call a voluntarist model of identity because it assumes that it is possible to freely and consciously create one's own identity. Whilst in many ways this voluntarist account of gender performance is in direct contrast with Butler's notion of performativity, it is also, at least in part, a consequence of the ambiguity of Butler's own account of the distinction between performance and performativity in Gender Trouble." [4] Appropriately, Neil Jordan never alludes to whether or not Dil is voluntarily transgressing gender or whether it is a biological necessity for man to have morphed into woman. This mirrors Butler's ambiguity and the ambiguity that pervades every facet of the notion of crossing gender, which is one of the more intellectually challenging concepts for any society to grapple with.

Ultimately, though, The Crying Game ends with a hint of the director's views on the subject. During the final scene, which is set years later, Dil asks Fergus why he took the blame for her. Recalling an earlier scene, Fergus answers, "It's in my nature." This implies that there is no choice with regard to gender, sexuality and performance. We are what we are.

Conclusion

The Crying Game is a challenging film that operates on a variety of levels. Politics, race and gender are all subject to scrutiny without being dealt with in a moralistic manner. Judith Butler's notion concerning the performativity of gender is likewise a multifaceted study that has greatly influenced feminist ideology and has clearly infiltrated the mind of director Neil Jordan. In the final analysis, there can be no doubt that there is a strong link between the two, without any simple, broad-based conclusion being put forward by either party. In both cases, it is left up to the reader and viewer to make up their minds concerning gender and the wider issue of whether it is nature that constructs our sexual being or whether it is cultural nurturing that subconsciously encourages us to play the roles of heterosexual men and women.
This is a difficult balancing act to maintain, yet it is also ultimately a sensible one, as both The Crying Game and Gender Trouble arrive at the view that there can be no single deduction that manages to satisfy everyone. The conclusion, like the choice of gender and sexuality, must in the end be entirely subjective.

Bibliography

Butler, J. (1990) Gender Trouble: Feminism and the Subversion of Identity. London: Routledge.

Carter, A. (1978) The Sadeian Woman and the Ideology of Pornography. New York: Harper & Row.

Featherstone, M. (ed.) (2000) Body Modification. London: Sage.

Shaviro, S. (1993) The Cinematic Body: Theory out of Bounds, Volume 2. Minneapolis and London: University of Minnesota Press.

Stallybrass, P. and White, A. (1986) The Politics and Poetics of Transgression. London: Routledge.

Sullivan, N. (2003) A Critical Introduction to Queer Theory. Edinburgh: Edinburgh University Press.

Weedon, C. (1987) Feminist Practice and Poststructuralist Theory. London and New York: Blackwell.

Films

The Crying Game (Neil Jordan; 1992)

Monday, October 21, 2019

What is the ACT A Complete Explanation of the Test

If you've found this article, you've probably vaguely heard of the ACT (and if you hadn't before, well, you have now!). Maybe you have some idea that it has something to do with college, but you're still pretty confused about what exactly it is. I'm here to help! The ACT, like the SAT, is a standardized test used for college admissions. If you're planning to apply to college in the US, you'll almost certainly have to take one of these tests (and you may need to even if you're planning on going to school elsewhere). This post will take you through everything you need to know about the ACT: from why students take it to what it covers to when you should plan to take it yourself.

Why Do People Take the ACT?

The ACT is a standardized test designed to show colleges how prepared you are for higher education by measuring your reading comprehension, knowledge of writing conventions, and computational skills, and then comparing you with the rest of the high schoolers who take it. It essentially serves as a nation-wide college admission test (though it's far from the only factor schools consider). Most four-year schools require applicants to submit either ACT or SAT scores (they don't distinguish between the two), which can then make up as much as 50% of the admission decision. A strong standardized test score is a key part of your application. There are also a lot of students who are required to take the ACT by their high school. A number of states use the ACT as a state-wide assessment test, so every junior at a public school takes the ACT.

Which Schools Accept the ACT?

There's a common misconception that some colleges only accept SAT scores and won't take ACT scores. This is not the case: all four-year colleges and universities in the US accept ACT scores, and the schools don't distinguish between the two tests. You can take whichever you prefer. However, there are a few schools, including George Washington University, Hampshire College, and California State University, that either don't require ACT or SAT scores or have flexible policies on standardized tests. If you're an international student looking to attend a U.S. school, you will need to take either the ACT or the SAT. If you're an American student planning to apply to international schools, you will probably still need to take one of these standardized tests, but it will depend on the school you're applying to and which country it's in. Two-year colleges and trade schools generally don't require applicants to take the ACT but will sometimes accept it in lieu of a placement test.

(Photo caption: MIT, one of the many colleges that requires an ACT score.)

What Does the ACT Cover?

The ACT consists of four sections (English, Math, Reading, and Science) plus an optional writing test. With the exception of the writing section, the test is entirely multiple choice: the math questions have five answer choices and the others all have four. The chart below shows the basic structure of the test (the sections are in the same order they appear on the test). For more details on what's actually on the ACT, you can follow the links to full breakdowns of each section.

Section   Questions      Time
English   75 questions   45 min
Math      60 questions   60 min
Reading   40 questions   35 min
Science   40 questions   35 min
Writing   1 prompt       40 min

How Is the ACT Scored?

ACT scores can feel arbitrary, so let's break down where that mysterious number between 1 and 36 actually comes from.
For each section of the ACT, you'll get a raw score, which is the number of questions you get right. That is then converted into a scaled score between 1 and 36. The composite score is simply the average of your four section scores (the writing is left out because it's optional); a small worked example of this calculation appears at the end of this post. In the US, the average score hovers around a 21, although there's some variance from year to year. Though it's easy to fixate on trying to get as high a score as possible, most students don't need a 36. Instead, you should determine what a good score is for the schools (and scholarships) you're planning to apply to.

When Should You Take the ACT?

When you take the ACT will depend on what kind of score you're looking for, when your application deadlines are, and whether you live in one of the states that require it. Generally speaking, however, the ideal time to take the ACT for the first time is the winter of your junior year, when you've covered most of the material in school but still have time to take it again if you want to.

We're just getting started! Time to make a plan.

Everything You Need to Plan for the ACT

Having read this post, you hopefully feel a bit clearer about what the ACT is. But the tricky part is still to come: preparing for the test. I've compiled a list of the key questions you should ask yourself as you begin to plan for college applications.

Should I take the ACT or SAT?

This question concerns a lot of students, but it's not as important as it seems, since most students don't see that much of a difference between their scores on the two tests. The New SAT is especially similar to the ACT. If you're not sure which test to take, you can use our fool-proof method to determine which test is better for you or try our quiz for deciding between the new SAT vs. the ACT. Also keep in mind that if you'll be taking the ACT in school anyway, it will be simpler to stick with that test, since you may have some prep lessons in class and it will save you money on registration.

What ACT score do I need to get into college?

Your score goal will depend on which schools you want to apply to. Use this form to calculate your ideal ACT score.

What's the best way for me to prepare for the ACT?

As you prepare for the ACT, you'll need to decide whether you want to hire a tutor or study on your own. You may also want to consider an online program like PrepScholar! If you decide to study on your own, make sure you get the best book for your needs.

What do I need to know to prepare for the ACT?

There are three key pieces to preparing for the ACT: understanding how the test works, reviewing the material, and practicing. To get a sense of how to think effectively about the ACT, download our guide to the 5 strategies that you must use. For specifics on content and question types, try our complete guides to each section of the test: English, math, reading, and science. You can find the best ACT practice tests here and an in-depth guide on how to use them here.
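Returning to the scoring section above, here is a minimal sketch of the composite calculation in Python. The section scores are invented for illustration, and the round-half-up step is an assumption about how averages ending in .5 are handled rather than something stated in this post.

    # Minimal sketch: ACT composite = average of the four scaled section scores,
    # rounded to the nearest whole number (the optional writing score is excluded).
    def act_composite(english, math, reading, science):
        average = (english + math + reading + science) / 4
        return int(average + 0.5)  # round halves up (assumed convention)

    # Hypothetical section scores: the average is 25.5, so the composite reports as 26.
    print(act_composite(24, 27, 25, 26))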