em mohamed @em_mohamed

Honorary member of Hawaa World

Question about the TSE-P


Assalamu alaikum,
Does anyone have any idea about the

Test of Spoken English for Professionals (TSE-P)?

What is its format, and what kind of questions does it have?

حلااااااا
Hello, sister.
I have no idea about it.
Forgive me for not being more help.

طمووحه
Sorry, it's the first time I've heard about it.

Maybe you can research it on the net;

insha'Allah, you'll find everything.
رغد الحياة
Honestly, I had no idea about it, but I did a search for you, and this is what I found:

TSE measures the ability of nonnative speakers to speak in an academic or professional environment. It provides accurate, valid, and reliable assessments for candidates for graduate assistantships, employment, or licensure and certification.
And here is a detailed article about this exam. Take your time with it; it looks packed with important information. I read parts of it and found it useful. God help you; go slowly and read it carefully:


Test of Spoken English (TSE)

Educational Testing Service. (2001).
Princeton, NJ: Author.

Reviewed by Karen Caldwell and Carolyn Samuel, OISE/UT

Introduction


The Test of Spoken English (TSE) is, in the words of Educational Testing Service (ETS), a test of speaking ability designed to evaluate the oral language proficiency of non-native speakers of English who are at or beyond the postsecondary level of education. ... used in conjunction with other measures, it can help provide an indication of the examinee's ability to successfully communicate in English in an academic or professional setting. (ETS, cited in Powers et al., 1999, p. 400)


The primary intended uses are screening graduate teaching assistants and certifying the speaking ability of medical and health care professionals (Powers, Schedl, Wilson Leung, & Butler, 1999). Its purposes are: (a) normative proficiency measurement, in that the examinee knows his/her standing in relation to others in terms of a percentile ranking (TSE 2000-2001 Information Bulletin, 2000), and (b) selection in that the institution is advised of the examinee's level of oral proficiency as represented by his/her test score. Test samples and information regarding score interpretation are published in the ETS Standard Setting Kit (1995).


Historical overview


In the early 1960s, development of the TOEFL arose from a perceived need to test the listening and reading comprehension, as well as writing ability, of foreign students wishing to study in the United States. The measurement of speaking ability would be left for later experimental work (Spolsky, 1995). At the same time, rapid growth in the computer industry (Spolsky, 1995) was leading native speakers to forego graduate studies in favour of high salaried positions in this field. This contributed to a shortage of graduate teaching assistants and the subsequent hiring of large numbers of foreign applicants (Spolsky, 1995). Many of these foreign graduate assistants lacked the ability to communicate effectively in English, and complaints by their students eventually ensued. These were relayed to ETS (Spolsky, 1995).

In the early 1970s, the Peace Corps recruited and trained ETS staff members, who had previously been trained by the Foreign Service Institute (FSI), to assist with the training of local volunteers (Clark & Clifford, 1988). Around that time, ETS also began experimental study of direct testing of speaking in collaboration with the Peace Corps (Spolsky, 1995).

By the mid-1970s, ETS became formally involved in studies to investigate the technical and psychometric properties of the oral interview. A 1976 study began with the assumption that face-to-face testing would be precluded by the cost and complexity of administration at the hundreds of worldwide TOEFL testing sites and therefore determined that the format would be an audio-recorded test supplemented by a printed test booklet (i.e., the semi-direct method) (Spolsky, 1995). In this study, trained raters interviewed 86 foreign students over three days. Audio-recorded long (20 minutes) and short (five minutes) interviews were conducted with each candidate by different raters who completed a written rating immediately afterward. Two weeks later, each of the interviewers re-rated the interviewees based on the recordings. It was found that later ratings of both the long and short interviews had a high correlation with the initial ratings (Spolsky, 1995).

Based on the findings of this study, namely that the semi-direct method was reliable, Clark and Swinton (cited in Spolsky, 1995) later produced and tried experimental forms of an English speaking test which would be the precursor to the TSE. The final form consisted of six sections: autobiographical warm-up, paragraph to be read aloud, sentence completion tasks, picture sequence description, multiple questions about a single picture, and interview type questions. Four scores were reported: overall comprehensibility, pronunciation, grammar, and fluency. Total test time was 15 minutes. The original items were selected from a variety of potential item types, largely on the basis of their correlations with the total score on the FSI interview (Powers et al., 1999).

As trials, revisions, and new test forms were administered (1979-1980), an increasing number of institutions became interested in the test. As a result, the TSE came to be offered 12 times per year instead of the initial five (Spolsky, 1995). Development concerns have centred on achieving maximum reliability through strict rater training and selection and on reducing costs. With respect to the latter, Bejar (1985, cited in Spolsky, 1995) conducted a study to 'provide information about the feasibility of reducing scoring costs by using one rater instead of the two that are now used for the TSE' (p. 321). It was concluded that, because standards differed among raters, using a single rater would reduce not only costs but also reliability. This option was therefore discarded.

Concern still existed regarding the ability of the TSE to measure communicative competence. The main pressure for modifications came not from the language testers but from state legislators whose children could not understand their foreign teaching assistants (Spolsky, 1995). Powers et al. (1999) explain that 'recently, the TSE was revised so that it would better reflect current views of language acquisition and testing, specifically modern notions about communicative competence' (p. 400). Sections of the test deemed least communicatively oriented were deleted, and the scoring system was altered such that communicative language ability was reported on a scale. This new scale, ETS claims, reflects 'effectiveness of communication resulting from a number of linguistic, socio-linguistic, discourse and strategic competencies' (Powers et al., 1999, p. 400). Thus, the underlying construct of the revised TSE, according to ETS, is the 'ability to accomplish specific language tasks comprehensibly, accurately, coherently, and appropriately with respect to specific interlocutor/audience, topic, and purpose' (ETS, cited in Powers et al., 1999, p. 400).


Description and interpretation of the TSE


There are 12 items on the TSE, with response times ranging from 30 to 90 seconds. Total test time is approximately 20 minutes. According to Lazaraton and Wagner (1996), the functions tested include giving descriptions, narrating, summarizing information, giving directions, giving instructions, giving and supporting opinions, comparing/contrasting, and hypothesizing. An 'interactive' function is also listed. The audio recording is assessed by two raters on a five-band scale ranging from 60 (communication almost always effective) down to 20 (no effective communication). (See the TOEFL Web site for rating scale details.) Scores can be given in increments of five points because they are averaged between two raters. The score correlates with a percentile ranking; for example, a score of 45 equals a percentile rank of 52. That means that the examinee has achieved a higher score on the TSE than approximately 52% of those who took the test between 1995 and 1999 (TSE 2000-2001 Information Bulletin, 2000).
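
As a rough illustration of the scoring arithmetic just described, here is a minimal Python sketch. The five bands and the 45-to-52nd-percentile pairing come from the figures cited above; the function itself is hypothetical and purely illustrative.

# Illustrative sketch of the reported-score arithmetic described above.
# The five bands (20-60) and the 45 -> 52nd-percentile example come from
# the TSE 2000-2001 Information Bulletin; the code itself is hypothetical.

BANDS = (20, 30, 40, 50, 60)  # five-band scale; 60 = almost always effective

def reported_score(rater_a: int, rater_b: int) -> int:
    """Average the two raters' band scores. Each band is a multiple of 10,
    so the mean always lands on a multiple of 5 (hence five-point increments)."""
    if rater_a not in BANDS or rater_b not in BANDS:
        raise ValueError("each rater must assign one of the five bands")
    return (rater_a + rater_b) // 2

# Example: one rater assigns 40 and the other 50; the reported score is 45,
# which the bulletin maps to the 52nd percentile.
assert reported_score(40, 50) == 45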

Institutions use the test for hiring or admission requirements due to its proven reliability. In fact, studies have shown consistent reliability of the TSE in terms of inter-, intra-, and naïve native speaker reliability, as well as high correlations with other measures of oral proficiency (e.g., Powers et al., 1999; Spolsky, 1995). The minimum qualifying score is set in consultation with ETS and relevant community members. We learned this from the College of Nurses of Ontario and the College of Physicians and Surgeons of Ontario, which currently require a score of 50. It is worth noting that the TSE is only one among a variety of measurements that such institutions use to assess oral proficiency.


Authenticity of target language use tasks


Though the TSE is widely used in health care and academic settings, there is a mismatch between the language used in these environments and the tasks on the TSE. This presents a problem with respect to authenticity and, therefore, validity. As Douglas (1997) explains, 'if there is to be congruence between elicitation of language performances and the interpretation of those performances, there must be congruence between the types of knowledge required by the test tasks and the types of knowledge demanded by the situation for which the test results are to be used' (p. 28).


It follows, then, that in order for such inferences to be made, a valid language test should consist of tasks whose distinguishing characteristics correspond to authentic target language (Bachman & Palmer, 1996). Van Lier (1989) lists three main preconditions for the display of oral proficiency: face-to-face interaction, decision-making opportunities, and goal-relatedness. The expense and complexity of administration of the TSE preclude the first precondition. The other two, however, are aspects of the target language use domain in which the test-takers are likely to need to use language and which, we believe, are not adequately represented in the TSE. Bachman and Palmer (1996) define a target use domain as a set of specific language use tasks that the test-taker is likely to encounter outside of the test itself and to which we want our inferences about language ability to generalize. At present, task items in the TSE do not reflect content related to either the health care profession or necessarily to teaching assistant environments.

An example from the TSE of an inauthentic and 'outside domain language' task requires the examinee to simulate a telephone call to a dry cleaner in order to persuade him/her to rush a cleaning job. In a study of native speaker responses to the TSE conducted by Lazaraton and Wagner (1996), subjects simulated the call by including a greeting, yet no formal closing such as 'good-bye.' The researchers felt that this was 'an indication that the authenticity of the task does not persist throughout the entire response' (Lazaraton & Wagner, 1996, p. 8). We wonder whether this breakdown is a result of the absence of an interlocutor (inauthentic situation) and not the examinee's inability to sustain authenticity, per se. If this were an authentic phone conversation, the examinee should then be expected to demonstrate this element by pausing to listen and by making responses such as 'uh-huh ... hmm' throughout the task.

In addition, listening ability on the TSE is not taken into account on the part of either the examinee or the audience. The absence of any such interaction brings into question the interactive function as listed by Lazaraton and Wagner (1996) in the content specifications for the revised TSE. In fact, all test task instructions are delivered orally in conjunction with a written description in the test booklet. The TSE does not incorporate negotiation of meaning, which, along with interactive communication, has a significant bearing on speaker behaviour. Comments on this topic from an examinee (tutored by one of the authors in preparation for the TSE) were that speaking into a tape recorder rather than to an active listener caused concentration difficulties for him. The omission of interaction in the TSE is a serious limitation to the test's ability to measure oral proficiency.


Sociocultural bias


In order to elicit optimal samples of language for assessment, every effort should be made to design tasks with which all examinees can identify so that they can make confident, representative responses. An examinee's lack of knowledge or awareness of a context should not preclude his/her demonstration of oral proficiency. One sample test item provided by ETS in the TSE 2000-2001 Information Bulletin (2000), which asks the examinee to give an opinion, highlights the issue of sociocultural bias: 'Many people enjoy visiting zoos and seeing the animals. Other people believe that animals should not be taken from their natural surroundings and put into zoos. I'd like to know what you think about this issue.' This task is problematic for several reasons. First, Douglas (1997) proposes that 'spoken academic discourse is marked by the relatively infrequent ... expression of personal attitudes and feeling' (p. 28). In the sample question, it is precisely such expression that is being elicited. Second, Lazaraton and Wagner's study (1996) found that, due to the ambiguity of the rubric, there was little consistency among native speakers: is the task to debate both sides or to argue one side? Third, while rewording the rubric could lead to a better understanding of the task itself, the context is limited to Western viewpoints of penning animals and could seriously inhibit the performance of non-native speakers unfamiliar with this issue.

Another example of sociocultural bias related to unfamiliar context is the task that asks examinees to give directions based on visual materials. Many foreign students are unfamiliar with the concept of maps indicating city blocks or grid-like plans of cities, since they come from cultures where directions are given in different ways. An examinee's unfamiliarity with the North American directions system puts him/her at an unnecessary disadvantage for completing the task. These examples illustrate that in addition to pure language ability, the examinees require supra-linguistic knowledge in order to complete these tasks.


Test preparation


Finally, there are concerns about the examinee's familiarity with the test format. Information regarding tasks, content, and strategies is readily available for test preparation. 'Teaching to the test' is thus a widespread phenomenon. One might argue that such test preparation falsely inflates skill levels, rendering the use and interpretation of the TSE of questionable validity. Others counter this argument by affirming that 'solid communication strategies can be taught and are useful for responding to tests and, more importantly, for real-life communication' (Papajohn, 2000, p. 7). Based on our teaching experience, we share Papajohn's opinion. Our students have been able to acquire test-taking skills that were subsequently transferable to spontaneous language use.


Conclusion

As a test of oral proficiency, the TSE's strength lies in its consistently proven reliability and its facility of administration. Nonetheless, the question of validity remains: Does the TSE assess that which it is intended to, that is, does it evaluate the oral language proficiency of non-native speakers of English who are at or beyond the post-secondary level of education? We have illustrated that the test does not reflect authentic target language use, that listening as a factor in oral proficiency is not taken into account, and that sociocultural biases of test tasks require supra-linguistic knowledge. These criticisms notwithstanding, institutions that use the TSE are satisfied with its measure as a predictor of oral proficiency.

However, since the target language use domain is well specified, there is an opportunity to refine task types in order to elicit more authentic and relevant language. This would contribute significantly to the TSE being defined by a contextual language use domain rather than by one that is culturally defined. These developments could contribute to increased validity and, ultimately, a more accurate measure of oral proficiency.

I think it's part of the TOEFL, since it uses the TOEFL's own speaking test. Here are some sample questions:

http://www.ets.org/tast/sample.html


And here's some more information about it. From what I understood, it's an exam usually required in America more than in Canada, and personally this is the first time I've heard of it. The site is old, but the information on it might help you:

http://www.fulbright.org.nz/education/tse.html

Honestly, this exam caught my attention; I'm planning to learn more about it, or maybe even get a chance to take it. From what I understood, it's for university graduates, meaning it's taken after university for employment. Sorry if this is too personal, but where did you hear about it? And what do you need it for? I'd love to know.

Good luck!
em mohamed
Assalamu alaikum,
Sister حلااااااا, Sister طموحه,
Thank you for stopping by the thread; it's a chance for us to learn what this exam is.

Dear رنيم,

The exam is required of me for applying for work as a resident,
since I am an international medical graduate.
The part required of me is the TSE-P, the one 'for Professionals.'
You surely know that the TOEFL has no speaking section and that they're adding one next year, but for now we're required to take
the TOEFL as a written assessment, in addition to the Spoken English as an oral assessment.
Even I heard of it for the first time only when I saw what was required in the application.
It has been administered in Canada for a long time,
but London doesn't have it, so I registered in another city just to reserve a place, because people have already taken all the spots for May.

The funny thing is that I don't know anything about it except that it's an oral exam: they give you between 6 and 9 questions on any topic, you speak while your voice is recorded, and then they grade you.
(Like Superstar, I mean <--------------------- don't forget to vote for me!)
That's why I thought I'd ask if anyone here has taken it: what did she do, what were the questions, and how did it go?
That's the whole story.

And thank you for the links; the one on the TOEFL site is the same one I registered through.

Salam
رغد الحياة
Wa alaikum assalam wa rahmah.
Masha'Allah, la quwwata illa billah, an international medical student! May God increase you from His bounty, my dear.
Honestly, this exam really has my attention now; it seems like a very important thing to have on your résumé. Do you have any idea how much the exam costs? In any case, let me ask my brother about it; he might know more than I do.
But really, there's no centre for it in London? There should be one in Toronto, and by the way, even for the TOEFL you'll find loooong queues for booking.
I suggest you call your board of education in London. Here in Mississauga ours is called the Peel Board of Education, and there they'll advise you on what's best for you; they surely have a list of the specialized centres and everything. Also, if you asked the YMCA, they might have an idea.

I wish you all the best, God willing.
Fi aman Allah,
رنيــم