Spoken Interview

A.    Mode
·                Questions and Requests for Information
Yes/no questions should generally be avoided, except perhaps at the very beginning of the interview, while the candidate is still warming up. Performance of various operations (of the kind listed in the two sets of specifications above) can be elicited through requests of the kind:
   “Can you explain to me how/why…?” and
   “Can you tell me what you think of…?”

Requests for elaboration: “What exactly do you mean?”, “Can you explain that in a little more detail?”, “What would be a good example of that?”
Appearing not to understand: “I’m sorry, but I don’t quite follow you.”
Invitation to ask questions: “Is there anything you’d like to ask me?”
·                Pictures
Single pictures are particularly useful for eliciting descriptions. Series of pictures (or video sequences) form a natural basis for narration.
[Image: http://year4holbrookps.files.wordpress.com/2012/07/butterfly.jpg]
One common stimulus material is a series of pictures showing a story, which the testee is asked to describe. This requires the testee to put together a coherent narrative.
[Image: a series of pictures telling a story]
However, there is a problem in using visual stimuli to test speaking: the materials chosen must be something that all testees can interpret equally well, since if one testee has difficulty understanding the visual information, it will influence the way he/she is evaluated (Kitao & Kitao, 1996).
·         TOEFL Way
      The purpose of the TOEFL test is to evaluate the English proficiency of people whose native language is not English. The TOEFL scores are primarily used as a measure of the ability of international students to use English in an academic environment.
               The Speaking section measures test takers’ ability to speak English effectively in educational environments, both inside and outside of the classroom. The Speaking section consists of six tasks. Two of these tasks are independent; that is, test takers receive no oral or written test materials. On these tasks, test takers respond to a relatively general question on a familiar topic. The other four tasks assess integrated skills. On two of these tasks, test takers respond to both an oral and a written stimulus; in the other two integrated tasks, they respond to an oral stimulus. The tasks follow this format:
Independent Speaking Tasks
For these two questions, test materials are designed so as not to constrain examinee responses. On one task, test takers respond to a question concerning a personal preference. On the other task, they answer a question that asks them to make a choice.
Four Integrated Speaking Tasks
These tasks assess integrated skills, requiring test takers to respond orally both to oral and to written stimuli. The types of integrated tasks are as follows:
Read/Listen/Speak (Campus situation). Test takers read a passage, listen to a speaker express an opinion about the passage topic, and then give an oral summary of the speaker’s opinion.
Read/Listen/Speak (Academic course topic). Test takers read a passage that broadly defines a term, process, or idea from an academic subject. They then listen to a lecture that provides specific examples to illustrate the term, process, or idea expressed in the reading passage. Test takers then respond orally, combining and conveying important information from both the reading passage and the lecture.
Listen/Speak (Campus situation). Test takers listen to a conversation about a student-related problem and two possible solutions. Test takers must demonstrate understanding of the problem and orally express an opinion about the best way to solve it.
Listen/Speak (Academic course topic). Test takers listen to an excerpt from a lecture that explains a term or concept and gives concrete examples to illustrate it. Test takers must then orally summarize the lecture and demonstrate their understanding of how the examples relate to the overall topic.

·                Interpreting
It’s not intended that candidates should be able to act as interpreters (unless that is specified). However, simple interpreting tasks can test both production and comprehension in a controlled way.
Situations of the following kind can be set up:
The native language speaker wants to invite a foreign visitor to his or her home for a meal. The candidate has to convey the invitation and act as an interpreter for the subsequent exchange.

Comprehension can be assessed when the candidate attempts to convey what the visitor is saying. The limitation is that it is difficult to obtain sufficient information on the candidate’s powers of comprehension.
·                Prepared Monologue
This technique could be appropriate in a proficiency test for teaching assistants, or in an achievement test when the ability to make presentations is an objective of the course. The limitation of this mode is that it is frequently misused. It should only be used where the ability to make prepared presentations is something that the candidates will need.

·                Reading Aloud
It is a way to test pronunciation separately from the content of speech. If it is necessary to use this method of testing, the test should at least make use of a situation in which the student might actually be reading aloud, such as reading instructions or parts of a letter to another person.
The limitation is that this is not generally a good way to test speaking. Its backwash effect is likely to be harmful, and it is not a skill that is used much outside of the classroom. 


B.     TOEFL Stages
            The TOEFL test review process has three main stages: a content review, a fairness review, and an editorial review that focuses on both content and formatting. Additionally, when required, a subject matter expert checks the accuracy and currency of the content.

Content Review
At this stage, assessment specialists review stimuli and items for both language and content, considering questions such as these:
• Is the language in the test materials clear? Is it accessible to a nonnative speaker of English who is preparing to study or is studying at a university where English is a medium of instruction?
• Is the content of the stimulus accessible to nonnative speakers who lack specialized knowledge in a given field (e.g., geology, business, or literature)?
For multiple-choice questions, reviewers also consider factors such as the following:
• The appropriateness of the point tested
• The uniqueness of the answer or answers (the item keys)
• The clarity and accessibility of the language used
• The plausibility and attractiveness of distracter choices — the incorrect options
For constructed-response items (speaking, writing) the process is similar but not identical. Reviewers tend to focus on accessibility, lack of ambiguity in the language used, and on how well they believe the particular speaking or writing item will generate a fair and scorable response. It is also essential that reviewers judge each speaking or writing item to be comparable with others in terms of difficulty. Expert judgment, then, plays a major role in deciding whether a speaking or writing item is acceptable and can be included in an operational test.
This peer review process is linear; reviewers move all test materials through predetermined stages. Test materials move to the next stage of review only after a reviewer signs off, signifying approval.

Fairness Review
ETS Standards for Quality and Fairness (2002) mandates fairness reviews. This fairness review must take place before using materials in a test.
Because attention to fairness is such an integral part of the test design, all assessment specialists undergo fairness training — in addition to item writing training — relatively soon after their arrival at ETS. As part of their training to develop TOEFL test materials, item writers must become familiar with the ETS Guidelines for Fairness Review of Assessments (2009) and the ETS International Principles for Fairness Review of Assessments (2007) and use them when reviewing items and stimuli. The content review process itself, therefore, always includes fairness as an aspect of development.
In addition, specially trained and periodically calibrated fairness reviewers conduct a separate and independent review of all TOEFL test materials. TOEFL assessment specialists may not perform this official fairness review of TOEFL materials; the official fairness reviewer may be an assessment specialist who works on other ETS tests. In this way, the fairness review is more objective and the reviewer brings no sense of ownership of the test into the review. When fairness reviewers find unacceptable content in the test materials, they issue fairness challenges. The content reviewer assigned to immediately follow the fairness reviewer must resolve the challenge to the satisfaction of both reviewers. For rare cases in which the reviewers cannot reach agreement, there is a process in place known as fairness adjudication, in which a panel that includes the content and fairness reviewers adjudicates the issues at hand and comes to a resolution.
Validity concerns underlie all aspects of fairness review. To ensure the validity of a test, it is paramount that only construct-relevant factors affect test takers’ scores. The construct can be defined as all of the knowledge, skills, and abilities that a test is supposed to measure. A primary goal of fairness review, then, is to identify and reduce construct-irrelevant aspects of stimuli or items that might hinder test-taker performance or even, in construct-irrelevant ways, enhance test-taker performance. Minimizing the influence of construct-irrelevant test content enhances fairness and thus also the validity of test scores.

Editorial Review
All TOEFL test materials receive an editorial review. This review’s purpose is to ensure that language in the test materials (e.g., usage, punctuation, spelling, style, and format) is as clear, concise, and consistent as possible. Editors ensure that established ETS test style is followed. In addition, when warranted, editors check facts in stimuli for accuracy or to ensure that the stated facts are currently true; in areas such as physics or geography, for example, changes in facts occur periodically.

C.    Pragmatics
The Bilingual Syntax Measure
            Its cartoon drawings were naturally motivating to preschoolers and children in the early grades. Compare, for instance, picture 2 with its drab counterpart, picture 1. In picture 1, the intent of the picture displayed is to elicit the one-word name of an object. There is something unnatural about telling the examiner the name of an object when it is obvious to the child that the examiner already knows the correct answer. By contrast, the question ‘How come he is so skinny?’ (asked while the examiner is pointing to the skinny man in picture 2) requires an inference that a child is usually elated to be able to make. It makes sense in relation to a context of experience that the child can relate to. It has pragmatic point. Questions like ‘What’s this a picture of?’ have considerably less pragmatic appeal. They do not elicit speech in relation to any meaningful context of discourse.
                                    
                                    [Images: Picture 1 and Picture 2]

                        The questions asked in relation to the series of pictures that comes at the end of the BSM suggest possibilities for the elicitation of meaningful speech. In the pictures below, where all three of the pertinent pictures are displayed, the first picture shows the King about to take a bite out of a drumstick. In the same picture the little dog to his left is eyeing the fowl hungrily. In the next picture, while the King turns to take some fruit off a platter, the dog makes off with the bird. In picture 3 the King drops the fruit while the dog, winking an eye, licks his chops. The story has point. It is rich in linguistic potential for eliciting speech from children.
Paraphrases of the questions asked in the BSM:
  1. The examiner points to the first picture in the sequence (pic 5) and asks the child to point out the King.
  2. Then the child is asked to point to the dog in the second picture (pic 6).
  3. Next, the last picture (pic 7) is indicated and again the child is asked to point out the King.
  4. The first scored question in relation to these pictures asks why the dog is looking at the King (pic 5).
  5. What happened to the King’s food?
  6. What would have happened if the dog hadn’t eaten the food?
  7. What happened to that apple? (Examiner points to the third picture.)
  8. Finally the child is asked why the apple fell.
In order to protect the security of the test, only questions 5, 6, and 7 above are given in the exact form in which they appear in the BSM.
There are other possibilities, however. For example, in response to question 5 the child might say something like, ‘The King ate it all up’, which by their scoring is syntactically correct but pragmatically inaccurate. Or he might say, ‘The dog eated it’, where the syntactic form is not quite right but the pragmatic sense is accurate. A purely syntactic scoring would tend to count as correct any grammatically well-formed response, even though the child’s response seems pragmatically strange.
Better still, it would be possible to ask questions that are designed to elicit more complex pragmatic mappings. For example, the first three questions might be kept as warm-ups and followed by:
A.            What is the King getting ready to do? (Examiner points to picture one)
B.            What is the dog doing? (Examiner points to picture one)
C.            What is the dog gonna do? (Examiner points to picture two)
D.            Why didn’t the dog just take the food in this picture? (Examiner points back to picture one)
E.             What is the King doing in this picture? (Examiner points to picture two)
F.             What is the King so surprised about in this picture? (Picture three)
G.            Why do you think the dog is winking his eye in this picture? (Picture three)
H.            What do you think would have happened if the King had kept his eye on his food in this picture? (Picture two)

 
[Images: Picture 1, Picture 2, and Picture 3]
The Ilyin Oral Interview
The Ilyin interview is more typical and more pragmatically oriented than the Upshur test. A page from the student’s test booklet used during the interview is displayed in the picture below. The pictures attempt to summarize several days’ activities in terms of major sequences of events on those days.
[Image: a page from the student’s test booklet]
Ilyin explains in the Manual (1976) that the examinee is to relate to the pictures in terms of whatever day the test is actually administered on. The point is not to recommend the particular procedures of Ilyin’s test, but rather to show how the technique could be adapted easily to a wide variety of approaches.
The examinee knows how the procedure works and understands the meaning of the time slots referred to by the separate pictures. For instance, the examiner might ask, ‘What is the man in the picture doing right now? It’s about 10:00 a.m.’ An appropriate response might be: ‘He’s in class taking notes while the professor is writing on the blackboard.’ From there, more complex questions can be posed by looking forward in time to what the pictured person, say Bill (the name offered by Ilyin), is going to do, or back to what he has already done. For instance, we might ask:
1.      What was Bill doing at 7:15 this morning?
2.      Where is he going to be at lunch time?
3.      Where was he last Sunday at 7:45 in the morning?
It is possible to follow a strategy, in the construction of such tasks, of working outward from the present moment either toward the past or toward the future. One might follow the strategy of chronologically ordered questions that generate something like a narrative, with past to present to future events guiding the development. On the other hand, one might opt for one of the more pragmatic strategies, say merely following the order of events in the series, asking simpler questions at the beginning and more complex ones later in the series. An example of a relatively simple question would be: ‘Where is Bill now? It’s 10:00.’ A more complex question would be: ‘What was Bill doing yesterday at 10:25?’
Clearly the technique could be modified in any number of ways to create a more difficult or less difficult task, e.g. a story-retelling task, or to make it simpler by asking appropriate leading questions.
Upshur Spoken Communication Test
Upshur and his collaborators set up a test to assess productive communication ability as follows:
  1. Examinee and examiner are presented with four pictures which differ in certain crucial respects on one or more ‘conceptual dimensions’.
  2. The examinee is told which of the four he is to describe to the examiner.
  3. The examinee describes that picture to the examiner, and the examiner guesses which picture to mark.
The number of hits, that is, correct guesses by the examiner, is the score of the examinee.
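Since the score is nothing more than a tally of the examiner’s correct guesses, it can be summarised in a few lines of code. The sketch below is only an illustration of that tally (the function name, the number of items, and the picture labels are invented for the example); it is not part of Upshur’s published procedure.

# Hypothetical sketch of Upshur-style scoring: the score is the number of
# items on which the examiner's guess matches the picture the examinee
# was actually asked to describe.
def upshur_score(target_pictures, examiner_guesses):
    """Count the 'hits' (correct guesses) across all items."""
    return sum(1 for target, guess in zip(target_pictures, examiner_guesses)
               if target == guess)

# Example: ten items, each offering four pictures labelled 'a' to 'd'.
targets = ['b', 'a', 'd', 'c', 'a', 'b', 'b', 'd', 'c', 'a']   # pictures assigned to the examinee
guesses = ['b', 'a', 'd', 'a', 'a', 'b', 'c', 'd', 'c', 'a']   # pictures the examiner marked
print(upshur_score(targets, guesses))  # -> 8 hits out of 10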
