COMMUNICATIVE TESTS: SPEAKING, WRITING, AND READING SKILLS


A.    Communicative Testing

Communicative competence is defined as the "expression, interpretation, and negotiation of meaning involving interaction between two or more persons or between one person and a written or oral text". The central characteristics of competence in communication are associated with:
1. The dynamic, interpersonal nature of communicative competence and its dependence on the negotiation of meaning between two or more persons who share to some degree the same symbolic system
2. Its application to both spoken and written language as well as to many other symbolic systems
3. The role of context in determining a specific communicative competence, the infinite variety of situations in which communication takes place, and the dependence of success in a particular role on one's understanding of the context and on prior experience of a similar kind
4. Communicative competence as a relative, not absolute, concept, one dependent on the cooperation of all participants, a situation which makes it reasonable to speak of degrees of communicative competence.
                Communicative testing is a learning tool, providing evaluative information to both learner and teacher. The purpose of communicative testing is to measure learners' ability to translate their competence (or lack of it) into actual performance in 'ordinary' situations. In communicative language teaching (CLT), the tests have to be communicative as well. In communicative language tests (CL tests), a test has to measure communicative competence as realized in the four language skills of listening, reading, speaking, and writing, each of which leads into the other skills so as to make the test more integrative in manner.



B.     Characteristics of Communicative Testing

 Communicative teaching uses authentic texts and situationally authentic (life-like) tasks to generate authentic communication, so it may seem that whatever is communicative is authentic and the other way round. Here we cannot stop to define the concept of communicative as against authentic; I should simply point out that an authentic cloze test, as the name shows, is an authentic task, but not necessarily a communicative one.
Brown (2005) suggests five core characteristics for designing a communicative language test: meaningful communication, authentic situation, unpredictable language input, creative language output, and integrated language skills (p. 21). First, the purpose of language learning is communication, so language learners’ communicative ability should be measured. In other words, language tests should be based on communication that is meaningful to students and meets their personal needs. Authentic situations can help increase meaningful communication. The usefulness of authentic situations in increasing meaningful communication is emphasized by Weir (1990) when he states that ‘language cannot be meaningful if it is devoid of context’ (p. 11). By ‘unpredictable language input’ and ‘creative language output’, Brown (2005) means that in real situations it is not always possible to predict what speakers will say (unpredictable language input), so learners must be prepared to respond (creative language output). The last characteristic is integrated language skills: a communicative test should require test takers to show their ability to combine language skills, as in real-life communication situations. These characteristics should all be attended to and included in communicative language tests.
To sum up, much has recently been written about communicative language testing. Discussions have focused on the desirability of assessing the abilities that take part in acts of communication. All such discussions assume that it is communicative competence that teachers want to test. Tests should therefore assess the learner’s communicative behavior and not be based on linguistic items alone. In communicative tests, students’ performance should be measured not only in terms of formal correctness but primarily in terms of interaction, for the concern is not how much the students know, but how well they can perform.

C.   Communicative Testing in the Language Skills

1.     Communicative Testing in Listening Skills
The design of communicative listening tests requires (1) authentic texts, e.g. conversations, interviews, broadcasts, telecasts, extended talk, and entertainment; (2) tasks, e.g. transcoding and scanning; (3) a channel through which messages are conveyed from the sender to the receiver; and (4) a response mode, which is usually oral but in some instances could also be written or nonverbal.
For example;
1.1.            Information Gap
An information gap is a type of communicative activity in which each participant holds some information the other participants do not have, and all participants must share the information they hold in order to complete a task or solve a problem.
Example;
         There are two shuttles leaving Atlanta airport for Auburn, one at 11:00 am and the other at 8:00 pm. Someone is coming to Auburn from Chicago. She or he wants to find a flight that arrives at Atlanta airport 45 minutes to an hour before the shuttle leaves for Auburn, so that he or she has enough time to catch the shuttle but does not have to wait too long. Four students participate in this activity. Their task is to find the two flights that meet the above criteria. Each student has the flight information for only one of the following four airlines: Delta, Northwest, United, and American. They have to share the flight information they hold to identify the two flights that best fit this person's needs.
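The selection criterion in this task is simple arithmetic on clock times, and a small sketch may make it concrete. The flight times below are invented for illustration (the real activity distributes actual airline timetables among the four students); only the 45-to-60-minute window comes from the task description.

```python
from datetime import datetime, timedelta

# Hypothetical arrival times in Atlanta, one flight per airline.
FLIGHTS = {
    "Delta 101": "10:05",
    "Northwest 22": "09:30",
    "United 87": "19:10",
    "American 5": "16:00",
}
SHUTTLES = ["11:00", "20:00"]  # Atlanta -> Auburn shuttle departures

def fits(arrival, shuttle):
    """A flight fits if it lands 45 to 60 minutes before the shuttle leaves."""
    a = datetime.strptime(arrival, "%H:%M")
    s = datetime.strptime(shuttle, "%H:%M")
    return timedelta(minutes=45) <= (s - a) <= timedelta(minutes=60)

def matching_flights(flights, shuttles):
    """Return the flights whose arrival fits the window for some shuttle."""
    return sorted(name for name, arr in flights.items()
                  if any(fits(arr, s) for s in shuttles))

print(matching_flights(FLIGHTS, SHUTTLES))  # → ['Delta 101', 'United 87']
```

With these invented timetables, the students pooling their information should converge on the two flights that land 55 and 50 minutes before a shuttle.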

1.2.         Dictation

            Dictation can also be an example of a communicative test, as long as the students have a role that the teacher has assigned and designed. Dictation is the activity of taking down a passage that is read aloud by a teacher as a test of spelling, writing, or language skills.
For example;
The teacher gives roles to the students: some act as secretaries and some as bosses. A student who acts as secretary listens to a dictation from his or her boss and writes down the important things the boss dictates.

2.      Communicative Testing in Speaking Skills
      These are activities that involve speakers in using language for the purpose of achieving a particular goal or objective in a particular speaking situation.

2.1.    Role Play
For example;
Student
You missed class yesterday. Go to the teacher's office and apologize for having missed the class. Ask for the handout from the class. Find out what the homework was.
Examiner
      You are a teacher. A student who missed your class yesterday comes to your office. Accept her/his apology, but emphasize the importance of attending classes. You do not have any extra handouts from the class, so suggest that she/he copy one from a friend. Tell her/him what the homework was.

2.2.   Interviews
      The teacher gives the students the role of job applicants who want to get a job, while the teacher acts as the interviewer in a job interview.
For example;
Interview:
          Teacher: Tell me about yourself.
          Students: ……………………………….
          Teacher: What are your strengths?
          Student: …………………………………..
          Teacher: Can you work in a team?
          Etc.

2.3.     Problem Solving

      In problem solving, the teacher gives one student the role of someone who has a real-life problem, and another student acts as an expert who can give advice on how to fix it.
For example;
The students could be citizens of a town on a river that is receiving so much pollution from the town that neighbors downstream have requested that the town rein itself in before they are forced to involve a higher authority. Some could role-play farmers whose crops need fertilizer. Others could represent the union of workers from a factory that disposes of its waste in the river.

3.      Communicative Testing in Writing Skills

    Some tests combine reading and writing in communicative situations. Testees can be given a task in which they are instructed to write a letter, memo, or summary answering certain questions, based on information that they are given.
For example;
3.1.            Business Letter
Read the letter from the customer and the statement of the company policy about returns and repairs below and write a formal business letter to the customer.
Situation
    Your boss has received a letter from a customer complaining about problems with a coffee maker that he bought six months ago. Your boss has instructed you to check the company policy on returns and repairs and reply to the letter.

3.2.            Personal Letter

         Write a short letter to one of the people in the story.
Dear Mr. Eastwood,
                   I think you are a hero because you saved your family from the fire in your house. 
                                                Sincerely,
                                                Katie

D.    Summary

1.      Communicative tests should be in the form of open-ended questions.
For example;
An OPEN question has more than one possible answer
Question:
            Why did you like the story?
Answer: 
            I liked the story because…
Result:
            All students' answers should be different; there are many correct answers.


2.      Open questions usually ask WHY or HOW and require original, unique answers from students.
REFERENCES

Miyata, Nick and Langham, S. 2000. Communicative Language Testing. Tokyo: The British Council. Accessed March 5th, 2013.
Kitao, Kenji. 1996. Communicative Competence. TESL Journal, Vol. II, No. 5. Accessed March 5th, 2013.
Skehan, Peter. 1990. Communicative Language Testing. TESOL Journal, Vol. X, No. 1. Accessed on 5th, 2013.
Harsono, M.Y. 2005. Developing Communicative Language Test for Senior High School. TEFLIN Journal, Vol. 16, No. 2. Accessed March 6th, 2013.

Spoken Interview

A.    Mode
·                Questions and Requests for Information
Yes/no questions should generally be avoided, except perhaps at the very beginning of the interview, while the candidate is still warming up. Performance of various operations (of the kind listed in the two sets of specifications above) can be elicited through requests of the kind:
   “Can you explain to me how/why …?” and
   “Can you tell me what you think of …?”

Requests for elaboration: “What exactly do you mean?”, “Can you explain that in a little more detail?”, “What would be a good example of that?”
Appearing not to understand: “I’m sorry, but I don’t quite follow you.”
Invitations to ask questions: “Is there anything you’d like to ask me?”
·                Pictures
Single pictures are particularly useful for eliciting descriptions. Series of pictures (or video sequences) form a natural basis for narration.
One common stimulus material is a series of pictures showing a story, which the testee must describe. It requires the testee to put together a coherent narrative.
However, there is a problem in using visual stimuli to test speaking: the materials chosen must be ones that all the testees can interpret equally well, since if one testee has difficulty understanding the visual information, it will influence the way he or she is evaluated (Kitao & Kitao, 1996).
·         TOEFL Way
      The purpose of the TOEFL test is to evaluate the English proficiency of people whose native language is not English. The TOEFL scores are primarily used as a measure of the ability of international students to use English in an academic environment.
               The Speaking section measures test takers’ ability to speak English effectively in educational environments, both inside and outside the classroom. The Speaking section consists of six tasks. Two of these tasks are independent; that is, test takers receive no oral or written test materials. On these tasks, test takers respond to a relatively general question on a familiar topic. The other four tasks assess integrated skills. On two of these tasks, test takers respond to both an oral and a written stimulus; in the other two integrated tasks, they respond to an oral stimulus. The tasks follow this format:
Independent Speaking Tasks
For these two questions, test materials are designed so as not to constrain examinee responses. On one task, test takers respond to a question concerning a personal preference. On the other task, they answer a question that asks them to make a choice.
Four Integrated Speaking Tasks
These tasks assess integrated skills, requiring test takers to respond orally both to oral and to written stimuli. The types of integrated tasks are as follows:
Read/Listen/Speak (Campus situation). Test takers read a passage, listen to a speaker express an opinion about the passage topic, and then give an oral summary of the speaker’s opinion.
Read/Listen/Speak (Academic course topic). Test takers read a passage that broadly defines a term, process, or idea from an academic subject. They then listen to a lecture that provides specific examples to illustrate the term, process, or idea expressed in the reading passage. Test takers then respond orally, combining and conveying important information from both the reading passage and the lecture.
Listen/Speak (Campus situation). Test takers listen to a conversation about a student-related problem and two possible solutions. Test takers must demonstrate understanding of the problem and orally express an opinion about the best way to solve it.
Listen/Speak (Academic course topic). Test takers listen to an excerpt from a lecture that explains a term or concept and gives concrete examples to illustrate it. Test takers must then orally summarize the lecture and demonstrate their understanding of how the examples relate to the overall topic.

·                Interpreting
It is not intended that candidates should be able to act as interpreters (unless that is specified). However, simple interpreting tasks can test both production and comprehension in a controlled way.
Situations of the following kind can be set up:
The native language speaker wants to invite a foreign visitor to his or her home for a meal. The candidate has to convey the invitation and act as an interpreter for the subsequent exchange.

Comprehension can be assessed when the candidate attempts to convey what the visitor is saying. The limitation is that it is difficult to obtain sufficient information on the candidate’s powers of comprehension.
·                Prepared Monologue
This technique could be appropriate in a proficiency test for teaching assistants, or in an achievement test when the ability to make presentations is an objective of the course. The limitation of this mode is that it is frequently misused; it should only be used where the ability to make prepared presentations is something that the candidates will actually need.

·                Reading aloud
Reading aloud is a way to test pronunciation separately from the content of speech. If it is necessary to use this method, the test should at least make use of a situation in which the student might actually be reading aloud, such as reading instructions or parts of a letter to another person.
The limitation is that this is not generally a good way to test speaking. Its backwash effect is likely to be harmful, and reading aloud is not a skill that is used much outside the classroom.


B.     TOEFL Stages
            The TOEFL test review process has three main stages: a content review, a fairness review, and an editorial review that focuses on both content and formatting. Additionally, when required, a subject matter expert checks the accuracy and currency of the content.

Content Review
At this stage, assessment specialists review stimuli and items for both language and content, considering questions such as these:
• Is the language in the test materials clear? Is it accessible to a nonnative speaker of English who is preparing to study or is studying at a university where English is a medium of instruction?
• Is the content of the stimulus accessible to nonnative speakers who lack specialized knowledge in a given field (e.g., geology, business, or literature)?
For multiple-choice questions, reviewers also consider factors such as the following:
• The appropriateness of the point tested
• The uniqueness of the answer or answers (the item keys)
• The clarity and accessibility of the language used
• The plausibility and attractiveness of distracter choices — the incorrect options
For constructed-response items (speaking, writing) the process is similar but not identical. Reviewers tend to focus on accessibility, lack of ambiguity in the language used, and on how well they believe the particular speaking or writing item will generate a fair and scorable response. It is also essential that reviewers judge each speaking or writing item to be comparable with others in terms of difficulty. Expert judgment, then, plays a major role in deciding whether a speaking or writing item is acceptable and can be included in an operational test.
This peer review process is linear; reviewers move all test materials through predetermined stages. Test materials move to the next stage of review only after a reviewer signs off, signifying approval.
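The linear, sign-off-gated flow described above can be sketched as a small pipeline: materials advance to the next stage only when the current reviewer signs off. The stage names follow the text; the data model and reviewer functions are invented for illustration, not ETS's actual tooling.

```python
# Stages in the order the text describes them.
STAGES = ["content review", "fairness review", "editorial review"]

def run_pipeline(item, reviewers):
    """reviewers maps a stage name to a function returning True (sign-off,
    the item advances) or False (the item is held at that stage)."""
    for stage in STAGES:
        if not reviewers[stage](item):
            return f"held at {stage}"
    return "approved"

# Example: every reviewer signs off except fairness, which issues a
# challenge whenever it finds flagged wording (a stand-in criterion).
reviewers = {
    "content review": lambda item: True,
    "fairness review": lambda item: "flagged wording" not in item,
    "editorial review": lambda item: True,
}
print(run_pipeline("reading passage about glaciers", reviewers))  # → approved
```

The point of the sketch is only the ordering constraint: an item never reaches editorial review without a fairness sign-off, mirroring the sequential sign-off process in the text.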

Fairness Review
ETS Standards for Quality and Fairness (2002) mandates fairness reviews. This fairness review must take place before using materials in a test.
Because attention to fairness is such an integral part of the test design, all assessment specialists undergo fairness training — in addition to item writing training — relatively soon after their arrival at ETS. As part of their training to develop TOEFL test materials, item writers must become familiar with the ETS Guidelines for Fairness Review of Assessments (2009) and the ETS International Principles for Fairness Review of Assessments (2007) and use them when reviewing items and stimuli. The content review process itself, therefore, always includes fairness as an aspect of development.
In addition, specially trained and periodically calibrated fairness reviewers conduct a separate and independent review of all TOEFL test materials. TOEFL assessment specialists may not perform this official fairness review of TOEFL materials; the official fairness reviewer may be an assessment specialist who works on other ETS tests. In this way, the fairness review is more objective and the reviewer brings no sense of ownership of the test into the review. When fairness reviewers find unacceptable content in the test materials, they issue fairness challenges. The content reviewer assigned to immediately follow the fairness reviewer must resolve the challenge to the satisfaction of both reviewers. For rare cases in which the reviewers cannot reach agreement, there is a process in place known as fairness adjudication, in which a panel that includes the content and fairness reviewers adjudicates the issues at hand and comes to a resolution.
Validity concerns underlie all aspects of fairness review. To ensure the validity of a test, it is paramount that only construct-relevant factors affect test takers’ scores. The construct can be defined as all of the knowledge, skills, and abilities that a test is supposed to measure. A primary goal of fairness review, then, is to identify and reduce construct-irrelevant aspects of stimuli or items that might hinder test-taker performance or even, in construct-irrelevant ways, enhance test-taker performance. Minimizing the influence of construct-irrelevant test content enhances fairness and thus also the validity of test scores.

Editorial Review
All TOEFL test materials receive an editorial review. This review’s purpose is to ensure that language in the test materials (e.g., usage, punctuation, spelling, style, and format) is as clear, concise, and consistent as possible. Editors ensure that established ETS test style is followed. In addition, when warranted, editors check facts in stimuli for accuracy or to ensure that the stated facts are currently true; in areas such as physics or geography, for example, changes in facts occur periodically.


C.    Pragmatics
The Bilingual Syntax Measure (BSM)
            Its cartoon drawings were naturally motivating to preschoolers and children in the early grades. Compare, for instance, picture 2 with its drab counterpart, picture 1. In picture 1, the intent of the picture displayed is to elicit the one-word name of an object. There is something unnatural about telling the examiner the name of an object when it is obvious to the child that the examiner already knows the correct answer. By contrast, the question ‘How come he is so skinny?’ (asked while the examiner points to the skinny man in picture 2) requires an inference that a child is usually elated to be able to make. It makes sense in relation to a context of experience that the child can relate to; it has pragmatic point. Questions like ‘What’s this a picture of?’, on the other hand, have considerably less pragmatic appeal. They do not elicit speech in relation to any meaningful context of discourse.
                                    
                                    Picture 1                                              Picture 2

                        The questions asked in relation to the series of pictures that comes at the end of the BSM suggest possibilities for the elicitation of meaningful speech. In the pictures below, all three of the pertinent pictures are displayed. In the first picture the King is about to take a bite out of a drumstick; in the same picture the little dog to his left is eyeing the fowl hungrily. In the next picture, while the King turns to take some fruit off a platter, the dog makes off with the bird. In picture 3 the King drops the fruit, and the dog, with a wink of his eye, is licking his chops. The story has point. It is rich in linguistic potential for eliciting speech from children.
Paraphrases of the questions asked in the BSM:
  1. The examiner points to the first picture in the sequence (pic 5) and asks the child to point out the King.
  2. Then the child is asked to point to the dog in the second picture (pic 6).
  3. Next, the last picture (pic 7) is indicated and again the child is asked to point out the King.
  4. The first scored question in relation to these pictures asks why the dog is looking at the King (pic 5).
  5. What happened to the King’s food?
  6. What would have happened if the dog hadn’t eaten the food?
  7. What happened to that apple? (The examiner points to the third picture.)
  8. Finally, the child is asked why the apple fell.
In order to protect the security of the test, only questions 5, 6, and 7 above are given in the exact form in which they appear in the BSM.
There are other possibilities, however.
For example, in response to question 5 the child might say something like ‘The King ate it all up’, which by their scoring is syntactically correct but pragmatically inaccurate. Or he might say ‘The dog eated it’, where the syntactic form is not quite right but the pragmatic sense is accurate. Purely syntactic scoring would count any grammatically correct answer as right, even though the child’s response seems strange in context.
Better still, it would be possible to ask questions that are designed to elicit more complex pragmatic mappings. For example, the first three questions might be kept as warm-ups and followed by:
A.            What is the King getting ready to do? (Examiner points to picture one.)
B.            What is the dog doing? (Examiner points to picture one.)
C.            What is the dog gonna do? (Examiner points to picture two.)
D.            Why didn’t the dog just take the food in this picture? (Examiner points back to picture one.)
E.             What is the King doing in this picture? (Examiner points to picture two.)
F.             What is the King so surprised about in this picture? (Examiner points to picture three.)
G.            Why do you think the dog is winking his eye in this picture? (Examiner points to picture three.)
H.            What do you think would have happened if the King had kept his eye on his food in this picture? (Examiner points to picture two.)

 
Picture 1                                              Picture 2                                  Picture 3
The Ilyin Oral Interview
The Ilyin interview is more typical and more pragmatically oriented than the Upshur test. A page from the student’s test booklet used during the interview is displayed below. The pictures attempt to summarize several days’ activities in terms of the major sequences of events on those days.
 
Ilyin explains in the Manual (1976) that the examinee is to relate to the pictures in terms of whatever day the test is actually administered on. The point is not to recommend the particular procedures of Ilyin’s test, but rather to show how the technique could easily be adapted to a wide variety of approaches.
The examinee knows how the procedure works and understands the meaning of the time slots referred to by the separate pictures. For instance: ‘What is the man in the picture doing right now? It’s about 10:00 am.’ An appropriate response might be: ‘He’s in class taking notes while the professor is writing on the blackboard.’ From there, more complex questions can be posed by looking forward in time to what the pictured person, say Bill (the name offered by Ilyin), is going to do, or back to what he has already done. For instance, we might ask:
1.      What was Bill doing at 7:15 this morning?
2.      Where is he going to be at lunch time?
3.      Where was he last Sunday at 7:45 in the morning?
It is possible, in the construction of such tasks, to follow a strategy of working outward from the present moment toward either the past or the future. One might use chronologically ordered questions that generate something like a narrative, with past-to-present-to-future events guiding the development. On the other hand, one might opt for a more pragmatic strategy, say of merely following the order of events in the series, asking simpler questions at the beginning and more complex ones later. An example of a relatively simple question would be: ‘Where is Bill now? It’s 10:00.’ A more complex question would be: ‘What was Bill doing yesterday at 10:25?’
Clearly the technique could be modified in any number of ways to create more or less difficult tasks, e.g. a story retelling task, or a simpler version created by asking appropriate leading questions.
Upshur Spoken Communication Test
Upshur and his collaborators set up a test to assess productive communication ability as follows:
  1. The examinee and the examiner are presented with four pictures which differ in certain crucial respects along one or more ‘conceptual dimensions’.
  2. The examinee is told which of the four he is to describe to the examiner.
  3. The examinee describes that picture so that the examiner can mark it, and the examiner makes a guess.
The number of hits, that is, of correct guesses by the examiner, is the examinee’s score.
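The hit-count scoring rule is simple enough to state as a one-line computation. This is only a sketch of the rule as described above; the item data is invented for illustration.

```python
def upshur_score(items):
    """items: list of (intended_picture, examiner_guess) pairs, one per
    test item. The score is the number of correct guesses (hits)."""
    return sum(1 for intended, guess in items if intended == guess)

# Four hypothetical items: the examiner guessed the intended picture
# on items 1, 3, and 4, but not on item 2.
items = [(2, 2), (4, 1), (3, 3), (1, 1)]
print(upshur_score(items))  # → 3
```

Because the score depends on the examiner's comprehension of the description rather than on grammatical accuracy as such, it directly operationalizes the chapter's point that what matters is how well the learner can perform, not how much he or she knows.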