Book Introduction

外教社博学文库 大规模英语考试作文评分信度与网上阅卷实证研究 (an empirical study of essay scoring reliability and online marking in large-scale English tests), PDF e-book download

外教社博学文库 大规模英语考试作文评分信度与网上阅卷实证研究
  • Author: 王跃武
  • Publisher: 上海外语教育出版社 (Shanghai Foreign Language Education Press), Shanghai
  • ISBN: 9787544639842
  • Publication year: 2015
  • Stated page count: 359 pages
  • File size: 52 MB
  • File page count: 385 pages
  • Subject headings: computer systems - scoring - applications - English - writing - examinations - research


Download Instructions

The downloaded file is a RAR archive; use decompression software to unpack it and obtain the PDF.

We recommend downloading with Free Download Manager (FDM), a free, ad-free, cross-platform client. All resources on this site are packaged as BT torrents, so a dedicated BT client such as BitComet, qBittorrent or uTorrent is required. Thunder (迅雷) is not recommended while this title remains an unpopular resource; once it becomes popular, Thunder can also be used.

(The file page count should be greater than the stated page count, except for multi-volume e-books.)

Note: all archives on this site require an extraction code.
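Since unpacking the RAR archive is the only local step needed to reach the PDF, here is a minimal sketch of how it could be scripted in Python. It assumes the third-party rarfile package and an unrar backend are installed; the archive name and extraction code are placeholders, not values supplied by this page.

import rarfile  # third-party package: pip install rarfile (needs an unrar backend installed)

ARCHIVE = "book.rar"        # placeholder name for the downloaded RAR file
EXTRACT_CODE = "password"   # placeholder for the extraction code obtained from the site

with rarfile.RarFile(ARCHIVE) as rf:
    # List any PDF files contained in the archive.
    pdfs = [name for name in rf.namelist() if name.lower().endswith(".pdf")]
    print("PDF files in archive:", pdfs)
    # Extract everything into the current directory using the extraction code.
    rf.extractall(pwd=EXTRACT_CODE)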

Table of Contents

Chapter 1 Introduction 1

1.1 Rationale for the study 2

1.2 Objectives of the study 3

1.3 Organization of the thesis 5

1.4 Definition of terms 7

1.4.1 Online 7

1.4.2 Marking 8

1.4.3 Online marking 8

1.4.4 Online Marking System (OMS) 9

1.4.5 Local Area Network (LAN) 10

Chapter 2 Research Questions and Methodology of the Study 11

Chapter 3 Issues in the Direct Testing of EFL/ESL Writing Ability 14

3.1 Introduction 14

3.2 What is a direct writing test? 16

3.3 EFL/ESL writing ability: What shall we test? 16

3.4 Issues in validity 21

3.4.1 What is validity? 21

3.4.2 Types of validity 21

3.5 Issues in reliability 25

3.5.1 What is reliability? 25

3.5.2 Methods of judging reliability of writing assessments 25

3.6 The relationship between validity and reliability 28

3.7 Four components of a direct writing test 29

3.7.1 The task 29

3.7.2 The writer 32

3.7.3 The scoring procedure 34

3.7.4 The rater 37

3.8 Washback 39

3.8.1 Washback in general 39

3.8.2 Washback of direct tests of writing 42

3.9 Practicality 44

3.10 Summary 44

Chapter 4 The CET Writing Test 45

4.1 Introduction 45

4.2 The writing test required by the CET 47

4.2.1 A direct test 48

4.2.2 Positive washback 48

4.3 The scoring of CET compositions 50

4.3.1 The scoring approach currently adopted 51

4.3.2 Procedures involved in scoring CET essays 51

4.3.2.1 Scoring Principles and Marking Scheme 52

4.3.2.2 Range-finders and sample essays 53

4.3.2.3 Rater training 54

4.3.2.4 Rating process 55

4.3.2.5 Monitoring raters' scoring during the scoring sessions 55

4.3.2.6 Recording essay scores 56

4.4 Computer-aided adjustment of writing scores 56

4.5 Discussion 64

Chapter 5 The First Experimental Study 67

5.1 Introduction 67

5.2 Compositions 68

5.3 Participants 69

5.4 Data collection procedure 71

5.5 The introspection and retrospection studies 74

5.5.1 Introduction 74

5.5.2 Data elicitation 76

5.5.3 Tape transcription 77

5.5.4 Data analysis 77

5.6 The questionnaire studies 78

5.6.1 Design of the questionnaires 78

5.6.2 Analysis of questionnaire responses 79

5.7 Findings from the introspection, retrospection and questionnaire studies 86

5.7.1 Issues and problems in rating CET essays online 87

5.7.2 Decision-making behaviors while rating CET-4 essays 88

5.7.3 Summary of comments made by the raters on essays 91

5.7.3.1 Overall summary 91

5.7.3.2 Variations in raters' comments 93

5.7.4 Essay elements' influences on raters' decision-making 93

5.7.5 Elements of good CET essays in the raters' eyes 96

5.8 Analysis of writing scores 98

5.9 Summary and discussion 107

5.9.1 About the issues and problems involved 107

5.9.2 About the raters' scoring decisions 107

5.9.3 About the writing scores 108

Chapter 6 The Second Experimental Study 110

6.1 Introduction 110

6.2 Compositions 111

6.3 Participants 112

6.4 Data collection procedure 112

6.5 Problems encountered 113

6.6 Data analysis 114

6.7 Results 114

6.8 Summary 122

Chapter 7 Design of the OMS 123

7.1 Introduction 123

7.2 Literature review on online marking of compositions 124

7.2.1 Automated scoring of essays 124

7.2.1.1 Overview of four major automated scoring methods 125

7.2.1.2 Analysis of the four major automated scoring methods 132

7.2.1.3 Summary 138

7.2.2 Online scoring of essays by human raters 139

7.2.2.1 Overview of online scoring of essays by human raters 140

7.2.2.2 Empirical research on online scoring of essays by human raters 143

7.2.2.3 Summary 146

7.3 A preliminary model of marking essays online 147

7.4 Overview of the CET Online Marking System (OMS) 148

7.4.1 The data management module 149

7.4.1.1 Basic information management 150

7.4.1.2 Essay management 151

7.4.1.3 Search and report 151

7.4.2 The training module 152

7.4.3 The rating module 152

7.4.4 The monitoring module 153

7.5 Operation of the OMS and the rater interface 153

7.5.1 Overview of the operation of the OMS 153

7.5.2 The OMS rater interface 155

7.6 Main features of the CET OMS 161

7.6.1 Random distribution of scripts 161

7.6.2 Efficient score recording 162

7.6.3 Online real-time monitoring of scoring 162

7.6.4 Quality control of raters 163

7.6.4.1 Adherence to the CET Scoring Principles and Marking Scheme 164

7.6.4.2 Rater training 166

7.6.4.2.1 Compulsory training 167

7.6.4.2.2 Individual rater's self training 170

7.6.4.2.3 Forced training 171

7.6.4.3 Online discussion 172

7.6.4.4 Back-reading and score revising 173

7.6.4.5 Time control 173

7.7 Advantages of the CET OMS 175

7.7.1 Real and efficient random distribution of scripts at the national level 175

7.7.2 Real-time online monitoring of raters 175

7.7.3 Assured quality control of scoring 177

7.7.4 Overall efficiency 179

7.7.5 Efficient and economical storage of scripts 180

7.7.6 Express retrieval of scripts and scores 180

7.7.7 Efficient management and potential utilization of test data for research 180

7.8 Limitations of online scoring and solutions 182

7.9 Summary 184

Chapter 8 The Third Experimental Study 185

8.1 Context of the experiment 185

8.2 Participants 186

8.3 Compositions 188

8.4 Data collection 189

8.4.1 Step 1: Online marking 190

8.4.1.1 The first round online marking 190

8.4.1.2 The second round online marking 198

8.4.2 Step 2: Conference marking 198

8.5 Data analysis 199

8.6 Results 200

8.7 Summary and discussion 215

Chapter 9 Data Analysis Using FACETS 219

9.1 FACETS and method 219

9.2 The first approach: comparison of rater severity and consistency from the online setting and the conference setting 222

9.2.1 Rater severity and consistency: the online setting 222

9.2.1.1 Rater severity: the online setting 224

9.2.1.2 Rater consistency: the online setting 226

9.2.2 Rater severity and consistency: the conference setting 227

9.2.2.1 Rater severity: the conference setting 229

9.2.2.2 Rater consistency: the conference setting 231

9.2.3 Comparison of rater severity and consistency in two settings 231

9.2.4 Comparison of rater severity change between two settings 233

9.3 The second approach: bias analysis 234

9.3.1 Bias analysis: rater by essay interactions 235

9.3.2 Bias analysis: rater by setting interactions 237

9.4 Conclusion 240

9.5 Discussion 241

Chapter 10 Summaries, Discussions, Implications and Recommendations 243

10.1 A refined model of online scoring of CET essays and its main features 244

10.2 Benefits proceeding from online scoring 247

10.3 Practicality 249

10.4 Scoring quality 252

10.5 Raters' comments 254

10.6 Suggestions for the improvement of the Online Marking System 255

10.7 Implications for other writing tests 256

10.8 Suggestions and recommendations for future research 257

10.8.1 Suggestions for future research in online marking of compositions 257

10.8.2 Recommendations for future research in EFL writing assessment 261

10.9 Theoretical and practical significance of the study 265

References 267

Appendices 281

后记 (Afterword) 358
