Quality of IT Enabled Services in Higher Education Institutions in Saudi Arabia


I. Introduction

Access to education is a fundamental right of every child, and improving this access is an obligation of the government. The emergence of IT and its utilization in the education sector has helped students at all levels to improve their capability to learn, not by memorizing text but by understanding the underlying concepts and theories. Thus, in recent years IT has made teaching and learning not only more interesting but also more effective. IT Enabled Services (ITES) have also played a vital role in higher education institutions, and now that a baseline of ITES has been established at most institutions, it is becoming important to evaluate the quality of ITES at different institutions. In this paper, we focus on two higher education institutions, one from the public and one from the private sector. We have chosen universities in Saudi Arabia, as the study aims to compare the ITES of Saudi universities.

Considering the nature of the study, two leading universities, one each from the government and private sector, were selected to participate, as they are located in the same city. The public sector university (referred to as A in the rest of this paper) was established in the fifties and is one of the oldest universities in the kingdom, while the private sector university (referred to as B in the rest of this paper) was established in the nineties. It is also important to mention that the current student enrolment at the private university is around 3,500, while the public sector university has ten times more enrolment, and the staff sizes of the two universities are in a similar ratio. The purpose of this study is to compare the state of the ITES in the Saudi universities.

II. Literature Review

In order to compare the state of the art, it is important to establish the parameters on which comparisons of ITES quality among universities can be made. Some recent work has been carried out in this domain, which is presented in this section. Several researchers [1][5], including Alanezi and Yang, have mentioned that the 'accessibility' factor is vital for measuring the quality of ITES. Tan and Burgess [5][1][4] have advocated the need for customization as a major factor in the quantification of ITES for higher education, while Parasuraman and George [1][6][2][7] are of the view that the delivery of teaching and the efficiency of the ITES are also important.


Alanezi, Lin, Sedera and Swaid [1][8][9][10] have identified the importance of, and advocated for, factors like functionality and information quality. Both of these factors form the core of ITES and are valuable in their nature and existence. Zeithaml [2] has found that factors like response time, service usability, system integrity and trust are important in quantifying quality; these factors govern the environment and responsiveness of the system and measure the quality of the system rather than its functionality. Tan, George and Burgess [1][4][7] have advocated the inclusion of security as an integral factor in measuring the quality of ITES. Apart from that, some researchers like Burgess [5] have considered factors like site design, service usability and service reliability to be of great value in measuring the quality of ITES. Aziz [11], in her research, shortlisted seventeen items to evaluate the quality of ITES in higher education. The shortlisting was done from more than 100 elements based on recurrence, relevance and importance, as determined by expert opinion. Each factor, its description and the supporting citations are given in Table 1.

Table 1: Factors for measuring the quality of ITES in higher education institutions [11]

No Factor Description
1 Accessibility The degree to which the user can access the required service. [1][2][3][4][5]
2 Customization The ability to configure the ITES according to requirements. [5][1][4]
3 Delivery of teaching The ways and quality of the teaching. [7]
4 Efficiency How quickly the required services are available. [1][6][2][7]
5 Functionality The specific tasks that can be performed by using the system. [1][8]
6 Information quality The level to which the available information suits the user. [9][10]
7 Interoperability Access to multiple services. [4]
8 Privacy The level to which a person is secure in performing his tasks without being public. [7][6][1]
9 Response time The time between the request and the availability of the information. [2]
10 Security The adequacy of the security features implemented in the ITES. [7][1][4]
11 Service reliability The percentage of time the ITES is available for use without failure. [5]
12 Service usability The degree to which the users find it easy to use the various ITES. [2]
13 Site design The quality of the site design in terms of user satisfaction and ease of use. [5]
14 System integrity The provision of consistent information at all times. [2]
15 Trust How reliable, efficient and responsive a system is. [2]
16 Usefulness The degree to which the users find it easier to do their work via the ITES. [5]
17 User support The degree to which the ITES department personnel are willing to serve the users when their help and support is required. [3]

The findings of Aziz [11] form the basis of this study. The findings are contemporary in nature and discuss an evolving paradigm of the emerging state of the art, building on the work of reputed authors [12,13]. Moreover, a sound and current methodology was used to affirm those findings, which increases the confidence in using them as the basis of this research.

III. Methodology

This study is a mixed method research [14,15], completed by triangulating the qualitative and quantitative results. The survey was conducted on 300 individuals in each institute and the results were collected. The purpose of the survey was to ask the users about the quality of IT enabled services at their respective institute, against the different factors obtained from the comprehensive literature review. A Likert scale [16] was used to rank the responses on a scale of 1-5, i.e. from poor to excellent; hence, the first column in each response list has a weight of 1, the second column a weight of 2, the third column a weight of 3, and so on. Once the weighted sums are accumulated, they are divided by the total number of respondents to get the weighted average, and this is done for both institutes separately. After that, the results are compared factor by factor to identify the areas in which a specific institute performs better. A qualitative study was conducted along the same lines, in which four respondents were interviewed (two from each university) and asked to assess the standard of the IT enabled services in their respective institutes based on the factors and considering the contemporary situation [17][18][19][20][21][22][23][24]. In this research we follow the partially mixed sequential dominant status paradigm, where the qualitative findings follow the quantitative findings and are dominant. This paradigm is followed in research studies that are centred on evaluating technology education [25][26][27][28][29][30][31].
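As a minimal sketch of this scoring procedure (the function name and sample counts are illustrative, not part of the study's instrument; the counts shown happen to reproduce the 3.60 accessibility score reported for institute A in Table 5):

```python
# Average weighted response for one survey factor: each Likert level is
# weighted 1 (poor) through 5 (excellent), the weighted counts are summed,
# and the sum is divided by the number of respondents who answered.

def average_weighted_response(counts):
    """counts: respondents choosing levels 1 (poor) .. 5 (excellent)."""
    weights = range(1, 6)
    weighted_sum = sum(w * c for w, c in zip(weights, counts))
    return weighted_sum / sum(counts)

# Illustrative counts (261 respondents in total):
print(f"{average_weighted_response([0, 21, 90, 123, 27]):.2f}")  # -> 3.60
```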

IV. Quantitative Study

Considering the scale of the survey, it is important to maximize the responses; however, the responses have to be precise and should come from experienced users [17,32]. To achieve this, the means given in Table 2 were used to spread the survey and collect the responses. The effectiveness of these means is given in Table 3, while Figure 1 illustrates the spread of the survey call. The confidence level expresses the level of confidence we have that a response is correct and precise. A confidence level of 95% is commonly used in research, although 99% is also used. The confidence interval determines the range of acceptable results and is always presented with the ± symbol: if the threshold value is 67 and the confidence interval is 5, values from 62-72 are considered legitimate. Since the survey was conducted in two different institutes to compare the state of the art of IT enabled services, almost half of the responses came from each institute. A 5-level Likert scale was used in this research, ranging from poor to excellent, i.e. from 1-5 on a quantitative scale; the value for poor is 1 and the value for excellent is 5. Every response that chooses 'poor' for an item is multiplied by 1, while 'somewhat acceptable' is multiplied by 2, 'acceptable' by 3, 'very good' by 4, and 'excellent' by 5. The average weighted response is obtained by dividing the weighted response by the total number of respondents. It is also important to note that some questions were not answered by some individuals: 261 respondents responded for institute A, while some 325 respondents responded for institute B.
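The interval of 3 reported in Table 4 is consistent with the usual margin-of-error formula for a proportion with a finite-population correction; the sketch below assumes that formula (z = 2.576 for 99% confidence, p = 0.5 for the most conservative response distribution, i.e. the 'Percentage 50' row of Table 4):

```python
import math

# Margin of error for a sample of n drawn from a finite population of N,
# at critical value z and assumed response distribution p.
def margin_of_error(n, N, z=2.576, p=0.5):
    fpc = math.sqrt((N - n) / (N - 1))        # finite-population correction
    return z * math.sqrt(p * (1 - p) / n) * fpc * 100

# 548 sampled out of the 776 people the survey reached (Table 4):
print(round(margin_of_error(n=548, N=776), 1))  # -> 3.0, Table 4's interval
```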


V. Qualitative Study and Triangulation

Table 5 and Table 6 summarize the survey response statistics from institutes A and B respectively. The results shown in Table 7 clearly demonstrate that the quality of ITES is better in institute A than in institute B on all the factors. Considering these results, a qualitative study was formulated in which four interviews were conducted to gain insight into the ITES of the respective institutes: two interviews were conducted in institute A, while the remaining two were conducted at institute B. Along with the illustrative descriptions of the ITES quality items, the interviewees preferred to give absolute numbers when measuring the quality. The summary of the results is presented in Table 8, which clearly demonstrates that the interviewees (like the survey respondents) believed that the quality of ITES is better in institute A than in institute B. In the survey, institute A was observed to lead in all the quality factors, while in the interviews institute A leads in 12 out of 17 factors, is equal in 4, and lags in 1 factor. Figures 2 and 3 depict the quantitative and qualitative analysis respectively. In the triangulation process, it is observed whether the findings of the qualitative and quantitative methods converge to similar results. The triangulation process is shown in Table 9.
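A minimal sketch of this dominant-status rule, assuming per-factor verdicts of 'A' (institute A better), 'B', or 'equal' from each method; the helper and the three sample factors are illustrative, with verdicts taken from the findings discussed in the next section:

```python
# Dominant-status triangulation: converging verdicts are confirmed, and on
# disagreement the qualitative (interview) verdict prevails, per the
# partially mixed sequential dominant status design used in this study.

def triangulate(quantitative, qualitative):
    """Each argument maps factor -> 'A', 'B', or 'equal'."""
    return {
        factor: {"verdict": qualitative[factor],   # qualitative dominates
                 "converged": qualitative[factor] == quant}
        for factor, quant in quantitative.items()
    }

survey = {"privacy": "A", "customization": "A", "response time": "A"}
interviews = {"privacy": "A", "customization": "equal", "response time": "B"}
for factor, outcome in triangulate(survey, interviews).items():
    print(factor, outcome)
# privacy -> A (converged); customization -> equal; response time -> B
```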

VI. Discussion

There are 17 factors for measuring the quality of ITES in institutes in Saudi Arabia. Two institutions, one government and one private university, were selected for this purpose in the capital city of Riyadh. The results of the study demonstrate that the quality of the ITES is better in institute A than in institute B. After the completion of the triangulation process, the results did not change much from the initial findings, since the quantitative and qualitative results were largely consistent. For the factors 'accessibility', 'delivery of teaching', 'efficiency', 'information quality', 'interoperability', 'privacy', 'security', 'service reliability', 'service usability', 'site design', 'system integrity', and 'user support', the qualitative and quantitative findings were the same. For the factors 'customization', 'functionality', 'trust', and 'usefulness', the qualitative findings differ from the quantitative findings: the survey established that institute A is better than institute B, but the interviews established that both institutes have the same standing. It was mentioned in the methodology section that the qualitative findings have dominance over the quantitative findings; therefore, the qualitative results are adopted in case of a disagreement between the two. Since the qualitative findings demonstrate that the state of the art of the two institutions is not different for these four factors, the qualitative findings hold. For one factor, 'response time', the quantitative findings showed institute A to be better, while the qualitative findings indicated otherwise; for the reasons mentioned above, the qualitative results are again upheld.

VII. Conclusion

It can be summarized that, in order to compare the state of the art of ITES in Saudi universities, 17 factors were identified. Two institutions were compared based on quantitative and qualitative data, and the results show that institute A leads with a better score on 12 factors, the scores are equal on four factors, and institute B leads on only one factor. It can be concluded that the state of the art of ITES is much better in institute A than in institute B. Institute B should focus on improving the quality of its ITES, especially in the areas of accessibility, information security, privacy, and user support, while institute A needs to improve in customization, usefulness, response time, and trust.

VIII. Acknowledgement

Figure 1: Spread of the survey call
Table 2: Means used to spread the survey and the responses collected

No. Mean of Sending Survey Count Responses %
1 Paper 10 10 100
2 Web Link 500 398 80
3 Skype Text 20 10 50
4 Google Talk 50 30 60
5 Phone call 60 40 67
6 Text message 40 36 90
7 Facebook messaging 96 64 67
Total 776 588 75
Table 3: Effectiveness of each survey channel as a share of all responses

No. Mean of Sending Survey Count Responses % of Total Responses
1 Paper Survey 10 10 1.7%
2 Web Link 500 398 67.6%
3 Skype Text Request 20 10 1.7%
4 Google Talk Link Forwarding 50 30 5.1%
5 Phone call Requests 60 40 6.8%
6 Text message Requests 40 36 6.12%
7 Facebook messaging 96 64 10.8%
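The last column of Table 3 appears to be each channel's share of the 588 total responses; a quick check under that interpretation (channel names and counts copied from the table):

```python
# Share of all collected responses contributed by each distribution channel.
responses = {"Paper Survey": 10, "Web Link": 398, "Skype Text": 10,
             "Google Talk": 30, "Phone call": 40,
             "Text message": 36, "Facebook messaging": 64}
total = sum(responses.values())                    # 588 responses in all
for channel, count in responses.items():
    print(f"{channel}: {100 * count / total:.1f}%")
# Web Link -> 67.7%, matching Table 3's 67.6% up to rounding.
```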
Table 4: Survey statistics

Measure Number
Confidence Level 99%
Confidence Interval 3
Population Accessed 776
Sample Size 548
Percentage 50
*The actual population size is unknown [9]
Table 5: Survey response statistics for institute A (weighted responses and average weighted response per factor)

Items Poor Somewhat Acceptable Acceptable Very Good Excellent Average Weighted Response
Accessibility 0 42 270 492 135 3.60
Customization 0 48 972 1584 315 3.41
Delivery of teaching 3 24 252 504 90 3.59
Efficiency 0 36 216 516 180 3.72
Functionality 0 30 278 384 255 3.67
Information quality 3 36 234 504 165 3.65
Interoperability 3 42 331 384 90 3.78
Privacy 3 36 341 492 105 3.88
Response time 3 24 261 456 165 3.65
Security 0 42 243 420 90 3.53
Service reliability 0 24 234 336 165 3.67
Service usability 0 24 297 348 225 3.68
Site design 0 6 234 552 120 3.75
System integrity 0 36 279 456 105 3.56
Trust 0 42 405 348 75 3.37
Usefulness 0 30 297 372 210 3.65
User support 0 36 252 348 225 3.68
Table 6: Survey response statistics for institute B (weighted responses and average weighted response per factor)

Items Poor Somewhat Acceptable Acceptable Very Good Excellent Average Weighted Response
Accessibility 12 102 531 312 75 3.17
Customization 0 126 540 228 75 2.94
Delivery of teaching 42 120 531 156 15 2.69
Efficiency 9 132 495 216 120 3.06
Functionality 6 138 504 288 75 3.06
Information quality 21 72 414 384 150 3.24
Interoperability 30 96 432 264 120 3.02
Privacy 24 138 177 120 75 1.70
Response time 9 108 477 288 135 3.17
Security 18 132 468 228 120 3.01
Service reliability 18 138 468 204 75 2.92
Service usability 24 108 441 288 60 2.98
Site design 21 144 477 192 90 2.91
System integrity 18 180 450 168 60 2.81
Trust 6 150 432 264 135 3.10
Usefulness 18 144 495 204 75 2.92
User support 12 144 468 228 90 2.99
Table 7: Comparison of the survey results for institutes A and B
Table 8: Summary of the interview results for institutes A and B
Table 9: Triangulation of the quantitative and qualitative findings

Appendix A

Appendix A.1

The authors thank Prince Sultan University in the Kingdom of Saudi Arabia for funding this project in the year 2013-2014 under number IBRP-CFW-2013-11-14.

Appendix B

1. A. George, G. G. Kumar. Impact of service quality dimensions in internet banking on customer satisfaction. DECISION, 2014, vol. 41.
2. A. M. S. Basit Shahzad. Application of Quantitative Research Methods in Identifying Software Project,
3. A. Parasuraman, V. A. Zeithaml, A. Malhotra. E-S-QUAL: a multiple-item scale for assessing electronic service quality. Journal of Service Research, 2005, vol. 7.
4. B. Shahzad, T. Afzal, R. Irfan. Enhanced risk analysis: relative impact factorization. First International Conference on Information and Communication Technologies (ICICT 2005), 2005.
5. B. Shahzad, J. Iqbal, Z. Haq, S. Raza. Distributed risk analysis using relative impact technique. 3rd Asian Conference on Intelligent Systems and Networks, 2006.
6. B. Shahzad, J. Software risk management: prioritization of frequently occurring risks in software development phases using the relative impact risk model. 2nd International Conference on Information and Communication Technology (ICICT 2007), 2007.
7. B. Shahzad, E. Alwagait. Utilizing technology in education environment: a case study. Tenth International Conference on Information Technology: New Generations (ITNG 2013), 2013.
8. B. Shahzad, E. Alwagait, S. Alim. Impact of change in weekend days on social networking culture in Saudi Arabia. 2014 International Conference on Future Internet of Things and Cloud, 2014.
9. B. Shahzad, R. Aziz. Factors for measurement of ITES quality for higher education institutions in Saudi Arabia. Global Journal of Computer Science and Technology, 2015, vol. 15.
10. B. Shahzad, A. Alwagait. Does a change in weekend days have an impact on social networking activity? Journal of Universal Computer Science, 2015, vol. 20.
11. C.-W. Tan, I. Benbasat, R. T. Cenfetelli. IT-mediated customer service content and delivery in electronic governments: an empirical investigation of the antecedents of service quality. MIS Quarterly, 2013, vol. 37.
12. C. E. Hill, K. D. Loch, D. Straub, K. El-Sheshai. A qualitative assessment of Arab culture and information technology transfer. Journal of Global Information Management (JGIM), 1998, vol. 6.
13. C. Weston, T. Gandell, J. Beauchamp, L. McAlpine, C. Wiseman, C. Beauchamp. Analyzing interview data: the development and evolution of a coding system. Qualitative Sociology, 2001, vol. 24.
14. D. A. Abowitz, T. M. Toole. Mixed method research: fundamental issues of design, validity, and reliability in construction research. Journal of Construction Engineering and Management, 2009, vol. 136.
15. D. Sedera, G. Gable. A factor and structural equation analysis of the enterprise systems success measurement model. ICIS 2004 Proceedings, 2004, paper 36.
16. E. Alwagait, B. Shahzad, S. Alim. Impact of social media usage on students' academic performance in Saudi Arabia. Computers in Human Behavior, 2015, vol. 51.
17. G. Pragmatism vs interpretivism in qualitative information systems research. European Journal of Information Systems, 2012, vol. 21.
18. H.-F. Lin. Determining the relative importance of mobile banking quality factors. Computer Standards & Interfaces, 2013, vol. 35.
19. J. C. Greene, V. J. Caracelli, W. F. Graham. Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 1989, vol. 11.
20. J. Ritchie, J. Lewis, C. M. Nicholls, R. Ormston. Qualitative Research Practice: A Guide for Social Science Students and Researchers. Sage, 2013.
21. J. W. Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Sage Publications, 2009.
22. J. W. Lee, P. S. Jones, Y. Mineyama, X. E. Zhang. Cultural differences in responses to a Likert scale. Research in Nursing & Health, 2002, vol. 25.
23. L. Burgess. A conceptual framework for understanding and measuring perceived service quality in net-based customer support systems. CollECTeR LatAm Conference, Santiago, Chile, 2004.
24. L. I. Meho. E-mail interviewing in qualitative research: a methodological discussion. Journal of the American Society for Information Science and Technology, 2006, vol. 57.
25. M. A. Alanezi, A. Kamil, S. Basri. A proposed instrument dimensions for measuring e-government service quality. International Journal of u- and e-Service, 2010, vol. 3.
26. P. Burnard. A method of analysing interview transcripts in qualitative research. Nurse Education Today, 1991, vol. 11.
27. S. I. Swaid, R. T. Wigand. Measuring the quality of e-service: scale development and initial validation. Journal of Electronic Commerce Research, 2009, vol. 10.
28. S. Livingstone. Taking risky opportunities in youthful content creation: teenagers' use of social networking sites for intimacy, privacy and self-expression. New Media & Society, 2008, vol. 10.
29. S. Q. Qu, J. Dumay. The qualitative research interview. Qualitative Research in Accounting & Management, 2011, vol. 8.
30. V. A. Zeithaml, A. Parasuraman, A. Malhotra. A Conceptual Framework for Understanding E-Service Quality: Implications for Future Research and Managerial Practice. 2000.
31. Y. Al-Ohali, A. A. Al-Oraij, B. Shahzad. KSU News Portal: a case study. International Conference on Internet Computing (ICOMP'11), 2011.
32. Z. Yang, S. Cai, Z. Zhou, N. Zhou. Development and validation of an instrument to measure user perceived service quality of information presenting web portals. Information & Management, 2005, vol. 42.