Asian Journal of Distance Education
Volume 17, Issue 2, 2022
Is the empirical research we have the research we can trust? A review of
distance education journal publications in 2021
Yiwei Peng, Junhong Xiao
Abstract: This study aims to investigate the trustworthiness of empirical research published in distance
education (DE) journals, an area that has yet to be systematically explored. The review covers 238
empirical studies which were based on primary data and published in 2021 in eight DE journals listed in
Social Science Citation Index (SSCI) or Emerging Sources Citation Index (ESCI). Findings suggest that
they are fairly good in terms of clarity of study context, purpose, and duration (where a study involves an intervention or is otherwise length-sensitive), although there is still some room for
improvement. In contrast, a large proportion of the studies reviewed fall short of rigor, to degrees that range from considerable to alarming, in terms of research approach and design, sampling strategies,
source of data, researcher bias, ethical concerns, and limitations. Specifically, less than one quarter of the sample studies adopted a qualitative approach, while over 90% of the quantitative studies followed survey or correlational designs, a striking imbalance in research approaches and designs. Over 60% of the sample studies did not spell
out their sampling strategies and only about 20% of the specified sampling strategies were probabilistic
in nature, limiting the generalizability of the findings. Over 90% employed questionnaires/scales/rubrics
and/or interview protocols to collect data but over 70% of these two types of instruments were neither
reviewed nor piloted before put to use with less than 50% available in full content, hence likely to
undermine the value of the findings. Researcher bias, ethical concerns, and limitations were addressed
in 10%, 50%, and 70% of the studies respectively. Implications for future research are also discussed
in the light of these findings.
Keywords: distance education, empirical research, methodological rigor, journal publication, literature
review
Highlights
What is already known about this topic:
• Major research themes and trends in DE journal publications.
• Citation patterns, journal impact, authorship patterns, and co-authorship network.
• Keywords, theoretical/conceptual frameworks, variables, and population/participants.
What this paper contributes:
• The study focuses exclusively on the appropriateness and robustness of research design.
• The lack of methodological rigor that has long plagued the field of DE persists today, to different extents in different aspects.
• Empirical research in DE needs improving, especially in terms of research approach and design,
sampling, data source, and possible weaknesses such as limitations, researcher bias and
ethical concerns.
Implications for theory, practice and/or policy:
• More qualitative and mixed-methods research should be encouraged.
• The tendency to favor correlational and survey research over other types of research design
should be reduced.
• More longitudinal research is needed.
Published by Asian Society for Open and Distance Education (ASODE), Japan
ISSN 1347-9008 http://www.asianjde.com/
This is an open access article under the CC BY license
Introduction
Based on research findings from 1928 to 1998 concerning the effectiveness of technology-enhanced
learning, chiefly distance learning, in comparison with other models of teaching, Russell’s (1999) No
Significant Difference Phenomenon is often cited as evidence to legitimize distance education (DE).
This is because it is taken for granted that the studies included in this work were trustworthy. Similarly,
the meta-analysis of Bernard et al. (2009) is often used to support Anderson’s (2003) equivalency
theorem according to which, among other things, a high level of one type of interaction can result in an
effective educational experience even if the other two types are “offered at minimal levels, or even
eliminated”. Nevertheless, scrutiny of the empirical studies included in Bernard et al. (2009) shows
that not all of them were as rigorously designed as claimed by their authors. “None of the 11 inclusion
criteria adopted in this study indicates whether each interaction treatment (IT) (i.e., learner–learner,
learner–instructor or learner–content interaction) is independent of the others” (Xiao, 2017, p. 127). In
fact, “studies were categorized by the most prevalent interaction type contained in the independent
variable” (Bernard et al., 2009, p. 1253). In other words, a study grouped under a particular type of interaction could actually involve more than one type (Xiao, 2017). Therefore, the claim that the other two types were “even eliminated” is not justified.
Systematic reviews of DE journal publications are not uncommon in the field of DE. They are often
intended to identify major research themes and trends in the publications over a certain period of time.
Bozkurt et al. (2015) explored DE research trends emerging from seven peer-reviewed journals from
2009 to 2013 in terms of “most frequent(ly) indicated keywords, chosen research areas, emphasized
theoretical/conceptual backgrounds, employed research designs, used data collection instruments and
data analysis techniques, focused variables, targeted population and/or participant groups, cited
references, and cited authors” (p. 336). Following up on Bozkurt et al. (2015), Bozkurt and Zawacki-Richter (2021) examined research trends and thematic patterns in the publications of six DE journals
between 2014 and 2019. Zawacki-Richter and Naidu (2016) set out to reveal themes and map out trends
in DE research from the first 35 years of publications of Distance Education. Çakiroğlu et al. (2019)
investigated the research trends, major concepts, and cut-off points in the articles published between
2009 and 2016 in five major peer-reviewed journals. Similar studies include Berge and Mrozowski (2001)
and Lee et al. (2004). Bibliographic reviews of DE research have also been conducted to explore citation
patterns in DE research (Martínez & Anderson, 2015), the impact and significance of DE journals in the
field (Zawacki-Richter & Anderson, 2011), and co-authorship networks (Gomes & Barbosa, 2018), or with the intention of helping policymakers develop quality DE programs (Bishop & Spake, 2003). Zawacki-Richter
et al. (2009) reviewed 695 articles published in five prominent DE journals (2000-2008) with the purpose
of identifying gaps, priority areas, methods and authorship patterns in DE research. A similar study was
conducted with an exclusive focus on the publications (2000-2015) of a single journal - the International
Review of Research in Open and Distance/Distributed Learning (Zawacki-Richter et al., 2017) while
Bozkurt’s (2019) bibliometric examination of publications and their references in four DE journals aimed
“to investigate and explore the intellectual network and dynamics of the DE field” (p. 499). This group
also includes Bozkurt et al. (2015) and Zawacki-Richter et al. (2010). Zawacki-Richter and Bozkurt (2022) is the latest and also a very comprehensive review of DE journal publications, but its focus, again, is not on research design.
It seems that no previous systematic reviews of DE literature focus exclusively on the appropriateness
and robustness of research design although a few studies did cover this topic. For example, based on
the same sample of Zawacki-Richter et al. (2009), Zawacki-Richter and von Prümmer (2010) explored
how gender, collaboration and research methods relate to each other. Issues of research design were
also “quantitatively” reported in Bozkurt et al. (2015) and Zawacki-Richter et al. (2009). No doubt,
identification of research themes and trends and bibliometric analysis of journal publications can
contribute to the development of the field. Nevertheless, it is equally relevant to evaluate the rigor and
trustworthiness of research design adopted by DE studies, because whether a research design is
rigorous and trustworthy and whether it is administered appropriately directly determine the validity,
reliability, dependability, believability as well as generalizability of the findings. For example, unless the
empirical studies analysed are rigorous in design and appropriately administered, the results of a meta-analysis may not be trustworthy. Bernard et al. (2009), mentioned above, is a case in point. Also, if the
findings of a study are unreliable, it may misguide and misinform further research and practice, possibly
doing more harm than good to the field. This is a gap that has yet to be filled and the current study aims
to fill this gap by answering the following research question: how trustworthy are empirical studies
published in major DE journals in terms of research design and administration?
Methodology
Sampling
The first step is to select journal samples. The primary concern is to ensure the representativeness of
the samples in terms of quality, impact, and internationality. With this basic principle in mind, the
following inclusion criteria are developed and used in this study:
• Listed in Social Science Citation Index (SSCI) or Emerging Sources Citation Index (ESCI);
• Published in English;
• Had a major focus on DE;
• Peer-reviewed;
• Had an international editorial team.
There are eight journals meeting these criteria:
• American Journal of Distance Education (AJDE);
• Distance Education (DE);
• International Journal of Distance Education Technologies (IJDET);
• International Review of Research in Open and Distance/Distributed Learning (IRRODDL);
• Online Learning (OnL);
• Open Learning (OL);
• Open Praxis (OP);
• Turkish Online Journal of Distance Education (TOJDE) (see Table 1).
Table 1: Sample journals (statistics as of June 22, 2022).

| Journal | Citation index | Peer-reviewed | Editor | Associate editors | Editorial board |
|---|---|---|---|---|---|
| AJDE | ESCI | Yes | 1 (USA) | 3 (USA and Spain) | 21 (8 countries) |
| DE | SSCI | Yes | 1 (Australia) | 4 (China, USA, Germany and Canada) | 30 (11 countries) |
| IJDET* | ESCI | Yes | 1 (Canada) | 6 (Saudi Arabia, China, Italy, US, Canada, Cyprus) | 40 (18 countries) |
| IRRODDL | SSCI | Yes | 1 (Canada) | 2 (Canada) | 20 (12 countries) |
| OnL** | ESCI | Yes | 1 (USA) | 17 (USA and Germany) | 18 (4 countries) |
| OL | ESCI | Yes | 3 (UK) | | 12 (7 countries) |
| OP | ESCI | Yes | 1 (Spain) | | 7 (7 countries) |
| TOJDE*** | ESCI | Yes | 1 (Turkey) | 1 (Turkey) | 75 (32 countries) |

Notes: * IJDET has a Managing Editor from Taiwan, China.
** OnL has an Advisory Board with 7 members from USA, Ireland, UK and Canada and an Editorial Review Board with 67 members from 11 countries.
*** TOJDE has an Honorary Editorial Board with 11 members from Turkey, Sweden, Canada, Ireland, USA and Australia.
The next step is to select articles published in these journals. No software was available to process data for the purpose of this study, which means the data had to be analysed manually. To keep the workload manageable, the scope of selection was limited to a one-year timeframe. The year 2021 was chosen to reflect the latest state of DE research. Inclusion criteria are:
• Empirical study;
• Primary/first-hand data;
• Published in 2021;
• Full-text available.
With these criteria in mind, the researchers visited the websites of the sample journals, making a preliminary assessment by reading article titles, abstracts and keywords. A total of 245 promising articles were then shortlisted for further examination. Seven of them were excluded: four were based on secondary data, one was not empirical, and two were not available in full text despite repeated efforts to locate them. The final sample consists of 238 articles (see Table 2).
Table 2: Sample articles.

| Journal | Number of issues | Number of articles |
|---|---|---|
| AJDE | 4 | 15 |
| DE | 4 | 21 |
| IJDET | 4 | 18 |
| IRRODDL | 4 | 32 |
| OnL | 4 | 66 |
| OL | 3 | 8 |
| OP | 3 | 19 |
| TOJDE | 4 | 59 |
| Total | 30 | 238 |
Assessment rubric development
Although the second author is an experienced reviewer and editor, in an attempt to overcome researcher
bias, a review of research design literature was conducted to serve as the foundation upon which an
assessment rubric was to be developed.
Specifically, the proposed rubric (Table 3) is informed by Azevedo et al. (2011), CASP (2006), Cathala and Moorley (2019), Chenail (2010), Cohen, Manion and Morrison (2007), Creswell (2012), Drummond and Murphy-Reyes (2018), Jack et al. (2010), and Penn State University Libraries (2021).
The draft rubric was then evaluated by two experienced DE researchers and editors whose feedback
was adopted to improve the rubric.
Table 3: Assessment rubric.

| Aspect | Hints |
|---|---|
| Context of study | Was the context of study clearly stated? |
| Hypothesis, research question or purpose | Was there a clear statement of the hypothesis, research question or purpose of the study? |
| Approach | Did the study adopt a quantitative, qualitative or mixed approach? |
| Design: experimental | If the study was designed as an experiment, was it a true experiment or quasi-experiment? |
| Design: non-experimental | If it was not an experimental study, what was it? Identify the design used in the study in line with Creswell's (2012) typology. If a study does not fit into any of the eight types in Creswell (2012), did the author name the research design used? If yes, add it as a new category to Creswell's (2012) list; otherwise, look for evidence, classify it into a proper category and add it to the existing list. |
| Design: duration | If it involved an intervention/treatment or was length-sensitive, was the duration specified? |
| Sampling strategy | Was the sampling or grouping strategy clearly stated? If yes, what strategy was used? |
| Data collection: sources of data | What kind of data was collected to answer the research question(s)? |
| Data collection: most frequently used instruments | What instrument was most frequently used? Was it developed exclusively for the study or adapted from an existing one by the researcher? Or was it an existing instrument developed elsewhere? Was the instrument reviewed and piloted, or only reviewed, or only piloted, or neither reviewed nor piloted, before it was used to collect data for this study? Was the content (for example, questionnaire items or interview protocol) available in its entirety? |
| Data collection: administration | Was data collection, including the how, who and when, clearly stated? |
| Data collection: cross-sectional or longitudinal | Was the data collected only at one point in time or over a period of time? |
| Data analysis | Was the method of analysis appropriate? Was the analysis process rigorous? |
| Researcher bias | Did the researcher explain how his/her own potential bias and influence was managed throughout the study, including sampling, data collection, and/or intervention administration? |
| Ethical issue | Were ethical issues, if any, adequately addressed? For example, informed consent, confidentiality, anonymity, or approval from an ethics committee? |
| Limitation | Was there any limitation acknowledged by the researcher? |
Analysis procedure
Given the purpose of the study, the method of directed content analysis was adopted to analyze the sample (Hsieh & Shannon, 2005). The two researchers first used the assessment rubric to analyze five articles independently. The results were highly consistent, with only three discrepancies concerning clear statement of research purpose and type of design, showing that the rubric was operable. With these discrepancies resolved, the researchers reviewed the remaining 233 articles independently. When all the analysis was done, the results for each article were compared, and discrepancies were resolved by re-reading and discussing the section(s) of the article in question until a consensus was reached. Meanwhile, to further ensure the reliability and validity of the analysis and reduce possible researcher bias, a researcher independent of the study was invited to randomly choose 30 articles and review them using the rubric. The results of his analysis were compared with our own analysis of the same articles. The agreement rate was 94.7% and disagreements were resolved through negotiation.
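The agreement check above is a simple percent-agreement calculation over paired codings. As a minimal sketch (not taken from the paper; the coder labels and rubric values below are hypothetical), it can be expressed as:

```python
# Minimal sketch of a percent-agreement check between two coders.
# The codings below are hypothetical; the study reports 94.7% agreement
# over 30 independently re-coded articles.

def percent_agreement(coder_a, coder_b):
    """Share of rubric codings on which the two coders agree."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must rate the same items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# One label per rubric aspect for a single article (hypothetical values).
coder_a = ["survey", "convenience", "piloted", "cross-sectional", "limitations stated"]
coder_b = ["survey", "convenience", "not piloted", "cross-sectional", "limitations stated"]

print(percent_agreement(coder_a, coder_b))  # 4 of 5 codings match -> 0.8
```

Note that simple percent agreement does not correct for chance agreement; a chance-corrected statistic such as Cohen's kappa is a common alternative when coders use a fixed category set.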
Limitations
Only empirical studies published in 8 high-impact DE journals in the year 2021 were included in the
sample. Empirical DE research published in other journals or in languages other than English was not
taken into account in this review. Therefore, findings from the study may not reflect the overall picture
of empirical research in the field of DE. That said, the internationality of the editorial teams, reviewers, and contributors may, to some extent, compensate for these limitations. Future research may extend the scope of the sample, for example to a longer timeframe, more journals, or other languages. Additionally, a finer-grained analysis may better reflect the true picture of empirical studies in the field. For example, when it comes to an intervention, how was blinding of participants, investigators and/or assessors handled? How was confounding controlled? How was intervention integrity ensured? Future research would certainly benefit from such finer-grained analysis.
Findings
Context and purpose of the study
Over 90% (n = 220) of the studies provided their context of study. It would be difficult to contextualize the
interpretation of findings without a clear description of this context. For example, Kohnke (2021) aimed
to investigate “Hong Kong tertiary English teachers’ attitudes and beliefs towards various modes of
professional development in ICT skills to enhance their teaching” (p. 39). However, no description of
English language teaching at tertiary institutions in Hong Kong was given, which means the findings are
de-contextualized.
All the studies contained clear statements of the purpose of study, research questions and/or
hypotheses except ten articles which were somewhat obscure in this regard (for example, Johnson &
Barr, 2021; Liu & Shirley, 2021; Van & Thı, 2021). However, it is worth noting that not all the research
questions were adequately framed. For example, “To what extent does online peer-feedback training in
a MOOC positively influence students’ perception of peer-feedback and peer-feedback training?” (Kasch
et al., 2021). The use of the word “positively” implies that peer feedback is good for learning. This is a
popular assumption. Nevertheless, it does not tell the whole story. There is good peer feedback and
poor peer feedback. Poor peer feedback, if wrongly regarded as useful and blindly adopted, may do
more harm than good to learning. Moreover, training in peer feedback skills is not the only factor affecting the quality of feedback, which is, perhaps to a greater extent, correlated with the reviewer's level of relevant subject knowledge.
The purpose of a study determines its participants and source(s) of data. For example, of the four
research questions raised by Lindecker and Cramer (2021, p. 146), two of them would require involving
student participants: “What is the prevalence of student self-disclosure to faculty members?” and “Do
students self-disclose equally to both male and female faculty members?” This is because no one else
is in a better position to answer these questions; others can at best guess. Unfortunately, only faculty members were surveyed in this study. The same is true of Mac Domhnaill et al. (2021).
Zhu and Chikwa (2021), a study of China-Africa cooperation in open and distance higher education, is
another example. There was a research question about open and distance learning development in
Chinese higher education institutions but no research question aimed to investigate this development in
Africa. Furthermore, the study “employed a combination of qualitative interviews and critical literature
review” (Zhu & Chikwa, 2021, p. 8). However, there is no telling what documents were reviewed and
why only Chinese educators were interviewed in this study.
Research approach and design
Of the 238 studies reviewed, 106 are quantitative (44.5%), 55 qualitative (23.1%) and 77 mixed (32.4%)
in terms of research approach. As regards research design, only 8 (3.4%) are experimental with the
majority (n = 6) being quasi-experimental, and 96.6% (n = 230) are non-experimental (see Table 4).
Table 4: Research design (n = 238).

| Category | Subcategory/type of design | Number |
|---|---|---|
| Experimental (n = 8; 3.4%) | True experiment | 2 |
| | Quasi-experiment | 6 |
| Non-experimental (n = 230; 96.6%) | Survey design | 53 |
| | Correlational design | 45 |
| | Mixed-methods (n = 61; 26.5%): convergent parallel | 22 |
| | Mixed-methods: exploratory sequential | 22 |
| | Mixed-methods: explanatory sequential | 8 |
| | Mixed-methods: embedded | 6 |
| | Mixed-methods: multiphase | 3 |
| | Ethnographic design (including case study and netnographic design) | 34 |
| | Content/thematic analysis design | 18 |
| | Grounded theory design | 4 |
| | Narrative research design | 4 |
| | Design-based research design | 4 |
| | Phenomenological research design | 3 |
| | Action research design | 3 |
| | Self-study design | 1 |
Of the 230 non-experimental studies, slightly over one quarter (n = 61; 26.5%, or 25.6% in relation to
the entire sample) can be categorized as mixed-methods design. The second most dominant design is
survey research (n = 53; 23%, or 22.3% in relation to the entire sample), followed by correlational design
(n = 45; 19.6%, or 18.9% in relation to the entire sample) and ethnographic design (including case study
and netnographic design) (n = 34; 14.8%, or 14.3% in relation to the entire sample) and content/thematic
analysis (n = 18; 7.8%, or 7.6% in relation to the entire sample). These five groups represent 88.7% (n
= 211) of all the studies reviewed. The remaining 8% (n = 19) can be classified into a miscellaneous group,
including research designs such as grounded theory, narrative research, design-based research,
phenomenological research, action research, and self-study research.
For those studies which involved an intervention/treatment or were length-sensitive, the duration ranged
from as long as six years and four months (Bose, 2021) to about one hour (Arnò et al., 2021; Juarez &
Critchfield, 2021) and even as short as 480 seconds (Stadler et al., 2021). However, 10 studies did not
specify the duration explicitly although it directly impacted the effectiveness of an intervention or the
reliability of survey results. For example, “An online survey was conducted at the end of the semester…
The questionnaire … was distributed to students via the student information system, and data were
surveyed from April 20 to May 10, 2020” (Zheng et al., 2021, p. 43). The length of a semester differs
considerably in different contexts. Incidentally, given that this study was conducted in China, ‘the end of the semester’ should be towards the end of June at the earliest. Similarly, some studies used the concept
of course to indicate duration (for example, Heflin & Macaluso 2021), which may also be puzzling
because the duration of a course varies a great deal across different countries or educational institutions,
and even within an institution.
Sampling strategy
It is noteworthy that 62.6% (n = 149) of the studies did not explain what sampling strategy was used to recruit participants, instead saying, for example, “a total of 97 students (48 face-to-face and 49 online students)
submitted a Web-based survey at the end of the semesters…” (Lee & Nuatomue, 2021, p. 5) and
“approximately 300 of them responded to the survey…” (Khan et al., 2021, p. 87). As for the 37.4% (n
= 89) that specifically named the sampling strategies used, only 18% (n = 16) were probabilistic and
80.9% (n = 72) non-probabilistic with one using a mixture of probabilistic and non-probabilistic sampling
(Lee et al., 2021) (Table 5).
Table 5: Sampling strategies used (n = 89).

| Category | Sampling strategy | Number |
|---|---|---|
| Probabilistic (n = 16) | Stratified random sampling | 5 |
| | Random sampling | 7 |
| | Cluster random sampling | 3 |
| | Probability-based sampling | 1 |
| Non-probabilistic (n = 72) | Purposive sampling and snowball | 3 |
| | Purposive sampling | 34 |
| | Convenience sampling | 26 |
| | Convenience and purposive sampling | 3 |
| | Convenience and snowball | 4 |
| | Snowballing | 2 |
| Mixture of probabilistic and non-probabilistic (n = 1) | Stratified random sampling and snowball | 1 |
Data collection
Sources of data were many and varied (n = 339). Note that the number of sources is more than the
number of studies because many studies collected data from more than one source. However, it is worth
noting that some studies did not give the rationale for this mixture. For example, Byrne et al. (2021) used semi-structured interviews and focus groups to collect data but did not explain the intended role of each.
The same is true of Paredes et al. (2021), which collected data via questionnaire and interview.
Data was most frequently collected through surveys using a questionnaire, scale or rubric (n = 148; 62.2%), more than half of them closed-ended in nature. Interviews (n = 73; 30.7%) were the second most frequent source of data. The other sources are far less prominent (see Table 6).
Table 6: Sources of data (n = 339).

| Source of data | Number |
|---|---|
| Questionnaire/scale/rubric (closed-ended) | 90 |
| Questionnaire/scale/rubric (open-ended) | 14 |
| Questionnaire/scale/rubric (closed- and open-ended) | 44 |
| Subtotal | 148 |
| Interview (closed- and open-ended questions) | 1 |
| Interview (open-ended) | 73 |
| Subtotal | 74 |
| Test score/course grade | 26 |
| Log statistics | 15 |
| Observation | 12 |
| Discussion forum post | 11 |
| Document review | 11 |
| Focus group | 8 |
| Reflection | 8 |
| Course artefact/e-portfolio | 6 |
| Course/activity evaluation | 4 |
| Tweet | 3 |
| Learning journal | 3 |
| Semi-structured discussion | 2 |
| Debrief | 2 |
| Meeting notes | 2 |
| Email/phone call | 1 |
| Vignettes | 1 |
| Administrative data | 1 |
| Logbook | 1 |
Questionnaires/scales/rubrics and interview protocols were the two most frequently used instruments, totalling 280 because many studies employed more than one instrument to collect data. Of these, 225 (80.4%) were either newly developed or adapted from existing ones by the researcher(s), and only roughly 20% (n = 55) were existing instruments. Furthermore, only 13.9% (n = 39) of the instruments were both reviewed and piloted, and 14.3% (n = 40) were either reviewed or piloted, before
they were put to use in the current study. In other words, over 70% (n = 201) were neither reviewed nor
piloted. Moreover, less than half of the instruments (n = 124; 44.3%) were available in their entirety (see Table 7). Some studies provided little useful information about the content of the instrument used. For example, “The tool includes 5 Likert type questions, 8 closed-ended questions - three of which are yes-no questions - and 2 open-ended questions” (Karadag et al., 2021, p. 182). It is also worth mentioning that only the data from the 5 Likert-type questions and the three yes-no questions were reported in this study. Van and Thı (2021) claimed to be a mixed-approach study, but the only instrument we know of was a 41-item Likert-scale questionnaire. No information about the qualitative part was available.
Table 7: Questionnaire/scale/rubric and interview protocol (n = 280).

| Instrument | Number |
|---|---|
| Newly developed or adapted (reviewed and piloted) | 37 |
| Newly developed or adapted (reviewed) | 27 |
| Newly developed or adapted (piloted) | 9 |
| Newly developed or adapted (neither reviewed nor piloted) | 152 |
| Content available in entirety | 114 |
| Existing (reviewed and piloted) | 2 |
| Existing (reviewed) | 1 |
| Existing (piloted) | 3 |
| Existing (neither reviewed nor piloted) | 49 |
| Content available in entirety | 10 |
Availability of specific items/protocols is essential to judging the appropriateness of the instruments
used. For example, Bai and Gu (2021) used the Community of Inquiry Scale (Arbaugh et al., 2008) to
survey a blended course. However, some items of the scale may not be fit for a blended context, for
example, “Getting to know other course participants gave me a sense of belonging in the course” and “I
was able to form distinct impressions of some course participants”. In Wong and Ghavifekr (2021), “
‘Ethnic Relations’ and ‘Islamic Civilization and Asian Civilization’ are two MOOCs taught as university
general courses at a Malaysian public university, and all undergraduate students at all public universities
are required to take them” (p. 58). In other words, students had no choice but to accept them. Given this
reality, the following survey items may not be appropriate: “I study MOOC because I want to learn about
new subject”, “I study MOOC because I am curious about MOOC”, “I study MOOC because I want to
increase my knowledge on something that I have learned before”, and “I study MOOC because of my
personal challenge” (Wong & Ghavifekr, 2021, p. 59).
Nearly 80% (n = 189) of the studies collected data at a single point in time, hence cross-sectional in nature, and a quarter (n = 59) did not explain the when, where and how of data collection. For example,
students and teachers were interviewed in Wang et al. (2021). However, it is not known how the interviews were conducted, how the interview data were collected and processed, and what method was used to analyze them.
Data analysis
Roughly 80% (n = 189) of the studies explained explicitly how the data was analysed, but 16.8% (n = 40) were not very clear in this regard. For example, unhelpful accounts include “Thematic analysis was employed to identify, analyse and report the key patterns in both secondary data and interviews in order to address
the research questions” (Zhu & Chikwa, 2021, p. 9) and “SPSS was used for the statistical analysis,
while the qualitative technique involved identifying codes, themes and sub-themes from the qualitative
data” (Aluko, 2021, p. 24). Furthermore, there are 9 studies (3.8%) that did not mention data analysis at
all.
Researcher bias and ethical concern
Finally, over 90% (n = 218) of the studies did not explain how researcher bias was reduced or avoided, and about half (n = 116) did not address potential ethical concerns. An explanation may be desirable even when a study raises no ethical issues. Dixon et al. (2021) is a good example, clearly stating that their study
did not require IRB approval because it gathered data that was publicly available, did not
require the researchers to observe, interact, or intervene with individuals to gather the data,
nor did any of its analysis, results, or conclusions utilize any personal identification data.
(Dixon et al., 2021, p. 255)
Another case in point is Azıonya and Nhedzı (2021) who exempted themselves from ethical review and
approval by stating clearly that “no ethical approval was required as all data (tweets) were retrieved from
the public domain and would not constitute an ethical dilemma in internet research” (p. 168).
Limitation
Some 30.7% (n = 73) of the studies did not acknowledge the existence of limitations and their possible influence on research findings. Among those that did state limitations, there are occasional cases where the limitations were not adequately identified. For example,
The research subjects of this study were students from a university located in an
economically developed area in South China, considering differences in economic
conditions, information technology literacy, academic performance, and the influence of
China’s unique cultural and educational system, this study may have limitations in the
representativeness of the samples and some deficiencies in the adaptability of the
conclusions. (Zhao et al., 2021, p. 94)
Students from a university in Guangdong do not necessarily come from Guangdong, so this stated limitation is misleading. Students’ residential locations as well as family socio-economic conditions should have
been included in the survey. Another case in point is Mac Domhnaill et al. (2021). To answer the
research question “Is student access to high-speed broadband and to ICT devices at home associated
with the impact of the shift to distance learning on student engagement?”, the researchers surveyed
“school principals and deputy principals, rather than teachers or students” (Mac Domhnaill et al., 2021,
p. 479). Although it was acknowledged as a limitation and a case was made for this sampling, this is
more than a limitation.
Discussion
“Scientific inquiry, conducted with rigorous attention to correct procedures, is the key to success of this
field” (Simonson et al., 2011, p. 125). Unless rigorously designed, an empirical study may not be able
to bear useful and trustworthy fruit (Azevedo et al., 2011). Findings from this study may reveal, to some
extent, the rigorousness and trustworthiness of empirical research in the field of DE.
Descriptions of the context of study are essential because education is situated and contextual factors
are key to interpreting research findings (Bem, 2004). Overall, this is not a serious problem in our
samples, with only about 10% falling short in this regard. Similarly, clear statements of the purpose,
research questions or hypotheses of an empirical study are necessary in that they “provide critical
information to readers about the direction of a research study…also raise questions that the research
will answer through the data collection process” (Creswell, 2012, p. 109). As such, they also define the
most appropriate participants, source of data, and method of data analysis for the study. Given this
importance, they should be carefully framed to avoid implying researcher bias or misleading assumptions,
as is the case with Kasch et al. (2021). Our samples are praiseworthy except for a few unclear accounts
(e.g. Johnson & Barr, 2021; Liu & Shirley, 2021; Van & Thı, 2021) and occasional mismatches between
purpose and participants or sources of data (e.g. Lindecker & Cramer, 2021; Mac Domhnaill et al., 2021;
Zhu & Chikwa, 2021). Another aspect relatively well handled in our samples is duration. For studies
which are sensitive to the length of a treatment/intervention or a phenomenon, the duration should be
explicitly stated because it may affect the interpretation of findings (Cohen et al., 2007; Creswell, 2012).
Only 10 studies were less desirable in this regard.
Research approach and type of design determine how data should be collected, analysed, and
interpreted as well as how the study should be implemented. The most popular approach adopted by
our samples is quantitative (44.5%), followed by mixed (32.4%) and qualitative (23.1%). This finding is
somewhat different from previous studies. In Zawacki-Richter et al. (2009) which reviewed publications
in five DE journals (2000-2008), the order was quantitative (47%), qualitative (32.1%) and mixed
(20.9%), not including conceptual/theoretical research articles. In Bozkurt et al. (2015) which covered
publications from seven DE journals (2009-2013), it was the qualitative approach which came first
(47%), followed by quantitative (37%) and mixed (16%), including conceptual/theoretical research
articles. These differences may be partly due to differences in the sample inclusion criteria: neither of
the previous studies distinguished primary from secondary data, and both included meta-analyses, while
our study focused on empirical research based on primary data. One thing is certain, though: there was
a significant increase in the use of the mixed approach. This can be seen as a positive response to the cri
de coeur by Garrison and Shale (1994) because this approach integrates the advantages, and overcomes
the disadvantages, of both quantitative and qualitative methods. However, the drop in the qualitative
approach merits concern because this approach may generate richer and deeper data (Minnes, 1985;
Saba, 2000).
In terms of design, experimental research has always been less favoured, for example, constituting 6%
in Berge and Mrozowski (2001), 2.5% in Bond et al. (2021), 11% in Bozkurt et al. (2015), and 12% in
Lee et al. (2004). Our study is no exception, with only 3.4% adopting a (quasi-)experimental design. The
scarcity may be due to the characteristics of education: unlike in many other disciplines, it may be
difficult, if not impossible, to conduct educational experiments. However, there are other options that can
compensate for this shortage (Reeves & Lin, 2020). Unfortunately, our findings show that designs such
as action research, design-based research, narrative research, grounded theory research, and
phenomenological research were adopted disproportionately rarely, together representing only 8.3%
(n = 19), as was the ethnographic design (n = 34; 14.8%). These two proportions were slightly higher in
Bozkurt et al. (2015). Survey research (n = 53) and correlational design (n = 45) accounted for 50% and
42.5% of the quantitative studies in our samples, or 42.6% of the broader non-experimental category.
Unless more (quasi-)experiments are conducted, this proportion should be reduced to allow for more
qualitative and mixed-methods research. After all, education is not an objectively-measured process or
phenomenon. Obviously, there is a need for a balanced variety of research designs in DE research
(Minnes, 1985).
Sampling is another issue of concern, with over 60% failing to account for what strategies were used to
recruit samples. Who participated in the study? How were they recruited? Were they the most appropriate
participants for the purpose of the study? These questions not only determine whether data are from
the right sources but also “the limits that are placed on the generalizations that can be made regarding
the study results” (Bulfin et al., 2013, p. 235), hence needing to be adequately addressed. Care should
be taken when judging the generalizability of findings from a study, always bearing in mind not only
whether it employed a sampling strategy but also what strategy was used. Do not take it for granted that
research findings are universally applicable. For example, of the sampling strategies used in our study,
only about 20% were probabilistic in nature, suggesting that findings from the majority of studies were
less generalizable and should be applied elsewhere with caution.
Many studies collected data from more than one source. This may contribute to the reliability and validity
of research findings through triangulation. Nevertheless, a case should be made for using more than
one type of data, explaining the intended role of each in the study. This rationale was missing in some
studies.
Questionnaire/scale/rubric (n = 148; 62.2%) and interview (n = 74; 30.7%) accounted for the data
sources of over 90% of the sample studies, higher than the 72% found in Bozkurt et al. (2015). Compared
with Bozkurt et al. (2015) and Davies et al. (2010), both of which also identified document analysis,
observation, test scores, and log statistics as sources of data, a greater variety of data was utilized in
our sample, albeit with insignificant shares. This is definitely a positive sign and deserves promotion in
future research.
Similarly, many studies employed more than one instrument to collect data. 80% of
questionnaires/scales/rubrics and interview protocols, the two most frequently-used instruments, were
either developed specifically for the current study or adapted from existing ones. To ensure instrument
appropriateness, it would be desirable to have it evaluated by experts and, more importantly, pilot tested
before it is formally administered in a study (Cohen et al., 2007; Creswell, 2012). Reviewing and piloting
are also required for “old” instruments when they are intended to be used in new contexts and/or among
new participants. Our samples leave much to be desired in this regard, with over 70% of the two most
frequently-used instruments neither reviewed nor piloted. Moreover, the specific content and exact
wording of an instrument may also directly affect its appropriateness. As shown above, if wrong
questions are asked or the wording is biased, the data collected may not be what the participants really
have in mind. Furthermore, a study cannot be replicated or verified unless the instrument used is
presented in its entirety. Regrettably, less than 50% of the two most frequently-used instruments were
available in full, hence undermining the value of research findings. Administration of data collection is
another issue that may jeopardize methodological rigor. Therefore, it is important to state clearly when,
where, how, and by whom data was collected. A quarter of our samples did not provide this information.
Other issues that negatively impact research rigor include the ways data is analysed,
researcher bias avoided or reduced, and ethical concerns addressed (Chenail, 2010; Jack et al., 2010).
Researchers’ positionality may affect their neutrality throughout the research process, their relationship
with the participants, and the interpretation of findings (Werth & Williams, 2021). How to ensure “pursuit
of truth” and protection of participants’ “rights and values potentially threatened by the research” is
always an ethical concern for education researchers (Cohen et al., 2007, p. 51); unless properly
addressed, it may lead to resistance from participants and, consequently, inaccurate data. Methods of data
analysis were clearly explained in over 80% of our sample studies. However, researcher bias and ethical
concerns were ignored in over 90% and 50% of the studies, respectively. Last but not least, about 80%
of the studies were cross-sectional. This is a “long-established” weakness of educational research and
should be taken seriously. For example, 92.9% of the studies in Bond et al. (2021) were cross-sectional.
All research is flawed one way or another. Hence, limitations, that is, potential weaknesses or problems
related to the design and implementation of a study, are part and parcel of any empirical study, be it
quantitative, qualitative or mixed (CASP, 2006; Creswell, 2012). Proper acknowledgement of limitations
may help readers understand research findings more accurately and even serve as directions for further
research. The scenario here is less optimistic, with 30% of the sample studies failing to mention their
limitations, a defect that should be avoided in future research.
Conclusions and Implications
Lack of methodological rigor has long plagued the field of DE (Panda, 1992; Simonson et al., 2011).
Our research indicates that the situation continues to be less ideal, to different extents in different
aspects, especially in terms of research approach and design, sampling, data source, and possible
weaknesses such as limitations, researcher bias and ethical concerns (see Table 8 for a summary of
main findings).
Table 8: Main findings.
Context of study specified (n = 238): 92.4%
Hypothesis, research question or purpose specified (n = 238): 95.8%
Types of approach used (n = 238): quantitative, 44.5%; qualitative, 23.1%; mixed, 32.4%
Types of design used (n = 238): experimental, 3.4%; non-experimental: survey design, 22.3%; correlational design, 18.9%; mixed-methods, 25.6%; ethnographic design, 14.3%; content/thematic analysis design, 7.6%; miscellaneous, 8%
Duration required and specified: 90.4%
Sampling strategy specified (n = 238): 37.4%
Types of sampling strategy (n = 89): probabilistic, 18%; non-probabilistic, 80.9%; mixture, 1.1%
Data collection (n = 238): most frequent sources of data: questionnaire/scale/rubric, 62.2%; interview, 30.7%; administration specified: 75.2%; cross-sectional, 79.4%; longitudinal, 16.8%; not known, 3.4%
Data analysis (n = 238): clearly specified, 79.4%; somewhat specified, 20.6%
Researcher bias addressed (n = 238): 8.4%
Ethical issue addressed (n = 238): 51.3%
Limitation acknowledged (n = 238): 69.3%
Findings from this study carry implications for future research. First, as a human enterprise, education
is most often the result of interplay between numerous factors that can be very difficult or even
impossible to identify and measure objectively, hence the need for more qualitative and mixed-methods
research to enable more insights into educational events and phenomena. Second, despite
the difficulties in conducting (quasi-)experiments in education, especially in distance education, the
currently insignificant presence of this research design needs to be significantly enhanced. The same is
true of numerous other types of research design which are “out of favour” but probably a better fit for
educational research, given the nature and characteristics of education mentioned above. Included in
this group are action research, design-based research, narrative research, phenomenological research
and grounded theory research. Third, the importance of sampling cannot be over-emphasized. Given
that over 60% of the studies failed to explain their sampling strategies and that only about one-fifth of the
named strategies were probabilistic, research findings should be interpreted and generalized with great caution.
This situation should be changed. Similarly, future research should avoid over-relying on
questionnaires/scales/rubrics and interviews to collect data, an over-reliance that may stem from the
tendency to favour correlational and survey research over other types of research design. As indicated
in the findings, there is a great variety of data that can be used for educational research. Fourth, no
research is immune to potential weaknesses or problems, including researcher bias, ethical issues and
other limitations. Future research needs to pay more attention to these issues. Finally, cross-sectional
research is one-off and isolated in nature while education is dynamic and ever-changing. Therefore,
more longitudinal research is needed to generate insightful findings. Equally needed are replication, i.e.
repeating a previous study to verify its findings, and extension, i.e. building on the findings of a previous
study with the aim of refining, enriching and/or expanding them, given the situatedness of education. It
is worth noting that our samples include only one replication study and 12 extension studies, although
identifying them falls outside the aims and scope of the current study.
Findings from this study also have implications for reviewers and editors who are gatekeepers with great
responsibilities for the healthy and sustainable growth of our field. All reviewers deserve respect for their
altruistic contributions. However, it is hoped that reviewers accept only review requests that fall within
their areas of expertise and assess a study and give constructive feedback in line with the established
criteria of good research. Reviewers may be regarded as the keepers of the first “gate” while editors
play a decisive role in the publication of an article. Needless to say, editors have to balance the needs,
wants, and demands of related parties, withstanding pressures from various sources. There is no
denying that “a lot of factors contribute to the publication of an article”, as admitted by an editor.
Nevertheless, the principle of quality assurance should never be compromised. If the defects can be
remedied, editors should insist on having them corrected by providing necessary guidance and
assistance to the author(s), for example, describing the context in clearer terms or reflecting on
limitations of the study. If the errors are irreversible, the only solution is to ask the author(s) to start
afresh from where they went wrong, in the right way. No compromise should be allowed in this case.
References
Aluko, F. R. (2021). Evaluating student support provision in a hybrid teacher education programme using
Tait’s framework of practice. Open Praxis, 13(1), 21–35. http://doi.org/10.5944/openpraxis.13.1.1171
Anderson, T. (2003). Getting the mix right again: An updated and theoretical rationale for interaction.
The International Review of Research in Open and Distance Learning, 4(2).
http://www.irrodl.org/index.php/irrodl/article/view/149/230
Arbaugh, J., Cleveland-Innes, M., Diaz, S. R., Garrison, D. R., Ice, P., Richardson, J. C., & Swan, K. P.
(2008). Developing a community of inquiry instrument: Testing a measure of the Community of
Inquiry framework using a multi-institutional sample. The Internet and Higher Education, 11(3–4),
133–136. http://doi.org/10.1016/j.iheduc.2008.06.003
Arnò, S., Galassi, A., Tommasi, M., Saggino, A., & Vittorini, P. (2021). State-of-the-art of commercial
proctoring systems and their use in academic online exams. International Journal of Distance
Education Technologies (IJDET), 19(2), 55-76. http://doi.org/10.4018/IJDET.20210401.oa3
Azevedo, L. F, Canário-Almeida, F., Almeida Fonseca, J., Costa-Pereira, A., Winck, J. C., & Hespanhol,
V. (2011). How to write a scientific paper: Writing the methods section. Revista Portuguesa de
Pneumologia (English Edition), 17 (5), 232-238. https://doi.org/10.1016/j.rppnen.2011.06.008
Azıonya, C. M., & Nhedzı, A. (2021). The digital divide and higher education challenge with emergency
online learning: Analysis of tweets in the wake of the COVID-19 lockdown. Turkish Online
Journal of Distance Education, 22 (4), 164-182. http://doi.org/10.17718/tojde.1002822
Bai, X., & Gu, X. (2021). Group differences of teaching presence, social presence, and cognitive
presence in a xMOOC-based blended course. International Journal of Distance Education
Technologies (IJDET), 19(2), 1-14. http://doi.org/10.4018/IJDET.2021040101
Bem, D. J. (2004). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. L. Roediger
III (Eds.), The compleat academic: A career guide (pp. 185–219). American Psychological
Association.
Berge, Z. L., & Mrozowski, S. (2001). Review of research in distance education, 1990 to 1999. American
Journal of Distance Education, 15(3), 5–19. http://doi.org/10.1080/08923640109527090
Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, C. A., Tamim, R. M., Surkes, M. A., & Bethel, E.
C. (2009). A meta-analysis of three types of interaction treatments in distance education.
Review of Educational Research, 79, 1243–1289. http://doi.org/10.3102/0034654309333844
Bishop, J. S., & Spake, D. F. (2003). Distance education: A bibliographic review for educational planners
and policymakers 1992–2002. Journal of Planning Literature, 17(3), 372–391.
http://doi.org/10.1177/0885412202239139
Bond, M., Bedenlier, S., Marín, V. I., & Händel, M. (2021). Emergency remote teaching in higher
education: Mapping the first global online semester. International Journal of Educational
Technology in Higher Education, 18, 50. http://doi.org/10.1186/s41239-021-00282-x
Bose, S. (2021). Using grounded theory approach for examining the problems faced by teachers
enrolled in a distance education programme. Open Praxis, 13(2), 160–171.
http://doi.org/10.5944/openpraxis.13.2.128
Bozkurt, A. (2019). Intellectual roots of distance education: A progressive knowledge domain analysis.
Distance Education, 40(4), 497-514. http://doi.org/10.1080/01587919.2019.1681894
Bozkurt, A., Akgun-Ozbek, E., Yilmazel, S., Erdogdu, E., Ucar, H., Guler, E., Sezgin, S., Karadeniz, A.,
Sen-Ersoy, N., Goksel-Canbek, N., Dincer, G. D., Ari, S., & Aydin, C. H. (2015). Trends in
distance education research: A content analysis of journals 2009-2013. The International
Review of Research in Open and Distributed Learning, 16(1).
https://doi.org/10.19173/irrodl.v16i1.1953
Bozkurt, A., & Zawacki-Richter, O. (2021). Trends and patterns in distance education (2014–2019): A
synthesis of scholarly publications and a visualization of the intellectual landscape. The
International Review of Research in Open and Distributed Learning, 22(2), 19-45.
https://doi.org/10.19173/irrodl.v22i2.5381
Bulfin, S., Henderson, M., & Johnson, N. (2013). Examining the use of theory within educational
technology and media research. Learning, Media and Technology, 38(3), 337-344.
http://doi.org/10.1080/17439884.2013.790315
Byrne, V. L., Hogan, E., Dhingra, N., Anthony, M., & Gannon, C. (2021). An exploratory study of how
novice instructors pivot to online assessment strategies. Distance Education, 42(2), 184-199.
http://doi.org/10.1080/01587919.2021.1911624
Çakiroğlu, Ü., Kokoç, M., Gökoğlu, S., Öztürk, M., & Erdoğdu, F. (2019). An analysis of the journey of
open and distance education: Major concepts and cutoff points in research trends. The
International Review of Research in Open and Distributed Learning, 20(1).
https://doi.org/10.19173/irrodl.v20i1.3743
Cathala, X., & Moorley, C. (2019). How to appraise quantitative research. Evidence-Based Nursing,
21(4), 99–101. http://doi.org/10.1136/eb-2018-102996
Chenail, R. J. (2010). Introduction to qualitative research design. https://silo.tips/download/introduction-to-qualitative-research-design-ronald-j-chenail-phd-nova-southeaste#modals
Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.). Routledge.
Creswell, J. W. (2012). Educational research: Planning, conducting, and evaluating quantitative and
qualitative research (4th ed.). Pearson.
Critical Appraisal Skills Programme (CASP). (2006). 10 questions to help you make sense of qualitative
research. Public Health Resource Unit.
http://www.phru.nhs.uk/Doc_Links/Qualitative%20Appraisal%20Tool.pdf
Davies, R., Howell, S., & Petrie, J. (2010). A review of trends in distance education scholarship at
research universities in North America, 1998-2007. The International Review of Research in
Open and Distance Learning, 11(3), 42-56. https://doi.org/10.19173/irrodl.v11i3.876
Dixon, Z., Whealan George, K., & Carr, T. (2021). Catching lightning in a bottle: Surveying plagiarism
futures. Online Learning, 25(3), 249-266. https://doi.org/10.24059/olj.v25i3.2422
Drummond, K. E., & Murphy-Reyes, A. (2018). Nutrition research concepts and applications. Jones &
Bartlett Learning.
Garrison, D. R., & Shale, D. (1994). Methodological issues: Philosophical differences and
complementary methodologies. In D. R. Garrison (Ed.), Research perspectives in adult
education (pp. 17-37). Krieger.
Gomes, R. R., & Barbosa, M. W. (2018). An analysis of the structure and evolution of the distance
education research area community in terms of coauthorships. International Journal of Distance
Education Technologies (IJDET), 16(2), 65–79. https://doi.org/10.4018/IJDET.2018040105
Heflin, H., & Macaluso, S. (2021). Student initiative empowers engagement for learning online. Online
Learning, 25(3), 230-248. https://doi.org/10.24059/olj.v25i3.2414
Hsieh, H. F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative
Health Research, 15(9), 1277–1288. https://doi.org/10.1177/1049732305276687
Jack, L., Jr, Hayes, S. C., Scharalda, J. G., Stetson, B., Jones-Jack, N. H., Valliere, M., Kirchain, W. R.,
& LeBlanc, C. (2010). Appraising quantitative research in health education: Guidelines for public
health educators. Health Promotion Practice, 11(2), 161–165.
https://doi.org/10.1177/1524839909353023
Johnson, J. E., & Barr, N. B. (2021). Moving hands-on mechanical engineering experiences online:
Course redesigns and student perspectives. Online Learning, 25(1), 209-219.
https://doi.org/10.24059/olj.v25i1.2465
Juarez, B. C., & Critchfield, M. (2021) Virtual classroom observation: Bringing the classroom experience
to pre-service candidates. American Journal of Distance Education, 35(3), 228-245.
https://doi.org/10.1080/08923647.2020.1859436
Karadag, N., Boz Yuksekdag, B., Akyıldız, M., & Ibıleme, A. I. (2021). Assessment and evaluation in
open education system: Students’ opinions about open-ended question (OEQ) practice. Turkish
Online Journal of Distance Education, 22(1), 179-193. https://doi.org/10.17718/tojde.849903
Kasch, J., van Rosmalen, P., Löhr, A. , Klemke, R., Antonaci, A., & Kalz, M. (2021). Students’
perceptions of the peer-feedback experience in MOOCs. Distance Education, 42(1), 145-163.
https://doi.org/10.1080/01587919.2020.1869522
Khan, A. U., Khan, K. U., Atlas, F., Akhtar, S. & Khan, F. (2021). Critical factors influencing MOOCs
retention: The mediating role of information technology. Turkish Online Journal of Distance
Education, 22 (4), 82-101. https://doi.org/10.17718/tojde.1002776
Kohnke, L. (2021). Professional development and ICT: English language teachers’ voices. Online
Learning, 25(2), 36-53. https://doi.org/10.24059/olj.v25i2.2228
Lee, Y., Driscoll, M. P., & Nelson, D. W. (2004). The past, present, and future of research in distance
education: Results of a content analysis. The American Journal of Distance Education, 18(4),
225–241. https://doi.org/10.1207/s15389286ajde1804_4
Lee, S. J., & Nuatomue, J. N. (2021). Students' perceived difficulty and satisfaction in face-to-face vs.
online sections of a technology-intensive course. International Journal of Distance Education
Technologies (IJDET), 19(3), 1-13. http://doi.org/10.4018/IJDET.2021070101
Lee, J., Soleimani, F., & Harmon, S. W. (2021). Emergency move to remote teaching: A mixed-method
approach to understand faculty perceptions and instructional practices. American Journal of
Distance Education, 35(4). 259-275, https://doi.org/10.1080/08923647.2021.1980705
Lindecker, C., & Cramer, J. (2021). Self-disclosure and faculty compassion in online classrooms. Online
Learning, 25(3), 144-156. https://doi.org/10.24059/olj.v25i3.2347
Liu, Y., & Shirley, T. (2021). Without crossing a border: Exploring the impact of shifting study abroad
online on students’ learning and intercultural competence development during the COVID-19
pandemic. Online Learning, 25(1), 182-194. https://doi.org/10.24059/olj.v25i1.2471
Mac Domhnaill, C., Mohan, G., & McCoy, S. (2021). Home broadband and student engagement during
COVID-19 emergency remote teaching. Distance Education, 42(4), 465-493.
https://doi.org/10.1080/01587919.2021.1986372
Martínez, R. A., & Anderson, T. (2015). Are the most highly cited articles the ones that are the most
downloaded? A bibliometric study of IRRODL. The International Review of Research in Open
and Distributed Learning, 16(3). https://doi.org/10.19173/irrodl.v16i3.1754
Minnes, J. R. (1985). Ethnography, case study, grounded theory, and distance education research.
Distance Education, 6(2), 189-198. https://doi.org/10.1080/0158791850060204
Panda, S. (1992). Distance educational research in India: Stock-taking, concerns and prospects.
Distance Education, 13(2), 309–26. https://doi.org/10.1080/0158791920130211
Paredes, S. G., de Jesús Jasso Peña, F., & de La Fuente Alcazar, J. M. (2021). Remote proctored
exams: Integrity assurance in online education? Distance Education, 42(2), 200-218.
https://doi.org/10.1080/01587919.2021.1910495
Penn State University Libraries. (2021). Empirical research in the social sciences and education.
https://guides.libraries.psu.edu/emp
Reeves, T., & Lin, L. (2020). The research we have is not the research we need. Education Tech
Research Dev, 68, 1991–2001. https://doi.org/10.1007/s11423-020-09811-3
Russell, T. L. (1999). The no significant difference phenomenon. North Carolina State University, Office
of Instructional Telecommunication.
Saba, F. (2000). Research in distance education: A status report. International Review of Research in
Open and Distance Learning, 1(1). https://doi.org/10.19173/irrodl.v1i1.4
Simonson, M., Schlosser, C., & Orellana, A. (2011). Distance education research: A review of the
literature. Journal of Computing in Higher Education, 23, 124–142.
https://doi.org/10.1007/s12528-011-9045-8
Stadler, M., Kolb, N., & Sailer, M. (2021). The right amount of pressure: Implementing time pressure in
online exams. Distance Education, 42(2), 219-230.
https://doi.org/10.1080/01587919.2021.1911629
Van, D. T. H., & Thı, H. H. Q. (2021). Student barriers to prospects of online learning in Vietnam in the
context of COVID-19 pandemic. Turkish Online Journal of Distance Education, 22 (3), 110-123.
https://doi.org/10.17718/tojde.961824
Wang, Y., Stein, D., & Shen, S. (2021). Students’ and teachers’ perceived teaching presence in online
courses. Distance Education, 42(3), 373-390. https://doi.org/10.1080/01587919.2021.1956304
Werth, E., & Williams, K. (2021). What motivates students about open pedagogy? Motivational
regulation through the lens of self-determination theory. The International Review of Research
in Open and Distributed Learning, 22(3), 34-54. https://doi.org/10.19173/irrodl.v22i3.5373
Wong, S. Y., & Ghavifekr, S. (2021). Exploring the relationship between MOOC resource management
and students' perceived benefits and satisfaction via SEM. International Journal of Distance
Education Technologies (IJDET), 19(3), 51-69. http://doi.org/10.4018/IJDET.2021070104
Xiao, J. (2017). Learner-content interaction in distance education: The weakest link in interaction
research. Distance Education, 38(1), 123-135. https://doi.org/10.1080/01587919.2017.1298982
Zawacki-Richter, O., Alturki, U., & Aldraiweesh, A. (2017). Review and content analysis of the
International Review of Research in Open and Distance/Distributed Learning (2000–2015). The
International Review of Research in Open and Distributed Learning, 18(2).
https://doi.org/10.19173/irrodl.v18i2.2806
Zawacki-Richter, O., & Anderson, T. (2011). The geography of distance education: Bibliographic
characteristics of a journal network. Distance Education, 32, 441–456.
https://doi.org/10.1080/01587919.2011.610287
Zawacki-Richter, O., Baecker, E. M., & Vogt, S. (2009). Review of distance education research (2000
to 2008): Analysis of research areas, methods, and authorship patterns. The International
Review of Research in Open and Distributed Learning, 10(6), 21-50.
https://doi.org/10.19173/irrodl.v10i6.741
Zawacki-Richter, O., & Bozkurt, A. (2022). Research trends in open, distance, and digital education. In
O. Zawacki-Richter & I. Jung (Eds.), Handbook of open, distance and digital education.
Springer. https://doi.org/10.1007/978-981-19-0351-9_12-1
Zawacki-Richter, O., & Naidu, S. (2016). Mapping research trends from 35 years of publications in
Distance Education. Distance Education, 37(3), 245-269.
https://doi.org/10.1080/01587919.2016.1185079
Zawacki-Richter, O., & von Prümmer, C. (2010). Gender and collaboration patterns in distance
education research. Open Learning, 25(2), 95–114.
https://doi.org/10.1080/02680511003787297
Zheng, H., Jiang, H., Wang, J., & Liu, S. (2021). The determinants of student attitude toward e-learning
academic achievement during the COVID-19 pandemic. International Journal of Distance
Education Technologies (IJDET), 19(4), 1-17. http://doi.org/10.4018/IJDET.286740
Zhao, L., Hwang, W., & Shih, T. K. (2021). Investigation of the physical learning environment of distance
learning under COVID-19 and its influence on students' health and learning satisfaction.
International Journal of Distance Education Technologies (IJDET), 19(2), 77-98.
http://doi.org/10.4018/IJDET.20210401.oa4
Zhu, X., & Chikwa, G. (2021). An exploration of China-Africa cooperation in higher education:
Opportunities and challenges in open distance learning. Open Praxis, 13(1), 7–19.
http://doi.org/10.5944/openpraxis.13.1.1154
About the Author(s)
• Yiwei Peng; diana66912@yahoo.com; the Open University of Shantou, China;
https://orcid.org/0000-0001-5341-2568
• Junhong Xiao (Corresponding author); frankxjh@outlook.com; the Open University of
Shantou, China; https://orcid.org/0000-0002-5316-2957
Author’s Contributions (CRediT)
Yiwei Peng: Conceptualization, Investigation, Formal analysis, Writing – original draft; Junhong Xiao:
Conceptualization, Methodology, Supervision, Writing – review & editing
Acknowledgements
Not applicable.
Funding
Not applicable.
Ethics Statement
No ethical approval was needed because this study involved publicly-available journal articles only.
Conflict of Interest
The authors declare no conflict of interest.
Data Availability Statement
All data generated or analyzed during this study are included in this published article.
Suggested citation:
Peng, Y., & Xiao, J. (2022). Is the empirical research we have the research we can trust? A review of
distance education journal publications in 2021. Asian Journal of Distance Education, 17(2), 1-18.
https://doi.org/10.5281/zenodo.7006644
Authors retain copyright. Articles published under a Creative Commons Attribution 4.0 (CC-BY) International License.
This licence allows this work to be copied, distributed, remixed, transformed, and built upon for any purpose provided
that appropriate attribution is given, a link is provided to the license, and changes made were indicated.