Bulletin of the Technical Committee on Learning Technology (ISSN: 2306-0212)
Authors:
Abstract:
Generative artificial intelligence, including ChatGPT, has drawn significant attention from scholars and the media. Nonetheless, there is a need to better understand students’ use of ChatGPT and the potential implications, constructive and adverse, of that use. This study examined the antecedents and consequences of ChatGPT usage among university students based on evidence from two separate investigations. The first study developed an eight-item scale to measure ChatGPT use with a sample of 165 university students. The second study used a three-wave time-lagged design to gather data from 494 university students, confirming the validity of the scale and testing the study’s hypotheses. The second study further explored the impact of academic workload, time pressure, sensitivity to rewards, and sensitivity to quality on the use of ChatGPT, and examined whether ChatGPT use affected students’ procrastination, memory, and academic performance. The first study provided strong evidence that the ChatGPT use scale was valid and reliable. The second study found that students were more likely to use ChatGPT under heightened academic workload and time pressure, whereas students with stronger sensitivity to rewards were less inclined to use it. As hypothesized, ChatGPT use was associated with higher levels of procrastination and memory loss, which in turn led to diminished academic performance. Lastly, workload, time pressure, and sensitivity to rewards had indirect effects on student outcomes through the use of ChatGPT.
Keywords: Academic performance, ChatGPT use, Procrastination, Sensitivity to quality, Sensitivity to rewards, Time pressure, Workload
I. INTRODUCTION
The ChatGPT application has caused quite a stir among teachers and academics globally, particularly around questions of fraud and plagiarism; a spokesperson for Sciences Po, for example, voiced such concerns to Reuters. In an interview with EduKitchen, Noam Chomsky, a renowned figure in contemporary linguistics, stated, “I think it undermines education by encouraging high-tech plagiarism and evading real learning.” Over the past few years, generative artificial intelligence (AI) has been influential in higher education, with OpenAI’s ChatGPT (released in 2022) being used extensively in educational institutions for multiple purposes, such as code or text generation, research help, and the completion of assignments, essays, and projects (Bahroun et al., 2023; Stojanov, 2023; Strzelecki, 2023). ChatGPT enables students to address their questions in a concise and contextually apt way, rendering it a highly valuable resource for their academic pursuits. Nevertheless, its extensive usage presents multiple challenges for higher education (Bahroun et al., 2023; Chan, 2023; Chaudhry et al., 2023; Dalalah & Dalalah, 2023). Scholars have suggested that the use of ChatGPT could have several adverse consequences for students (Chan, 2023; Dalalah & Dalalah, 2023; Dwivedi et al., 2023; Lee, 2023). It can potentially have a negative impact on students’ learning and achievement (Korn & Kelly, 2023; Novak, 2023) and erode their academic integrity (Chaudhry et al., 2023). This absence of academic integrity is detrimental to the integrity of higher learning institutions (Macfarlane et al., 2014) and discourages students from succeeding (Krou et al., 2021).
However, while ChatGPT’s popularity in higher education continues to grow, there is little empirical research on the determinants of its uptake among university students (Strzelecki, 2023). Much of the previous work consists of theoretical arguments, commentaries, interviews, reviews, or editorials about ChatGPT’s application in academia (e.g., Cooper, 2023; Cotton et al., 2023; Dwivedi et al., 2023; King, 2023; Peters et al., 2023). Consequently, we still lack a good understanding of the primary motivations behind university students’ use of ChatGPT and its impact on their personal and academic outcomes. There has been scant empirical research on the impact of generative AI, such as ChatGPT, on students’ academic and personal outcomes, and the available research presents mixed results regarding whether ChatGPT is helpful or harmful. Further research is therefore needed on the role of generative AI in higher education. Understanding why students use ChatGPT, and with what consequences, is essential for educators, policymakers, and learners seeking effective ways to employ AI technology in learning environments. Researchers have accordingly emphasized the need for work that reveals both the beneficial and the harmful aspects of ChatGPT in higher education. The present work pursues several aims that address these gaps and contribute to higher education knowledge and practice.
First, following prior work on measuring ChatGPT use (Paul et al., 2023), we develop and test a ChatGPT use scale in Study 1. In Study 2, we examine several theoretically significant factors, namely academic workload, time pressure, sensitivity to rewards, and sensitivity to quality, that may affect university students’ use of ChatGPT. There have been concerns regarding how ChatGPT affects students’ learning and creativity. Although some researchers are convinced that the use of ChatGPT damages the social understanding of knowledge and learning (Peters et al., 2023, p. 142) and risks suppressing creativity and critical thinking (Dwivedi et al., 2023, p. 25), there is little empirical evidence of the benefits or detriments of its use. Possible reasons why this essential area remains underexplored include ethical concerns over student data usage and AI agency, institutional hesitation in adopting AI-based tools in pedagogy, the nascent stage of generative AI technologies in academic settings, and the methodological complexity of measuring nuanced outcomes. Consequently, we explore the effects of using ChatGPT on students’ procrastination, memory loss, and academic performance (CGPA). This research aims to yield meaningful recommendations for teachers, policymakers, and students by shedding light on the reasons why students are driven to use ChatGPT and the potential advantages or disadvantages of doing so in institutions of higher learning.
II. LITERATURE AND HYPOTHESES
A. Workload and use of ChatGPT
Students with heavy academic workloads must complete numerous assignments, projects, and examinations under competing demands, and they may turn to tools such as ChatGPT to cope with those demands. Therefore, we propose: Hypothesis 1 Workload will be positively related to the use of ChatGPT.
B. Time pressure and use of ChatGPT
Studies have shown that individuals tend to rely on simple heuristics to complete tasks under time pressure. Time pressure may make students perceive that they do not have sufficient time for their assignments, leading them to turn to technologies such as ChatGPT for assistance. Studies have also indicated that time constraints can increase the likelihood of plagiarism among students, and that students may choose to cheat or plagiarize in order to meet deadlines. It has further been found that, under pressure, students adopt a surface-learning strategy and use ChatGPT and other shortcuts to finish projects on time. Therefore, one may conclude that students are more likely to turn to ChatGPT for their studies in high-pressure situations. Accordingly, we propose: Hypothesis 2 Time pressure will be positively related to the use of ChatGPT.
C. Sensitivity to rewards and use of ChatGPT
Sensitivity to rewards refers to a student’s concern or anxiety about rewards tied to academic accomplishment, such as grades. Past research does not offer a clear prediction about the relationship between reward sensitivity and the use of ChatGPT. On the one hand, more reward-sensitive students may be inclined to use ChatGPT to further their study performance and earn higher marks; they may view it as a tool that can help them improve their overall academic results. Research also indicates that highly reward-sensitive or impulsive individuals are more prone to risky behaviors, such as texting while driving, which suggests that reward-sensitive students may engage in risky actions such as using ChatGPT for academic purposes or plagiarism. On the other hand, highly reward-sensitive students who are deeply concerned about their grades might avoid ChatGPT for fear of jeopardizing their scores. Since using ChatGPT for academic work is often seen as unethical, highly reward-sensitive individuals might be less likely to use technologies that their professors view as morally questionable and that could undermine their academic standing and grades. Therefore, we propose competing hypotheses. Hypothesis 3a Sensitivity to rewards will be positively related to the use of ChatGPT. Hypothesis 3b Sensitivity to rewards will be negatively related to the use of ChatGPT.
D. Sensitivity to quality and use of ChatGPT
Quality-conscious students are more likely to employ multiple techniques to enhance the quality of their academic work, and ChatGPT can prove an effective tool for them for several reasons. Such students aim for perfection, accuracy, and dependability in their work and may recognize the value of ChatGPT in meeting their high standards (Haensch et al., 2023; Yan, 2023). Students with a keen sensitivity to quality place great value on style, grammar, and linguistic accuracy, and ChatGPT can assist them in refining their writing by making recommendations on sentence structure, word choice, and grammar (Abbas, 2023; Dwivedi et al., 2023). Therefore, students who value quality are more likely than peers without the same quality awareness to use ChatGPT to enhance the quality of their academic work, e.g., assignments, projects, essays, or presentations. Accordingly, we propose: Hypothesis 4 Sensitivity to quality will be positively related to the use of ChatGPT.
E. Use of ChatGPT and procrastination
Procrastination refers to willfully delaying a desired action while knowing that the delay will lead to negative outcomes (Steel, 2007, p. 66). Some individuals are chronic procrastinators, while others delay only in specific situations (Rozental et al., 2022). Academic procrastination, the regular postponing of academic work to such an extent that performance suffers, is a significant issue for students and schools (Svartdal & Lokke, 2022). Studies report that procrastination is common among students (Baulke & Dresel, 2023) and can be influenced by an array of environmental and individual factors (Liu et al., 2023; Steel, 2007). We argue that using generative AI can foster procrastination among students. By giving students shortcuts for finishing study work with less effort, such tools may create dependence. As a result, students who are excessively dependent on software such as ChatGPT may believe that they can complete tasks quickly and efficiently and hence keep delaying them until the last minute, a habit likely to produce procrastination. Recent research also indicates that using ChatGPT might make students lazy (Yilmaz & Yilmaz, 2023a). Therefore, we propose: Hypothesis 5 Use of ChatGPT will be positively related to procrastination.
F. Use of ChatGPT and memory loss
Memory loss is a state in which individuals are unable to recall prior knowledge or experiences; it is influenced by cognitive, emotional, and physical factors. Students who over-rely on ChatGPT for their studies may suffer from memory loss, because prolonged use of ChatGPT can encourage complacency and erode cognitive functions, compromising memory. Rather than engaging in critical thinking and mental work, students can become excessively dependent on generative AI technologies, which can negatively affect memory retention, cognitive performance, and critical-thinking abilities. Active learning, which involves active cognitive engagement, is necessary for memory consolidation and retention; if students rely too heavily on ChatGPT, they may reduce their cognitive effort, resulting in poorer memory. Studies indicate that daily mental training and certain exercises, such as quick, simple numerical calculations, can enhance cognitive abilities such as working memory and processing speed. Hence, heavy usage of ChatGPT without such cognitive engagement may lead to memory loss in students. Students are therefore advised to balance the use of AI tools with cognitively demanding activities so that their memory and intellectual capacities are not compromised. Accordingly, we propose: Hypothesis 6 Use of ChatGPT will be positively related to memory loss.
G. Use of ChatGPT and academic performance
Academic achievement reflects a student’s success in their studies. CGPA is an objective indicator of academic achievement that captures an individual’s overall success in a given period, typically a semester. ChatGPT may help a student understand a topic better and thereby enhance academic performance. Nevertheless, relying solely on ChatGPT without the necessary effort in critical thinking and independent study might result in poor academic achievement. Depending mainly on external sources, such as AI tools, without active engagement in learning could hinder the formation of the critical skills and depth of knowledge necessary for success at school. Consequently, students who depend heavily on ChatGPT may show poorer academic performance. Therefore, we propose: Hypothesis 7 Use of ChatGPT will be negatively related to academic performance.
H. The mediating role of ChatGPT usage
We contend that the use of ChatGPT mediates the effects of workload, time pressure, sensitivity to rewards, and sensitivity to quality on student outcomes. Students, especially those with heavy workloads and little time to finish their academic tasks, might resort to ChatGPT to deal with such stressful circumstances. Relying heavily on ChatGPT can then lead to delays in completing tasks (i.e., procrastination) because students feel they can complete the activities with ease at any time. Moreover, substituting ChatGPT for critical thinking and problem-solving can hinder the acquisition of a deeper understanding of the content, leading to low academic performance (Abbas, 2023), and heavy reliance on ChatGPT for academic purposes can reduce mental engagement, increasing the likelihood of memory decline (Bahrini et al., 2023; Dwivedi et al., 2023). ChatGPT can likewise channel the effects of reward sensitivity and quality sensitivity on procrastination, memory loss, and academic achievement: anxiety about grades (reward sensitivity) and a focus on the quality of academic work (quality sensitivity) could influence the adoption of ChatGPT, and the resulting overuse or underuse could raise students’ procrastination, lead to memory loss, and ultimately harm their academic performance. In summary, we propose: Hypothesis 8 Use of ChatGPT will mediate the relationships of workload with procrastination, memory loss, and academic performance. Hypothesis 9 Use of ChatGPT will mediate the relationships of time pressure with procrastination, memory loss, and academic performance. Hypothesis 10 Use of ChatGPT will mediate the relationships of sensitivity to rewards with procrastination, memory loss, and academic performance. Hypothesis 11 Use of ChatGPT will mediate the relationships of sensitivity to quality with procrastination, memory loss, and academic performance.
III. METHODS (STUDY 1)
A. ChatGPT usage scale development procedures
Item generation: We adopted the scale development procedures outlined in earlier research (Hinkin, 1998). First, we operationalized the use of ChatGPT as the degree to which students applied ChatGPT to academic work, such as working on assignments or projects or preparing for examinations. On this basis, 12 items were generated for further analysis. Initial item reduction: Following Hinkin’s (1998) recommendations, we conducted an item-sorting task in the first phase of scale development. To ensure the content validity of the ChatGPT use scale, we consulted five experts, who were asked to review each item designed to capture ChatGPT use. The experts concurred that ten of the twelve items indeed captured specific features of students’ academic ChatGPT use. These ten items were retained for further analysis on the basis of their content validity.
1) Sample and data collection
The ChatGPT use scale was administered to a sample of 165 students from different universities. Respondents rated each item on a 6-point Likert scale ranging from 1 (never) to 6 (always). A cover letter informed students that participation was voluntary, that they could withdraw at any time during data collection, and that their responses would remain confidential. The sample was 53.3% male, with an average age of 23.25 years (SD = 4.22). About 85% of participants attended public universities and the rest private ones. By academic discipline, 59% of students studied business, 6% computer science, 9% general education, 5% psychology, 4% English language, 4% public administration, 9% sociology, and 4% mathematics. Additionally, 74% were pursuing bachelor’s degrees, 22% master’s degrees, and 4% doctorates.
2) Exploratory factor analysis
An exploratory factor analysis (EFA) was conducted to identify the factor structure of the proposed scale. Extraction used principal component analysis with varimax rotation and Kaiser normalization. The number of factors was decided using the following criteria: eigenvalue > 1 and percentage of variance explained greater than 50%. Bartlett’s test of sphericity was significant (p < 0.001), and the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was 0.878, above the acceptable criterion of 0.50. Factor loadings and communalities above 0.5 were considered acceptable. After removing items 4 and 9 because of low factor loadings and communalities, the remaining eight items were subjected to another EFA. A one-factor model accounted for 62.65% of the variance, with all item loadings above 0.50. The resulting 8-item measure of ChatGPT use has a Cronbach’s alpha of 0.914 and a composite reliability of 0.928, both above the construct reliability threshold of 0.7, and an average variance extracted (AVE) of 0.618, indicating convergent validity. These findings support the scale’s validity and reliability.
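For readers who wish to reproduce this analytical step, a minimal sketch of the EFA workflow is shown below, assuming the item responses sit in a pandas DataFrame loaded from a hypothetical items.csv file. We used standard statistical software, so this sketch is illustrative rather than our exact procedure.

```python
# Illustrative EFA sketch using the open-source factor_analyzer package.
# "items.csv" is a hypothetical file with one column per scale item.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("items.csv")

# Sampling adequacy: Bartlett's test of sphericity and the KMO measure.
chi_square, p_value = calculate_bartlett_sphericity(items)
_, kmo_overall = calculate_kmo(items)
print(f"Bartlett p = {p_value:.4f}, KMO = {kmo_overall:.3f}")  # KMO should exceed 0.50

# Principal component extraction; eigenvalues guide how many factors to keep.
fa = FactorAnalyzer(n_factors=1, method="principal", rotation="varimax")
fa.fit(items)
print("Eigenvalues:", fa.get_eigenvalues()[0])    # retain factors with eigenvalue > 1
print("Loadings:\n", fa.loadings_)                # drop items loading below 0.50
print("Communalities:\n", fa.get_communalities())
```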
| Items | Factor loading | Communality | Total variance explained (%) |
|---|---|---|---|
| I use ChatGPT for my course assignments | 0.72 | 0.57 | 53.721 |
| I use ChatGPT for my course projects | 0.68 | 0.50 | |
| I use ChatGPT for my academic activities | 0.72 | 0.57 | |
| I can’t think of studies without ChatGPT | 0.48 | 0.24 | |
| I rely on ChatGPT for my studies | 0.69 | 0.53 | |
| I use ChatGPT to learn course-related concepts | 0.66 | 0.47 | |
| I am addicted to ChatGPT when it comes to studies | 0.69 | 0.53 | |
| I use ChatGPT to prepare for my tests or quizzes | 0.66 | 0.48 | |
| Use of ChatGPT is common nowadays | 0.34 | 0.09 | |
| ChatGPT is part of my campus life | 0.67 | 0.49 |
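For reference, the composite reliability and AVE values reported above follow the standard formulas, where $\lambda_i$ denotes the standardized loading of item $i$ and $k$ the number of items:

```latex
\mathrm{CR} = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}
                   {\left(\sum_{i=1}^{k}\lambda_i\right)^{2} + \sum_{i=1}^{k}\bigl(1-\lambda_i^{2}\bigr)},
\qquad
\mathrm{AVE} = \frac{1}{k}\sum_{i=1}^{k}\lambda_i^{2}
```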

IV. METHODS (STUDY 2)
A. Sample and data collection procedures
Study 2 aimed to validate the 8-item ChatGPT use scale developed in Study 1 and to test the study’s hypotheses. Figure 1 illustrates the theoretical model of Study 2, which employed a time-lagged design with data collected in three phases separated by 1-2 weeks. The participants were current university students, and methodological controls were employed to reduce common method bias: a three-wave time-lagged design ensured a temporal gap between predictors and outcomes, and participants were informed that their involvement was voluntary, assured of confidentiality, and encouraged to provide candid responses. To enable survey matching across waves, a unique code was assigned to each student. Ethical clearance was obtained from the authors’ institutions, and survey questionnaires were distributed in English, in line with previous studies. In the initial phase, about 900 individuals were contacted to fill out a survey on workload, time pressure, sensitivity to quality, sensitivity to rewards, and demographic data; 840 surveys had been returned by the end of this phase.
In the second phase, conducted 1-2 weeks after the first, the same participants were asked to complete a survey on their use of ChatGPT; this phase closed with 675 responses. Two weeks later, those 675 respondents were contacted again to report on memory loss, procrastination, and academic performance. By the close of the third phase, approximately 540 questionnaires had been returned. Once surveys with missing data were removed, the final data set contained 494 complete responses, which were used for subsequent analyses. Of the respondents, 50.8% were men, and the average age was 22.16 years (SD = 3.47). Additionally, 88% of participants were affiliated with public-sector institutions and 12% with private universities. By academic discipline, 65% were pursuing business, 3% computer science, 12% general education, 1% English language, 9% public administration, and 10% sociology. Lastly, 74% were enrolled in undergraduate programs, 24% in master’s programs, and 2% in PhD programs.
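As a minimal illustration of the matching procedure (file and column names below are hypothetical placeholders, not our actual instruments), the three waves can be joined on each student’s unique code, with incomplete cases dropped listwise:

```python
# Sketch of matching the three survey waves via each student's unique code.
import pandas as pd

wave1 = pd.read_csv("wave1_predictors.csv")   # workload, time pressure, sensitivities
wave2 = pd.read_csv("wave2_chatgpt_use.csv")  # 8-item ChatGPT use scale
wave3 = pd.read_csv("wave3_outcomes.csv")     # procrastination, memory loss, CGPA

# Inner joins keep only students who responded in all three waves.
merged = (wave1.merge(wave2, on="student_code")
               .merge(wave3, on="student_code"))

# Listwise deletion of surveys with missing answers yields the final sample.
final = merged.dropna()
print(len(final))  # 494 complete responses in this study
```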
B. Measures
All measures except the use of ChatGPT were rated on a 5-point Likert-type scale anchored at 1 (strongly disagree) and 5 (strongly agree). The use of ChatGPT was measured on a 6-point Likert-type scale anchored at 1 (never) and 6 (always). The full list of items for all measures is presented in Table 3. Academic workload: Peterson et al.’s (1995) 4-item scale was used to measure academic workload. A sample item is ‘I feel overburdened due to my studies.’ Academic time pressure: Dapkus’s (1985) 4-item scale was used to measure time pressure. A sample item is ‘I don’t have sufficient time to prepare for my class projects.’ Sensitivity to rewards: We assessed sensitivity to rewards using a 2-item scale; the items were ‘I am concerned about my CGPA’ and ‘I am worried about my semester grades.’ Sensitivity to quality: Sensitivity to quality was assessed using a 2-item scale; the items were ‘I am sensitive about the quality of my course assignments’ and ‘I am concerned about the quality of my course projects.’ Use of ChatGPT: We measured the use of ChatGPT with the 8-item scale created in Study 1. A sample item is ‘I use ChatGPT for my academic activities.’ Procrastination: The 4-item scale developed by Choi and Moran (2009) was employed to assess procrastination. A sample item is ‘I’m often running late when getting things done.’ Memory loss: We assessed memory loss using a 3-item scale. A sample item is ‘Nowadays, I can’t retain too much in my mind.’ Academic performance: An objective assessment of academic performance was used to avoid possible biases from self-reporting or social desirability. All students supplied their latest CGPA, which ranges from 1 (lowest) to 4 (highest). Since each respondent’s CGPA was captured as a single score, no reliability or validity testing was required for it.
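As an illustration of how the reliabilities reported in Table 3 can be computed from raw responses, Cronbach’s alpha for any multi-item measure follows directly from its item matrix; the sketch below uses randomly generated data for demonstration only.

```python
# Cronbach's alpha from an items matrix (rows = respondents, columns = items).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Demonstration with random 5-point responses for a 4-item scale
# (real scale scores, not random data, were used in the study).
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(494, 4)).astype(float)
print(round(cronbach_alpha(demo), 3))
```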
| Items | Loadings | CA | CR | AVE |
|---|---|---|---|---|
| Workload | 0.745 | 0.795 | 0.580 | |
| My academic workload is too heavy | 0.761 | |||
| I feel overloaded by the work my studies require | 0.738 | |||
| I feel overburdened due to my studies | 0.710 | |||
| The teacher(s) give too much work to do | 0.688 | |||
| Time pressure | 0.640 | 0.733 | 0.462 | |
| I don’t have enough time to prepare for my class projects | 0.729 | |||
| I don’t have enough time to complete study-related tasks with appropriate care | 0.710 | |||
| I find it difficult to submit my assignments and projects within the deadlines | 0.704 | |||
| I am often in a hurry when it comes to meeting academic deadlines | 0.411 |||
| Sensitivity to rewards | 0.781 | 0.844 | 0.794 | |
| I am worried about my CGPA | 0.847 | |||
| I am concerned about my semester grades | 0.844 | |||
| Sensitivity to quality | 0.617 | 0.771 | 0.673 | |
| I am concerned about the quality of my course projects | 0.830 | |||
| I am sensitive about the quality of my course assignments | 0.725 | |||
| Use of ChatGPT | 0.803 | 0.822 | 0.496 | |
| I use ChatGPT for my academic activities | 0.712 | |||
| I use ChatGPT to prepare for my tests or quizzes | 0.695 | |||
| I use ChatGPT for my course projects | 0.688 | |||
| I use ChatGPT to learn course-related concepts | 0.678 | |||
| I rely on ChatGPT for my studies | 0.671 | |||
| I use ChatGPT for my course assignments | 0.662 | |||
| I am addicted to ChatGPT when it comes to studies | 0.635 | |||
| ChatGPT is part of my campus life | 0.632 | |||
| Procrastination | 0.656 | 0.745 | 0.477 | |
| I often fail to accomplish goals that I set for myself | 0.695 | |||
| I’m often running late when getting things done | 0.692 | |||
| I often start things at the last minute and find it difficult to complete them on time | 0.639 | |||
| I have difficulty finishing activities once I start them | 0.610 | |||
| Memory loss | 0.657 | 0.760 | 0.572 | |
| Nowadays, I often forget things to do | 0.762 | |||
| Nowadays, I can’t retain too much in my mind | 0.729 | |||
| Nowadays, I feel that I am losing my memory | 0.665 |
CA – Cronbach’s Alpha, CR – Composite Reliability, AVE – Average Variance Extracted
V. ANALYSES AND RESULTS (STUDY 2)
We applied the partial least squares (PLS) method to test our measurement model and hypotheses. PLS is a second-generation structural equation modeling (SEM) technique that controls for measurement error when estimating relationships between latent variables, giving it an advantage over first-generation techniques (Hair et al., 2017). PLS software uses bootstrapping, that is, resampling from the data, to generate standard errors and confidence intervals, yielding a better estimate of the stability of model parameters (Hair et al., 2017, 2019). In addition, PLS is often recommended for small sample sizes and non-normal distributions (Hair et al., 2019).
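To illustrate the resampling logic, not the proprietary SmartPLS implementation, the following sketch bootstraps a single path (regression slope) with synthetic data; all variable names and values are hypothetical placeholders.

```python
# Percentile bootstrap of a simple path coefficient, mirroring the resampling
# logic PLS software applies to structural coefficients.
import numpy as np

rng = np.random.default_rng(42)
n = 494
# Hypothetical standardized construct scores, for illustration only.
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n)

def slope(x, y):
    # OLS slope of y on x for standardized-style scores.
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

boot = []
for _ in range(5000):                      # 5,000 resamples, as in the study
    idx = rng.integers(0, n, size=n)       # resample cases with replacement
    boot.append(slope(x[idx], y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% percentile confidence interval
print(f"beta = {slope(x, y):.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```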
A. Measurement model
The measurement model is presented in Figure 2. We inspected all elements of this model and calculated the standardized factor loadings, CA, CR, and AVE, which are commonly used measures. The measurement model indicated satisfactory validity and reliability. As shown in Table 3, the standardized factor loadings were generally above the recommended level of 0.70 (Hair et al., 2019), and the CA, CR, and AVE values for each measure met or closely approached the conventional thresholds of 0.70, 0.70, and 0.50, respectively, supporting each construct’s reliability and convergent validity (Hair et al., 2019). Furthermore, discriminant validity assures that each latent construct differs from the others. Discriminant validity is established according to Fornell and Larcker’s (1981) criterion if the square root of the AVE for each construct is greater than its correlations with the other constructs. As observed in Table 4, the square root of the AVE for each construct (the diagonal values shown in bold) exceeded the construct’s correlations with other constructs, establishing discriminant validity for all constructs. Similarly, Henseler et al. (2015) suggest the Heterotrait-Monotrait (HTMT) ratio as a better measure of discriminant validity, and it is widely applied by researchers (e.g., Hosta & Zabkar, 2021). HTMT ratios below 0.85 are considered sufficient for establishing discriminant validity (Henseler et al., 2015). As indicated in Table 4, all HTMT values were below this cut-off, affirming the discriminant validity of the constructs under study. Additionally, to test for multicollinearity, we estimated the variance inflation factor (VIF), which should be below 5 to rule out multicollinearity between constructs (Hair et al., 2019). All VIF scores were below 5, indicating that multicollinearity was not a problem.
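For transparency, the HTMT ratio can be computed directly from item-level correlations, as in the minimal sketch below; the item lists passed to the function are hypothetical placeholders for any two constructs’ item blocks.

```python
# Sketch of the Heterotrait-Monotrait (HTMT) ratio for two constructs,
# computed from the absolute item-level correlation matrix.
import numpy as np
import pandas as pd

def htmt(data: pd.DataFrame, items_a: list, items_b: list) -> float:
    corr = data.corr().abs()
    # Mean heterotrait correlation: items of construct A with items of B.
    hetero = corr.loc[items_a, items_b].to_numpy().mean()

    # Mean monotrait correlation: average off-diagonal correlation
    # within each construct's own item block.
    def mono(items):
        block = corr.loc[items, items].to_numpy()
        return block[np.triu_indices_from(block, k=1)].mean()

    return hetero / np.sqrt(mono(items_a) * mono(items_b))

# Usage (hypothetical column names): values below 0.85 indicate
# discriminant validity (Henseler et al., 2015).
# htmt(df, ["wl1", "wl2", "wl3", "wl4"], ["tp1", "tp2", "tp3", "tp4"])
```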
Fornell & Larcker criterion (diagonal values in bold are the square roots of the AVE):

| Construct | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| 1. Workload | **0.725** | | | | | | |
| 2. Time pressure | 0.460 | **0.650** | | | | | |
| 3. Sensitivity to rewards | 0.074 | 0.041 | **0.845** | | | | |
| 4. Sensitivity to quality | 0.166 | 0.004 | 0.389 | **0.779** | | | |
| 5. Use of ChatGPT | 0.116 | 0.136 | -0.041 | 0.028 | **0.672** | | |
| 6. Procrastination | 0.176 | 0.266 | 0.052 | 0.040 | 0.207 | **0.660** | |
| 7. Memory loss | 0.178 | 0.146 | 0.011 | 0.043 | 0.173 | 0.451 | **0.720** |

Heterotrait-Monotrait (HTMT) ratios:

| Construct | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| 2. Time pressure | 0.595 | | | | | |
| 3. Sensitivity to rewards | 0.108 | 0.062 | | | | |
| 4. Sensitivity to quality | 0.246 | 0.061 | 0.511 | | | |
| 5. Use of ChatGPT | 0.133 | 0.166 | 0.078 | 0.076 | | |
| 6. Procrastination | 0.236 | 0.393 | 0.065 | 0.079 | 0.261 | |
| 7. Memory loss | 0.245 | 0.234 | 0.039 | 0.074 | 0.222 | 0.624 |
B. Structural model
We then tested the study’s hypotheses regarding direct and indirect effects using bootstrapping with 5,000 resamples in SmartPLS (Hair et al., 2017). The structural model is presented in Figure 3. Table 5 shows a positive relationship between workload and the use of ChatGPT (β = 0.133, t = 2.622, p < 0.01): students with high academic workloads were more likely to use ChatGPT, supporting hypothesis 1. Time pressure was also significantly and positively related to ChatGPT use (β = 0.163, t = 3.226, p < 0.001), supporting hypothesis 2; students facing urgent academic deadlines were more likely to use ChatGPT. Students with higher sensitivity to rewards were less likely to use ChatGPT (β = -0.102, t = 1.710, p < 0.10), favoring hypothesis 3b over hypothesis 3a. We did not find a significant relationship between sensitivity to quality and the use of ChatGPT (β = 0.033, t = 0.590, n.s.), so hypothesis 4 is not supported. The analysis revealed a significant relationship between the use of ChatGPT and procrastination (β = 0.309, t = 6.984, p < 0.001), consistent with hypothesis 5: students who used ChatGPT regularly tended to procrastinate more than those who seldom used it.
The use of ChatGPT was also associated with memory loss (β = 0.274, t = 6.452, p < 0.001), supporting hypothesis 6: students who used ChatGPT frequently reported experiencing memory problems. Likewise, the use of ChatGPT was related to poorer academic performance (CGPA) (β = -0.104, t = 2.390, p < 0.05); students who frequently used ChatGPT for academic work had lower CGPAs. These results support hypothesis 7. The results for all indirect effects are presented in Table 6. Workload had an indirect effect, through the use of ChatGPT, on procrastination (indirect effect = 0.041, t = 2.384, p < 0.05) and memory loss (indirect effect = 0.036, t = 2.333, p < 0.05): students with heavier workloads used ChatGPT more, which led to procrastination and memory loss. Workload also had a negative indirect effect on academic performance through the use of ChatGPT (indirect effect = -0.014, t = 1.657, p < 0.10); that is, students with heavier workloads tended to use ChatGPT, which in turn was associated with poorer academic performance. These findings corroborate hypothesis 8.
Time pressure positively affected procrastination (indirect effect = 0.050, t = 2.607, p < 0.01) and memory loss (indirect effect = 0.045, t = 2.574, p < 0.01) by increasing the use of ChatGPT: students facing significant time pressure tended to use ChatGPT, which encouraged procrastination and created memory problems. Likewise, time pressure exhibited a negative indirect effect on academic achievement (indirect effect = -0.017, t = 1.680, p < 0.10), mediated by greater reliance on ChatGPT. Hypothesis 9 was thus supported: students under greater time pressure tended to rely more on ChatGPT, which negatively affected their academic performance. Sensitivity to rewards had negative indirect effects on procrastination and memory loss through the use of ChatGPT: more reward-sensitive students used ChatGPT less, which led to less procrastination and memory loss. However, the indirect effect of reward sensitivity on academic performance was not significant. These results largely support hypothesis 10 with respect to procrastination and memory loss. Finally, the indirect effects of quality sensitivity on procrastination, memory loss, and academic performance through the use of ChatGPT were all non-significant; hence, hypothesis 11 was not supported.
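For readers unfamiliar with product-of-coefficients mediation tests, the sketch below illustrates how an indirect effect (predictor -> ChatGPT use -> outcome) and its percentile bootstrap confidence interval can be computed; the data and effect sizes are synthetic placeholders, not our study data, and the actual analysis used SmartPLS.

```python
# Sketch of a bootstrapped indirect effect a*b for a simple mediation chain.
import numpy as np

rng = np.random.default_rng(7)
n = 494
# Hypothetical standardized scores, for illustration only.
x = rng.normal(size=n)                 # e.g., workload
m = 0.13 * x + rng.normal(size=n)      # e.g., use of ChatGPT
y = 0.31 * m + rng.normal(size=n)      # e.g., procrastination

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                   # path a: predictor -> mediator
    X = np.column_stack([np.ones(len(x)), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]  # path b: mediator -> outcome,
    return a * b                                 # controlling for the predictor

boot = []
for _ in range(5000):                            # 5,000 resamples
    i = rng.integers(0, n, size=n)
    boot.append(indirect(x[i], m[i], y[i]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI = [{lo:.3f}, {hi:.3f}]")  # CI excluding 0 -> mediation
```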
| Hypothesis | Path | Coefficient | T Statistics | P-value | Status |
|---|---|---|---|---|---|
| H1 | Workload -> Use of ChatGPT | 0.133 | 2.622 | < 0.01 | Supported |
| H2 | Time Pressure -> Use of ChatGPT | 0.163 | 3.226 | < 0.001 | Supported |
| H3a, H3b | Sensitivity to Rewards -> Use of ChatGPT | -0.102 | 1.710 | < 0.10 | H3b supported |
| H4 | Sensitivity to Quality -> Use of ChatGPT | 0.033 | 0.590 | n.s. | Not supported |
| H5 | Use of ChatGPT -> Procrastination | 0.309 | 6.984 | < 0.001 | Supported |
| H6 | Use of ChatGPT -> Memory Loss | 0.274 | 6.452 | < 0.001 | Supported |
| H7 | Use of ChatGPT -> Academic Performance | -0.104 | 2.390 | < 0.05 | Supported |
| H8 | Workload -> Use of ChatGPT -> Procrastination | 0.041 | 2.384 | < 0.05 | Supported |
| H8 | Workload -> Use of ChatGPT -> Memory Loss | 0.036 | 2.333 | < 0.05 | Supported |
| H8 | Workload -> Use of ChatGPT -> Academic Performance | -0.014 | 1.657 | < 0.10 | Supported |
| H9 | Time Pressure -> Use of ChatGPT -> Procrastination | 0.050 | 2.607 | < 0.01 | Supported |
| H9 | Time Pressure -> Use of ChatGPT -> Memory Loss | 0.045 | 2.574 | < 0.01 | Supported |
| H9 | Time Pressure -> Use of ChatGPT -> Academic Performance | -0.017 | 1.680 | < 0.10 | Supported |
| H10 | Sensitivity to Rewards -> Use of ChatGPT -> Procrastination | -0.022 | 1.676 | < 0.10 | Supported |
| H10 | Sensitivity to Rewards -> Use of ChatGPT -> Memory Loss | -0.018 | 1.668 | < 0.10 | Supported |
| H10 | Sensitivity to Rewards -> Use of ChatGPT -> Academic Performance | 0.001 | 1.380 | n.s. | Not supported |
| H11 | Sensitivity to Quality -> Use of ChatGPT -> Procrastination | 0.009 | 0.482 | n.s. | Not supported |
| H11 | Sensitivity to Quality -> Use of ChatGPT -> Memory Loss | 0.008 | 0.482 | n.s. | Not supported |
| H11 | Sensitivity to Quality -> Use of ChatGPT -> Academic Performance | -0.002 | 0.435 | n.s. | Not supported |
VI. OVERALL DISCUSSION
A. Major findings
The recent rise of generative AI technology has had a remarkable impact on numerous socioeconomic fields, including institutions of higher learning. Consequently, there has been an observable surge in debate among researchers and instructors about the transformative potential of generative AI, and ChatGPT in particular, in higher learning, as well as the accompanying risks (Dalalah & Dalalah, 2023; Meyer et al., 2023; Peters et al., 2023; Yilmaz & Yilmaz, 2023a). Yet the drivers of students’ use of ChatGPT remain poorly understood, and few empirical studies have shed light on the incentives that lead students to use it. Likewise, little literature addresses the potential positive or negative effects of ChatGPT usage (Dalalah & Dalalah, 2023; Paul et al., 2023), even though the tool has been banned by many institutions around the world. To address these gaps, the present research examined workload, time pressure, sensitivity to rewards, and sensitivity to quality as possible determinants of ChatGPT adoption, and investigated the impact of using ChatGPT on students’ procrastination, memory loss, and academic performance.
The results showed that students with heavy academic workloads and tight time constraints tended to use ChatGPT more often. Regarding the competing hypotheses about the effect of reward sensitivity, the results showed that more reward-sensitive students tended to use ChatGPT less, suggesting that reward-sensitive students avoid ChatGPT because they fear receiving a poor mark if caught. Interestingly, the research did not observe any considerable connection between sensitivity to quality and the use of ChatGPT. An emphasis on quality does not always determine ChatGPT use: some quality-conscious students may view work produced through individual effort as high-quality work, while others may regard work done with ChatGPT the same way. In addition, our findings indicate that overdependence on ChatGPT may have adverse effects on students’ academic and personal outcomes. Students who constantly used ChatGPT were more inclined to procrastinate than counterparts who never or rarely used the platform.
Students who relied heavily on ChatGPT also reported memory loss, and those who frequently used ChatGPT for academic work had a lower cumulative grade point average (CGPA). The use of ChatGPT was found to mediate the effects of academic workload and time pressure on procrastination and memory loss, and these pressures also indirectly harmed students’ academic performance through over-reliance on ChatGPT. Consistently, our findings suggest that higher reward sensitivity deterred students from using ChatGPT for academic purposes; students who used ChatGPT less, in turn, showed less procrastination and memory loss.
VII. THEORETICAL IMPLICATIONS
The present study addresses the need for a new scale to assess the use of ChatGPT and empirically explores the beneficial and detrimental impacts of ChatGPT in higher education, enhancing insight into the workings of generative AI tools. The first study employed a university student sample to develop and validate the ChatGPT use scale; we hope that the publication of this new scale will assist future work in this area. The second study confirmed the validity of the scale with a heterogeneous group of university students from several fields of study and examined the likely causes and consequences of ChatGPT use. To our knowledge, this is among the first attempts to examine empirically why students engage with ChatGPT. The research documents the contribution of academic workload, time pressure, sensitivity to rewards, and sensitivity to quality in motivating students to employ ChatGPT for academic purposes. In addition, the research adds to the literature by examining the possible adverse effects of ChatGPT use, demonstrating in particular that overuse of ChatGPT may lead to procrastination, memory loss, and a decline in students’ academic performance. This research provides a foundation for subsequent studies on the advantages and disadvantages of employing generative AI in learning environments.
VIII. PRACTICAL IMPLICATIONS
The research has significant implications for numerous stakeholders in higher education, including institutions, policymakers, teachers, and students. Our results indicate that both heavy workload and time pressure significantly influence students to use ChatGPT for academic purposes. Higher education institutions should therefore prioritize sound time management and workload allocation when assigning academic tasks and setting deadlines. Although ChatGPT may help students complete challenging academic tasks under tight deadlines, students should be mindful of the potential adverse consequences of excessive dependence on the technology. Instead, they need to learn to treat it as an auxiliary learning tool rather than a means of completing academic tasks without mental effort. Encouraging students to find a balance between technological support and their own effort can foster a more holistic approach to learning.
Likewise, policymakers and teachers need to design curricula and pedagogy that leverage students' natural curiosity and passion for learning. While the ease of use of ChatGPT is attractive, creating an environment in which students derive pleasure from independently grasping challenging concepts may diminish overdependence on generative AI technologies. In addition, acknowledging and rewarding students for their genuine intellectual effort can provide a sense of fulfillment greater than the allure of quick AI-generated solutions. As Chaudhry et al. (2023) argue, teachers need to address the problem of students using ChatGPT. One approach is to revisit how student performance is assessed and devise new assessment criteria that emphasize creative and critical thinking skills in completing projects and assignments, rather than reliance on generative AI tools. Moreover, given the initial evidence that over-reliance on ChatGPT harms students' academic performance and memory, teachers should actively promote critical thinking and problem-solving skills by assigning tasks that cannot be completed with ChatGPT alone.
Teachers and policymakers also need to develop interventions that address both the causes and the consequences of ChatGPT use. Such interventions should target variables like workload, time pressure, and reward sensitivity, which contribute to students' overdependence on ChatGPT, while also addressing its adverse effects, including procrastination, memory loss, and poor academic performance. Personalized guidance, skill-enhancement workshops, and awareness initiatives are all useful elements of such interventions, enabling students to unlock the potential of generative AI technologies without jeopardizing their learning.
IX. LIMITATIONS AND FUTURE RESEARCH DIRECTIONS
Like any study, this one has a few limitations. First, although we employed a time-lagged design rather than the cross-sectional approaches used in other studies (e.g., Strzelecki, 2023), we could not entirely rule out reciprocal relationships; for instance, ChatGPT use might lower subsequent workload perceptions. Future research could examine these causal mechanisms with a fully longitudinal design. In addition, to gain deeper insight into generative AI adoption, future studies should examine how personality traits, such as trust propensity and the Big Five, relate to ChatGPT use. Understanding how these traits shape perceptions of ChatGPT's reliability, trustworthiness, and effectiveness could illuminate user-machine interactions in the generative AI context. Additionally, our finding that quality consciousness has a negligible impact on ChatGPT use deserves deeper exploration: while some quality-conscious individuals may believe their own effort is necessary to produce high-quality work, others might consider that ChatGPT can help them achieve quality in their studies.
Contextual factors, such as individuals' propensity to trust generative AI, may also shape the role of quality consciousness in ChatGPT use. In the same vein, the prospect of penalties may deter some students from plagiarizing with ChatGPT. As one anonymous reviewer suggested, future research could investigate the benefits associated with generative AI and compare patterns of ChatGPT use across knowledge domains and genders to analyze differences in its effects. Finally, research could explore the implications of ChatGPT use for students' cognitive function, mental well-being, and learning experiences, thereby adding to the debate surrounding generative AI in higher education.
ACKNOWLEDGEMENTS
The authors thank all participating student teachers from Kancheepuram district, Tamilnadu, India.
References
[1] Bahrini, A., Khamoshifar, M., Abbasimehr, H., Riggs, R. J., Esmaeili, M., Majdabadkohne, R. M., & Pasehvar, M. (2023). ChatGPT: Applications, opportunities, and threats. In 2023 Systems and Information Engineering Design Symposium (SIEDS) (pp. 274–279).
[2] Carless, D., Jung, J., & Li, Y. (2023). Feedback as socialization in doctoral education: Towards the enactment of authentic feedback. Studies in Higher Education. https://doi.org/10.1080/03075079.2023.2242888
[3] EduKitchen. (2023). Chomsky on ChatGPT, education, Russia, and the unvaccinated [Video]. YouTube. https://www.youtube.com/watch?v=IgxzcOugvEI&t=1182s
[4] Fatima, S., Abbas, M., & Hassan, M. M. (2023). Servant leadership, ideology-based culture and job outcomes: A multi-level investigation among hospitality workers. International Journal of Hospitality Management, 109, 103408. https://doi.org/10.1016/j.ijhm.2022.103408
[5] Haensch, A., Ball, S., Herklotz, M., & Kreuter, F. (2023). Seeing ChatGPT through students' eyes: An analysis of TikTok data. arXiv preprint arXiv:2303.05349. https://doi.org/10.48550/arXiv.2303.05349
[6] King, M. R. (2023). A conversation on artificial intelligence, chatbots, and plagiarism in higher education. Cellular and Molecular Bioengineering, 16(1), 1–2. https://doi.org/10.1007/s12195-022-00754-8
[7] Lee, H. (2023). The rise of ChatGPT: Exploring its potential in medical education. Anatomical Sciences Education. https://doi.org/10.1002/ase.2270
[8] Malik, M., Abbas, M., & Imam, H. (2023). Knowledge-oriented leadership and workers' performance: Do individual knowledge management engagement and empowerment matter? International Journal of Manpower. https://doi.org/10.1108/IJM-07-2022-0302
[9] Novak, D. (2023, January 17). Why US schools are blocking ChatGPT. Retrieved from https://learningenglish.voanews.com/a/why-us-schools-are-blocking-chatgpt/6914320.html
[10] Olugbara, C. T., Imenda, S. N., Olugbara, O. O., & Khuzwayo, H. B. (2020). Moderating effect of innovation consciousness and quality consciousness on intention-behaviour relationship in E-learning integration. Education and Information Technologies, 25, 329–350. https://doi.org/10.1007/s10639-019-09960-w
[11] Paul, J., Ueno, A., & Dennis, C. (2023). ChatGPT and consumers: Benefits, pitfalls and future research agenda. International Journal of Consumer Studies, 47(4), 1213–1225. https://doi.org/10.1111/ijcs.12928
[12] Reuters. (2023). Top French university bans use of ChatGPT to prevent plagiarism. Retrieved from https://www.reuters.com/technology/top-french-university-bans-use-ChatGPT-prevent-plagiarism-2023-01-27/
[13] Stojanov, A. (2023). Learning with ChatGPT 3.5 as a more knowledgeable other: An autoethnographic study. International Journal of Educational Technology in Higher Education, 20(1), 35. https://doi.org/10.1186/s41239-023-00404-7
[14] Yan, D. (2023). Impact of ChatGPT on learners in a L2 writing practicum: An exploratory investigation. Education and Information Technologies. https://doi.org/10.1007/s10639-023-11742-4
Authors

Dr. T. Sundararasan is working as an Assistant Professor at the School of Education, Sri Chandrasekarendra Saraswathi Viswa Mahavidyalaya, Kanchipuram, Tamilnadu, India. He specializes in educational technology, management, and evaluation, and has published in Scopus-indexed and UGC Care journals. He also actively teaches and mentors, fostering a vibrant learning environment.

