
Launching a Nuclear Reactor Model for war perspective techniques Adoption

John Rose, Ph.D., Professor, Nuclear Engineering Dept., Warsaw University, Poland

Abstract

Background: The rapid integration of generative artificial intelligence (GenAI) into higher education necessitates a deep understanding of university teachers’ acceptance, a process more complex than for previous technologies due to profound ethical and professional implications. Existing technology acceptance models offer limited insight into the unique factors influencing GenAI adoption among academics.

Objective: This study aims to construct and validate a hierarchical model of the key factors influencing university teachers’ GenAI acceptance, determining their relative importance to inform targeted intervention strategies.

Methods: A sequential mixed-methods approach was employed. An initial factor set was derived from an integrative literature review, meta-analysis, and behavioral log analysis. A two-round Delphi study with 18 interdisciplinary experts refined the indicators. The Analytic Hierarchy Process (AHP) was then used to determine the relative weights of the finalized dimensions and indicators.

Results: The study established a six-dimensional model. Rational Cognition (weight: 0.216) and Technology Quality (weight: 0.210) emerged as the primary drivers. Within these, Teaching Ethics (weight: 0.356) and Academic Integrity (weight: 0.208) under Rational Cognition, and System Operation Quality (weight: 0.390) under Technology Quality, were the most critical individual indicators. Affective Attitude and Self-Efficacy acted as key psychological mediators, while the Organizational Environment was a foundational but less decisive factor (weight: 0.079) at this stage.

Conclusions: The findings reveal a “Technology-Human Dual-Core” model in which ethical considerations and technological reliability are paramount, challenging the primacy of performance expectancy in classic models.
This study provides a validated framework for institutions to prioritize teacher development and for developers to enhance GenAI tools, facilitating the responsible integration of GenAI into higher education.

Keywords: Generative AI, University teachers, Technology acceptance, Delphi study, Analytic Hierarchy Process

1. Introduction

Generative Artificial Intelligence (GenAI) represents a paradigm-shifting technology with the potential to fundamentally reshape teaching, research, and administration in higher education. Unlike previous educational technologies, GenAI’s capacity for content creation introduces both unprecedented opportunities for personalized learning and research acceleration and significant challenges to traditional pedagogical roles, academic integrity, and the epistemology of knowledge creation (Dwivedi et al., 2023). In this transformative landscape, university teachers are the central agents whose acceptance and effective adoption of GenAI will ultimately determine its successful and sustainable integration.

However, the technology acceptance process for university teachers is markedly complex. It transcends the rational “utility-ease of use” calculus central to classic models like the Technology Acceptance Model (TAM) or the Unified Theory of Acceptance and Use of Technology (UTAUT). Instead, it constitutes a multi-level, multi-dimensional decision-making system intricately weaving together rational calculation, emotional response, capability beliefs, organizational climate, and perceptions of the technology itself.

Applying established technology acceptance models directly to GenAI reveals significant theoretical and contextual gaps. Firstly, while TAM and UTAUT effectively predict acceptance of relatively stable, productivity-oriented tools, they under-theorize the profound ethical and epistemic challenges intrinsic to GenAI. For academics, issues such as plagiarism, authorship, the erosion of critical thinking, and the preservation of pedagogical authority are not peripheral concerns but potential core determinants of acceptance (Bozkurt, 2023). Secondly, these models often treat technology as a static “black box,” relegating its objective attributes to external variables. For a rapidly evolving technology where output accuracy and reliability are highly variable, the perceived quality of the system itself may be as influential as individuals’ perceptions of its usefulness. Finally, the predominant focus of existing research on student populations or generic professionals neglects the unique institutional, disciplinary, and professional identity factors that shape the decision-making of university teachers.

To address these gaps, this study moves beyond merely applying an existing model. It aims to construct a contextualized, hierarchically-weighted framework specific to university teachers’ GenAI acceptance. We posit that for this group, acceptance is a multi-dimensional construct where ethical calculus and trust in the technology’s fundamental quality may supersede or heavily moderate traditional utility-based calculations. To achieve this, the study adopts a “Delphi-AHP” hybrid research paradigm. This approach systematically integrates qualitative expert consensus (via the Delphi method) with quantitative prioritization (via the Analytic Hierarchy Process), transforming collective wisdom into a structured model that reveals not just which factors matter, but which matter most.
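To make the AHP stage of this paradigm concrete, the prioritization step can be sketched as follows: experts fill a pairwise-comparison matrix on Saaty’s 1-9 scale, the priority vector is taken as the principal eigenvector, and judgment quality is screened with the consistency ratio (CR < 0.1 is the conventional threshold). The matrix values below are purely illustrative, not the study’s actual expert judgments.

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray):
    """Derive priority weights from an AHP pairwise-comparison matrix via the
    principal eigenvector, and compute the consistency ratio (CR)."""
    n = pairwise.shape[0]
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)            # index of the principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                        # normalized priority vector
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)           # consistency index
    # Saaty's random index for matrix orders 1..6
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]
    cr = ci / ri if ri else 0.0
    return w, cr

# Hypothetical 3x3 comparison of three criteria on the 1-9 scale.
A = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
w, cr = ahp_weights(A)
print(w.round(3), round(cr, 3))   # this matrix is perfectly consistent, so CR ~ 0
```

Because the example matrix satisfies a_ij = w_i/w_j exactly, the recovered weights are 4/7, 2/7, 1/7 and the CR is effectively zero; real expert matrices require the CR screen before their weights are used.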

This research holds significant value. Theoretically, it contributes to a more nuanced understanding of technology acceptance in the age of AI, challenging and potentially expanding classic models. Practically, the resulting weighted model provides university administrators, faculty developers, and educational technology designers with an evidence-based roadmap for prioritizing efforts, from designing targeted training programs to refining GenAI tools and policies.

2. Building the Core Influencing Factors System Using the Delphi Method

To construct a robust influencing-factors system, this chapter details the development of the initial indicator set and its refinement through a two-round Delphi expert consultation aimed at reaching consensus.

2.1 Construction of the Initial Indicator System

The initial indicator system was grounded in an “Evidence-Context-Theory” tripartite integration, synthesizing multi-source evidence from prior research.

Table 1. GenAI Technology Acceptance Indicator System for University Teachers Based on Bibliometrics

Primary Dimension | Secondary Indicator | Tertiary Indicator | Measurement Focus
Rational Cognition | Technology Utility Perception | Perceived Usefulness | Degree of teaching efficiency improvement, lesson preparation time reduction, student engagement increase, value of academic research support functions
 | | Perceived Ease of Use | Interface friendliness, operational logic consistency, compatibility with traditional tools, acceptability of learning cost
 | | Technology Fit | Teaching scenario congruence, support for personalized learning design, capability for full-chain teaching support
 | Functional Value Assessment | Teaching Design Support | Lesson plan generation assistance effectiveness, courseware production efficiency, classroom interaction design support
 | | Assessment System Value | Accuracy of learning analytics, efficiency of assignment grading, timeliness of teaching feedback
Emotional Attitude | Positive Emotional Factors | Technology Trust | Perception of technology reliability, confidence in data security, trust in output accuracy
 | | Emotional Attachment Strength | Enjoyable usage experience, degree of technology dependence, level of emotional identification
 | | Innovation Adoption Tendency | Courage for technological exploration, intensity of experimental spirit, openness to change
 | Negative Emotional Factors | Anxiety Level | Operational anxiety (basic), anxiety about being surpassed by AI (intermediate), anxiety about professional devaluation (advanced)
 | | Risk Perception Intensity | Concerns about job replacement, perception of threat to professional authority, concerns about weakened teaching autonomy
Individual Traits | Capability Belief System | Self-Efficacy | Confidence in technical operation, perception of problem-solving ability, learning adaptability
 | | AI Digital Literacy | Understanding of technical basics, proficiency in tool application, ability for ethical judgment
 | Psychological Capital Characteristics | Cognitive Resilience Level | Confidence in facing technical challenges, ability to withstand frustration, willingness for continuous learning
 | | Innovation Consciousness Strength | Sensitivity to opportunity identification, initiative in experimental exploration, willingness to promote change
Organizational Environment | Institutional Support System | Policy Clarity | Completeness of usage guidelines, clarity of academic ethical boundaries, reasonableness of incentive mechanisms
 | | Resource Support Level | Completeness of training system, degree of hardware facility support, accessibility of professional services
 | Cultural Atmosphere Shaping | Leadership Support | Strength of vision inspiration, commitment to resource investment, degree of risk tolerance
 | | Peer Demonstration Effect | Visibility of success cases, activity of experience sharing, collective learning atmosphere
 | | Organizational Innovation Culture | Space for trial-and-error tolerance, knowledge sharing mechanisms, encouragement for cross-boundary collaboration
Ethical Risks | Academic Integrity Risks | Definition of Misconduct | Clarity of appropriate use boundaries, foreseeability of violation consequences, effectiveness of detection mechanisms
 | | Academic Quality Assurance | Maintenance of critical thinking, independence of academic innovation, standards for originality of outcomes
 | Teaching Ethics Risks | Impact on Student Development | Guarantee of cognitive skill development, maintenance of learning motivation, fostering of innovative ability
 | | Teacher-Student Relationship Maintenance | Stability of teaching authority structure, quality of emotional connection, guarantee of educational subjectivity
 | Data Security Risks | Personal Information Protection | Security of private data, management standards for learning data, transparency of information use
 | | Digital Equity Assurance | Equality in resource access, fairness in skill development opportunities, addressing the digital divide
Group Difference Moderators | Disciplinary Background Differences | Disciplinary Epistemic Features | Applicability in empirical disciplines, acceptance characteristics in interpretive disciplines, feasibility of cross-disciplinary application
 | | Special Teaching Scenarios | Application models in STEM fields, applications in Humanities & Social Sciences, applications in Arts disciplines
 | Career Stage Differences | Age/Generational Characteristics | Technology adaptability of digital natives, transition support for senior teachers, response of cognitive flexibility
 | | Career Development Needs | Innovation breakthrough for junior faculty, professional deepening for mid-career faculty, legacy for senior faculty
Technology Characteristics | System Function Quality | Technology Maturity | Function stability, output accuracy, response timeliness
 | | User Experience Design | Interaction friendliness, gentle learning curve, space for personalization
 | Application Scenario Fit | Teaching Process Integration | Support for pre-class preparation, assistance for in-class interaction, enhancement for post-class assessment
 | | Research Innovation Support | Depth of literature analysis, data processing capability, assistance with expressing research outcomes

Meta-analysis revealed that university teachers’ willingness to accept GenAI is significantly influenced by individual characteristics and cognition; the results are shown in Table 2. They reflect the multiple moderating effects of discipline, organization, and career stage, and encompass elements from theoretical frameworks such as TAM, UTAUT, and TPB, providing a basis for differentiated, precisely targeted teacher development strategies.

Table 2. GenAI Technology Acceptance Indicator System for University Teachers Based on Meta-Analysis

Dimension | Indicator | Measurement Content
Core Cognitive Dimension | Perceived Usefulness | Degree of teaching efficiency improvement, lesson preparation time reduction, value of academic research support functions
 | Perceived Ease of Use | Interface friendliness, operational logic consistency, acceptability of learning cost
 | Technology Fit | Fit with disciplinary teaching scenarios, support for personalized learning design
Individual Characteristic Dimension | Age/Professional Title | Technology openness of junior faculty, characteristics of digital natives, differences in career stage
 | Self-Efficacy | Confidence in technical operation, perception of problem-solving ability, learning adaptability
 | Disciplinary Background | Technology affinity in STEM fields, conservatism in Humanities & Social Sciences, balanced characteristics in interdisciplinary contexts
Organizational Environment Dimension | Institution Type Characteristics | Technology acceptance in normal universities, perceived usefulness in research-intensive universities, differences in organizational culture
 | Technology Maturity | Optimized perception of technological evolution, depth of technology contact, perception of technology stability
 | Resource Support System | Completeness of skill training, accessibility of professional technical services, hardware facility support
Behavioral Intention Dimension | Intention Strength | Proactiveness in technology adoption, continuous use intention, recommendation intention
 | Intervention Responsiveness | Acceptance of skill training, adaptability in teaching application, willingness to master programming skills
 | Disciplinary Application Differences | High acceptance in Computer Science, relative conservatism in Publishing, dispersed characteristics across disciplines
Theoretical Framework Dimension | TAM Framework Elements | Core constructs of Perceived Usefulness and Perceived Ease of Use, driven by technology characteristics
 | UTAUT Moderators | Moderating effects of age, experience, organizational support, etc.; situational dependency
 | TPB Normative Factors | Subjective norms, behavioral attitudes, perceived behavioral control
 | Integrated Multiple Theories | Multiple perspectives such as Diffusion of Innovations, Activity Theory, Self-Efficacy
Intervention Effect Dimension | Skill Training Effect | Acceptance of operational training, personalized technology integration, support from learning communities
 | Teaching Application Effect | Adaptability in actual teaching application, differences in cross-disciplinary application, contextualized application
 | Language Skill Effect | Willingness to master programming languages, depth of technical understanding, dual response mode
 | Discipline-Customized Intervention | High responsiveness in Computer Science, concentrated responsiveness in Education, low responsiveness in Publishing
Moderating Effect Dimension | Disciplinary Culture Moderator | Functional identity with technology in STEM, content fit in Humanities & Social Sciences
 | Organizational Environment Moderator | Differences in resource investment, shaping of organizational culture, characteristics of teaching tasks
 | Development Stage Moderator | Willingness for exploration in early career, prudent integration in mid-career, resistance to change in late career
 | Technology Evolution Moderator | Dynamic adjustment of technology perception, optimized perception of maturity, adaptability to evolution

Cluster analysis identified two user types, teaching-assistance oriented and research-support oriented, with significant differences in characteristics such as technical background and disciplinary distribution. Key behavioral variables were derived by combining correlation and cluster analysis, as shown in Table 3.

Table 3. Key Behavioral Variables for University Teachers’ GenAI Acceptance Based on Log Data

Variable Category | Key Variable | Role in User Classification
Usage Intensity | Total Usage Frequency | Core metric for distinguishing user activity levels
 | Total Usage Duration | Key metric for measuring investment level
 | Active Days | Important variable reflecting usage persistence
 | Consecutive Usage Weeks | Assessing user loyalty and stability
Function Usage Pattern Variables | Function Breadth | Number of function categories used
 | Teaching Function Preference | Frequency of lesson preparation tool use; frequency of grading tool use
 | Research Function Preference | Frequency of academic writing tool use; frequency of productivity tool use
 | Knowledge Management Function | Frequency of personal knowledge base use
Temporal Behavior Pattern Variables | Weekday/Weekend Usage Ratio | Distinguishing work-oriented vs. flexible work modes
 | Primary Usage Time Encoding | Reflecting usage time preferences and work habits
Key Discriminatory Variables for User Classification | Function Preference | Proportion of teaching function use; proportion of research function use
 | Technical Background | Technical background level
 | Time Pattern | Distribution of usage times
 | Disciplinary Distribution | Disciplinary field
Behavioral Evolution Trend Variables | Usage Stability | Consecutive usage weeks
 | Function Explorability | Growth trend of function categories
Teacher Characteristic Moderator Variables | Technical Ability | Technical background
 | Professional Characteristic | Professional title level
 | Disciplinary Background | Disciplinary field
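The log-based user classification behind Table 3 can be illustrated with a minimal two-group clustering sketch. The study does not report its exact algorithm or feature values; the k-means implementation and the standardized function-preference shares below are assumptions for illustration only (a real analysis would likely use standard tooling such as scikit-learn).

```python
import numpy as np

def kmeans(X, k=2, iters=50, seed=0):
    """Minimal k-means: group teachers by standardized log-behavior features."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each teacher to the nearest cluster center.
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Fabricated features: [share of teaching-function use, share of research-function use]
X = np.array([[0.90, 0.10], [0.80, 0.20], [0.85, 0.15],   # teaching-oriented profile
              [0.20, 0.80], [0.10, 0.90], [0.15, 0.85]])  # research-oriented profile
labels, _ = kmeans(X, k=2)
print(labels)
```

With clearly separated profiles like these, the first three teachers and the last three end up in opposite clusters, mirroring the teaching-assisted vs. research-supported split reported above.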

The construction of the theoretical model framework was based on the meta-analysis, selecting UTAUT2, Affective-Cognitive, and Self-Efficacy as the core theoretical frameworks. The mapping of core variables is shown in Table 4.

Table 4. Core Theoretical Foundations and Variable Mapping

Theoretical Source | Core Construct | Operational Definition in This Study
UTAUT2 | Performance Expectancy | Replaces traditional “perceived usefulness”; refers to teachers’ expectation that GenAI will enhance teaching/research performance
 | Effort Expectancy | Replaces traditional “perceived ease of use”; refers to teachers’ expectation of the effort required to learn/use GenAI
 | Social Influence | Expectations and pressure from colleagues, students, and management regarding GenAI use
 | Facilitating Conditions | Organizational support such as technical facilities, training, and the policy environment provided by the institution
Affective-Cognitive | Affective Attitude | Tendency toward emotional experience (positive/negative) regarding GenAI
 | Emotion Regulation Ability | Ability to manage negative emotions (anxiety, frustration) arising from technology use
 | Affective Memory | Influence of past successful/failed experiences with technology on current decisions via emotional memory
Self-Efficacy | Technical Operation Self-Efficacy | Confidence in mastering operational skills for GenAI tools
 | Teaching Application Self-Efficacy | Confidence in integrating GenAI into teaching practice
 | Academic Innovation Self-Efficacy | Confidence in using GenAI to assist research innovation

Redundancies across different models were removed, and core integrated pathways were constructed, as shown in Table 5.

Table 5. Theoretical Integration Pathways and Mechanism Decomposition

Integrated Pathway | Mechanism of Action | Related Variables
Affective-Cognitive Integration Pathway | Affective Filtering Effect: affective attitude moderates the formation of Performance and Effort Expectancy | Affective Attitude → Performance Expectancy; Affective Attitude → Effort Expectancy
 | Emotion Regulation Mechanism: emotion regulation ability buffers the negative impact of technical anxiety on usage intention | Emotion Regulation Ability → Technical Anxiety → Usage Intention
 | Affective Memory Accumulation: past success/failure experiences influence current self-efficacy via affective memory | Affective Memory → Self-Efficacy → Usage Intention
Self-Efficacy Intervention Pathway | Capability Belief Reinforcement: self-efficacy enhances cognitive judgment of technology value | Self-Efficacy → Performance Expectancy → Usage Intention
 | Effort Expectancy Buffering: high self-efficacy reduces sensitivity to technical complexity | Self-Efficacy → Effort Expectancy → Usage Intention
 | Threshold Triggering Effect: the combination of self-efficacy and other psychological variables triggers an adoption tipping point | Self-Efficacy & Innovation Tendency → Usage Intention
Organizational Environment Moderating Pathway | Resource Empowerment Mechanism: facilitating conditions indirectly influence usage intention by enhancing self-efficacy | Facilitating Conditions → Self-Efficacy → Usage Intention
 | Social Norm Internalization: social influence exerts its effect through multiple mediators (affective attitude and self-efficacy) | Social Influence → Affective Attitude/Self-Efficacy → Usage Intention

When decomposing the variable pathways, a multi-level organizational structure was considered. This decomposition adopted the TOE framework, categorizing variables into individual cognition, organizational environment, and technological characteristics, as detailed in Table 6.

Table 6. Multi-level Variable System Decomposition Based on the TOE Framework

Variable Level | Variable Category | Specific Variables | Measurement Method
Individual Cognition | Rational Cognitive Variables | Performance Expectancy, Effort Expectancy | Likert Scale
 | Affective Experience Variables | Affective Attitude, Technical Anxiety, Emotion Regulation Ability | Affective Scale, Regulation Ability Scale
 | Capability Belief Variables | Technical Operation Self-Efficacy, Teaching Application Self-Efficacy, Academic Innovation Self-Efficacy | Self-Efficacy Scale
Organizational Environment | Resource Support Variables | Technical facilities, Training resources, Time guarantee | Organizational Support Scale
 | Social Norm Variables | Colleague influence, Student expectations, Leadership support | Social Influence Scale
 | Policy Environment Variables | Usage guidelines, Incentive mechanisms, Ethical norms | Policy Perception Scale
Technological Characteristics | System Quality Variables | Technology Maturity, Functional completeness, Interface friendliness | Technology Assessment Scale
 | Risk Perception Variables | Data security, Academic integrity, Job replacement risk | Risk Perception Scale

Within this theoretical model, the mechanisms influencing university teachers’ GenAI acceptance primarily consist of four parts: the Affective Amplification Effect, Efficacy Buffering Effect, Organizational Empowerment Effect, and Group Difference Effect. The manifestation of each sub-model within the integrated model is shown in Table 7.

Table 7. Action Mechanisms of the Integrated Model

Mechanism of Action | Theoretical Source | Manifestation in the Integrated Model
Affective Amplification Effect | Affective-Cognitive Theory | Identical technological features lead to completely opposite acceptance tendencies due to differences in affective state
Efficacy Buffering Effect | Self-Efficacy Theory | Teachers with high self-efficacy exhibit higher tolerance for technological complexity
Organizational Empowerment Effect | UTAUT2 Extension | Organizational support promotes adoption by lowering barriers to use and boosting confidence
Group Difference Effect | Contextualized Adaptation | The weights of influencing factors differ among teachers of different disciplines and professional ranks

Based on the theoretical integration framework, these indicators were integrated into a hierarchical structure. The target layer was defined as University Teachers’ GenAI Technology Acceptance Degree. The criterion layer initially included Rational Cognition, Affective Attitude, Organizational Environment, Self-Efficacy, and Personal Behavioral Intention.
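In such a hierarchy, an indicator’s global importance is the product of its dimension (criterion) weight and its local weight within that dimension. As a sketch, the fragment below composes global weights from the dimension and indicator weights reported for the final model in the abstract; indicators whose weights the abstract does not report are omitted, so the local weights shown do not sum to 1.

```python
# Global weight of an indicator = criterion weight x local indicator weight.
# Values are the weights reported in the abstract for the final model; the
# partial indicator lists are an illustration, not the complete system.
model = {
    "Rational Cognition": (0.216, {"Teaching Ethics": 0.356,
                                   "Academic Integrity": 0.208}),
    "Technology Quality": (0.210, {"System Operation Quality": 0.390}),
    "Organizational Environment": (0.079, {}),  # no indicator weights reported
}

global_weights = {
    ind: round(dim_w * local_w, 4)
    for dim_w, inds in model.values()
    for ind, local_w in inds.items()
}
print(global_weights)
# e.g. Teaching Ethics: 0.216 * 0.356 = 0.0769 at the global level
```

This composition explains why Teaching Ethics, despite a large local weight (0.356), carries a global weight of roughly 0.077 once its dimension weight is applied.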

2.2 Delphi Study Design and Implementation

2.2.1 Expert Panel Formation and Quality Control

The expert panel was formed through a ‘purposive-maximum variation’ sampling strategy across four predefined dimensions to ensure ‘cognitive comprehensiveness’ (Hasson et al., 2000): (1) Theory (experts in TAM, UTAUT, educational psychology), (2) Technology (AI developers, learning analytics specialists), (3) Institution (university deans, teaching development center directors), and (4) Discipline (covering STEM, Humanities, Social Sciences, and Arts). This structured approach ensured that the panel represented a wide spectrum of perspectives critical to the complex issue at hand.

Invitations were sent to 20 experts, resulting in 18 participants. The final panel comprised: Discipline/Field: Educational Technology (7, 38.9%), Artificial Intelligence (4, 22.2%), Higher Education Management (5, 27.8%), Discipline-Specific Pedagogy (2, 11.1%). Professional Title: Professor/Researcher (10, 55.6%), Associate Professor/Associate Researcher (6, 33.3%), Other Senior Titles (2, 11.1%). Region: Eastern China (9), Central China (6), Western China (3). Gender: Male (11), Female (7).

2.2.2 First Round Delphi Survey and Indicator Revision

The first-round questionnaire included both structured rating scales (a 1-9 Likert scale on importance) and open-ended sections soliciting comments on the clarity, relevance, and completeness of the initial indicator set (e.g., ‘Are there any important factors missing?’ ‘Please suggest alternative phrasing for any ambiguous indicator’). This qualitative data was crucial for the subsequent revisions.

(1) Design and Implementation of the First Round Survey

The first round combined structured importance ratings with open-ended questions to collect preliminary expert opinions. The questionnaire covered all relevant dimensions, asking experts to rate the importance of indicators and provide feedback; the instructions explained the study’s purpose. The consultation form was used to screen primary dimensions and specific indicators, and expert background information and self-assessed familiarity were also collected.

(2) Collection of Expert Modification Suggestions

The mean scores for the 5 initial dimensions ranged from 5.5 to 7, with medians between 7 and 7.5, indicating most indicators were rated as “important” or above. The Self-Efficacy, Rational Cognition, and Organizational Environment dimensions showed relatively high stability, whereas the Affective Attitude and initial Behavior dimensions had lower consistency.

To understand the consistency of ratings within each dimension, the dispersion of indicators was ranked, as shown in Figure 1. Indicators like Functional Preference, Temporal Behavior Pattern, and Function Depth within the Behavior dimension, and Emotion Management within the Affective Attitude dimension, showed significant dispersion, consistent with the dimension-level consistency check.


Figure 1. Ranking of Indicator Dispersion (Based on Standard Deviation)

To ensure result validity, the coefficient of variation (CV) was calculated, as shown in Figure 2. The ranking of indicators by CV was relatively consistent with the dispersion ranking, indicating some divergence among the experts in their evaluations.
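The dispersion and CV screening described above can be sketched as follows. The rating matrix here is fabricated for illustration, and the indicator codes are placeholders; the study’s actual data underlie Figures 1 and 2.

```python
import numpy as np

# Hypothetical first-round ratings: rows = 18 experts, columns = indicators, 1-9 scale.
rng = np.random.default_rng(1)
ratings = rng.integers(4, 10, size=(18, 5)).astype(float)
indicators = ["RC-01", "EA-04", "BP-03", "OE-02", "SE-01"]  # placeholder codes

mean = ratings.mean(axis=0)
sd = ratings.std(axis=0, ddof=1)   # dispersion, as ranked in Figure 1
cv = sd / mean                     # coefficient of variation, as in Figure 2

# Rank indicators from most to least divergent expert opinion.
order = np.argsort(-cv)
for i in order:
    print(f"{indicators[i]}: mean={mean[i]:.2f}, SD={sd[i]:.2f}, CV={cv[i]:.3f}")
```

Because CV normalizes the standard deviation by the mean, it lets indicators rated at different levels of importance be compared on a common divergence scale, which is why the two rankings in Figures 1 and 2 largely agree.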


Figure 2. Coefficient of Variation for Expert Ratings

To assess rating quality, the distribution of expert ratings was analyzed on the basis of standard deviation, as shown in Figure 3. Among the 18 experts, only one provided relatively extreme ratings; background analysis revealed that this expert’s primary role was teaching management. This suggested that the expert ratings were generally acceptable and that the main issues lay in the design of some indicators, which failed to accommodate the diverse characteristics of the university teacher population.
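One simple way to flag such an extreme rater, sketched below, is to compare each expert’s mean absolute deviation from the panel median against the panel distribution. The z-score rule and all rating values are assumptions for illustration; the study reports only that the screening was based on standard deviation.

```python
import numpy as np

def flag_extreme_experts(ratings, z_thresh=2.0):
    """Flag experts whose ratings sit unusually far from the panel median,
    using each expert's mean absolute deviation across indicators."""
    med = np.median(ratings, axis=0)          # panel consensus per indicator
    dev = np.abs(ratings - med).mean(axis=1)  # each expert's average deviation
    z = (dev - dev.mean()) / dev.std(ddof=1)
    return np.where(z > z_thresh)[0]

# 17 moderate experts plus one who rates everything at the scale extremes.
rng = np.random.default_rng(2)
panel = rng.integers(6, 9, size=(17, 10)).astype(float)
outlier = np.where(np.arange(10) % 2 == 0, 1.0, 9.0)[None, :]
ratings = np.vstack([panel, outlier])
print(flag_extreme_experts(ratings))   # should flag only the last expert (index 17)
```

Flagged experts are not automatically excluded; as in the study, their background is examined first to judge whether the divergence reflects a distinct professional perspective or poor-quality ratings.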


Figure 3. Distribution of Expert Rating Extremity (Based on Standard Deviation)

To ensure the scientific soundness and rationality of the indicator design, the score distributions for high-dispersion indicators were analyzed. The distribution for the Behavior dimension indicators approximated a normal distribution, albeit slightly left-skewed, suggesting that the basic design of these indicators was relatively reasonable. In contrast, the ratings for the Affective Attitude and Organizational Environment indicators showed clear polarization, indicating structural issues with the indicators related to emotion management and technology maturity.


Figure 4. Score Distributions for High-Dispersion Indicators

Based on the comprehensive analysis above, the initial questionnaire was found to have several core issues, as detailed in Table 8. Firstly, conceptual overlap and duplication existed between different indicators, making it difficult for experts to distinguish them clearly, thereby reducing the discriminant validity and rating consistency of the indicator system.

Secondly, highly heterogeneous concepts not belonging to the same logical level were forced into the same indicator or dimension, compromising the logical consistency and theoretical clarity of the system.

Thirdly, the measurement of some indicators over-relied on ex-post objective behavioral logs, limiting their predictive power and explanatory scope at the pre-behavioral intention stage.

Fourthly, the system failed to comprehensively cover important areas within the research domain, particularly overlooking the long-term impact on teacher development. Concurrently, the importance of certain indicators with significant socio-ethical value was not sufficiently emphasized.

Finally, inconsistent terminology usage throughout the system, or inconsistent connotations for the same term across different indicators, created obstacles when aligning with the cited foundational theories.

Table 8. Analysis of Issues and Causes from First-Round Expert Ratings

Problem Description | Involved Indicators/Dimensions
Overlap and duplication in theoretical constructs and measurement content between different indicators, reducing discriminant validity and rating consistency. | EA-04 (Affective Attitude) and EA-05 (Emotion Management); sub-dimensions of RC-03 and the SE (Self-Efficacy) dimension
Forcing highly heterogeneous concepts from different logical levels into the same indicator/dimension, damaging logical consistency and theoretical clarity. | OE-02, OE-03, OE-04 lumped under “Organizational Culture”; OE-07 (Technology Maturity) placed under the “Organizational Environment” dimension
Over-reliance on ex-post objective behavioral logs for measuring some indicators, limiting predictive power and explanatory scope. | BP-03 (Function Preference Pattern) relying solely on “cluster analysis results”; RC-04 (Academic Integrity) measurement content focusing on passive “detection”
Failure to fully cover important aspects of the research domain, especially the long-term impact on teachers; under-emphasis on important socio-ethical indicators. | Lack of “Teacher Professional Development” indicators; perceived underestimation of the importance of EA-03c (Digital Equity)
Inconsistent terminology for similar psychological/behavioral concepts, or inconsistent connotations, creating obstacles when aligning with foundational theories. | “Control” and “Resilience” in RC-03; “Efficacy” in the SE series vs. RC-03
Weak connection between the measurement content of some core indicators and the core research construct “technology acceptance and use.” | “Weekday/Weekend Usage Ratio” in BP-04

Based on the problem analysis, specific modification suggestions were proposed, as shown in Table 9.

Table 9. Specific Modification Suggestions and Rationale

Specific Modification Suggestion | Rationale and Expected Outcome
Merge EA-04 and EA-05 into a new indicator, EA-04 Affective Response, whose measurement integrates both original indicators. | Rationale: expert comments noted conceptual and measurement overlap. Outcome: eliminates redundancy.
Split OE-02 (Organizational Culture) into three independent secondary indicators: OE-02 Institution Type Characteristics (retained), OE-03 Leadership Support (retained), OE-04 Organizational Innovation Culture (retained). | Rationale: high concept heterogeneity. Outcome: purer connotation, clearer structure.
Remove OE-07 (Technology Maturity) from the OE dimension; create a new top-level dimension, “Technology System Quality” (TQ), comprising TQ-01, TQ-02, TQ-03 (from the original OE-07a/b/c). | Rationale: confusion from mixing objective technology attributes with organizational factors. Outcome: clearer theoretical framework.
Add a new indicator under RC or SE, RC-06 / SE-04 Professional Development Perception, measuring the impact of GenAI on teacher role transformation, skill upgrading, and career development confidence. | Rationale: lack of focus on teachers’ long-term development. Outcome: improved coverage; captures deep impact.
Add a “Function Preference Self-Report Scale” to the data sources for BP-03, triangulating with log data; revise the RC-04 measurement content to add “sense of responsibility for guiding student academic integrity”. | Rationale: BP-03 needs predictive measurement; RC-04 needs an active perspective. Outcome: enhanced explanatory power and comprehensiveness.
Emphasize the importance of EA-03c (Perceived Risk-Digital Equity) in subsequent Delphi rounds via written instructions or weight prompts. | Rationale: underestimated core socio-ethical indicator. Outcome: ensures reflection of a key value.
Standardize terminology across the system: change RC-03 “Self-Efficacy Control” to “Technology Fit Perception”; unify the “efficacy” indicators with the foundational theory. | Rationale: terminology inconsistency. Outcome: enhanced scientific rigor, reduced confusion.
Downgrade BP-04 (Temporal Behavior Pattern) from core to auxiliary status, or retain only the “primary usage period” component. | Rationale: weak link between the “Weekday/Weekend ratio” and the core construct. Outcome: sharper focus for the core system.

2.2.3 Second Round Delphi Survey and Consensus Formation

Based on the first-round feedback, the indicator system was revised as described above, resulting in a new system comprising 6 criterion dimensions (Rational Cognition, Affective Attitude, Self-Efficacy, Organizational Environment, Technology Quality, Behavioral Performance) and 27 specific indicators. A second round of expert survey was conducted using this revised system.

The results of the second round of ratings showed a high degree of convergence in expert opinion. The mean scores for all indicators were above 5, and 88.9% of indicators (24 out of 27) had a Coefficient of Variation (CV) below 0.25, meeting the standard for high consensus. Kendall’s W coefficient was 0.742 (p < 0.01), indicating significant consistency in expert ratings. BP-04 (Temporal Behavior Pattern) had the lowest mean score and consensus level, leading to its demotion from a core evaluation indicator to an auxiliary analytical variable. OE-02 (Institution Type Characteristics) also showed only moderate consensus. Overall, the revised indicator system structure was deemed reasonable with high expert recognition, suitable for proceeding to the weight-determination phase. The statistical results of the second round are summarized in Table 10.

Table 10. Statistical Results of the Second Round Delphi Ratings

Dimension | Indicator Code | Indicator Name | Mean | Std. Dev. | CV
Rational Cognition (RC) | RC-01 | Performance Expectancy | 8.6 | 0.68 | 0.079
| RC-02 | Effort Expectancy | 7.8 | 0.75 | 0.096
| RC-03 | Technology Fit Perception | 7.8 | 0.99 | 0.127
| RC-04 | Academic Integrity | 8.6 | 0.68 | 0.079
| RC-05 | Teaching Ethics | 8.1 | 0.86 | 0.106
| RC-06 | Professional Development Perception | 7.8 | 0.98 | 0.126
Affective Attitude (EA) | EA-01 | Perceived Trust | 8.6 | 0.68 | 0.079
| EA-02 | Perceived Anxiety | 6.8 | 0.92 | 0.135
| EA-03 | Perceived Risk | 8.2 | 0.92 | 0.112
| EA-04 | Affective Response | 7.5 | 0.68 | 0.091
Self-Efficacy (SE) | SE-01 | Teaching Application Self-Efficacy | 8.4 | 0.58 | 0.069
| SE-02 | Academic Innovation Self-Efficacy | 7.8 | 0.88 | 0.113
| SE-03 | Technical Operation Self-Efficacy | 7.7 | 0.92 | 0.119
Organizational Environment (OE) | OE-01 | Institutional Policy | 8.1 | 0.86 | 0.106
| OE-02 | Institution Type Characteristics | 6.1 | 0.86 | 0.141
| OE-03 | Leadership Support | 8.2 | 1.03 | 0.126
| OE-04 | Organizational Innovation Culture | 7.4 | 0.78 | 0.105
| OE-05 | Facilitating Conditions | 8.0 | 0.75 | 0.094
| OE-06 | Social Influence | 6.9 | 0.86 | 0.125
Technology Quality (TQ) | TQ-01 | System Operation Quality | 8.6 | 0.68 | 0.079
| TQ-02 | System Function Quality | 8.1 | 0.78 | 0.096
| TQ-03 | Application Scenario Fit | 7.8 | 0.98 | 0.126
Behavioral Performance (BP) | BP-01 | Usage Intensity | 6.8 | 0.92 | 0.135
| BP-02 | Function Breadth | 6.9 | 0.80 | 0.116
| BP-03 | Function Preference Pattern | 7.4 | 0.78 | 0.105
| BP-04 | Temporal Behavior Pattern | 5.1 | 0.86 | 0.169
| BP-05 | Continuous Usage Intention | 8.6 | 0.68 | 0.079

3. Determining Factor Weights Using the Analytic Hierarchy Process

Building upon the final indicator system established via the Delphi method, this chapter employs the Analytic Hierarchy Process (AHP) to scientifically determine the relative weights of each dimension and specific indicator. The AHP method quantifies and structures subjective expert judgments through pairwise comparisons, effectively reducing arbitrariness in decision-making and yielding more precise weight coefficients.

3.1 AHP Hierarchical Model Construction

Based on the final indicator system from the Delphi study, this research decomposes the complex decision problem of “University Teachers’ GenAI Technology Acceptance” into a three-level hierarchical structure (“Goal – Criteria – Indicators”). Pairwise comparisons at each level of the hierarchy use Saaty’s 1-9 scale.

Goal Layer Definition: University Teachers’ Generative AI Technology Acceptance refers to the behavioral intention and actual performance of sustained, voluntary, and effective use of GenAI tools by university teachers in teaching, research, and administrative contexts.

Criteria Layer Design: Based on the second-round Delphi consensus level and qualitative theme analysis, all six dimensions were retained.

Indicator Layer Determination: A total of 27 specific indicators were finalized based on the Delphi expert ratings.

3.2 Data Collection

The questionnaire was primarily developed and distributed via a professional online survey platform. This approach ensured standardized questionnaire structure, avoided potential errors associated with traditional paper-based pairwise comparison matrices, and facilitated rapid data collection and subsequent processing.

Detailed instructions accompanied the questionnaire, clearly explaining the principles of AHP, the meaning of the scales, and the logic of pairwise comparisons, ensuring experts understood the nature of their judgment task. The questionnaire collection phase lasted approximately two weeks.

For each recovered expert questionnaire, the judgment matrices constructed were subjected to rigorous consistency checks. The Consistency Ratio (CR) was calculated for each matrix. A CR value of less than 0.10 was considered acceptable. If the CR exceeded this threshold, the result was, where feasible, fed back to the respective expert for re-evaluation and correction of their judgments.

3.3 Consistency Check and Weight Calculation

For the AHP analysis, judgment matrices were constructed and weights were calculated using the ‘ahp’ package in R. A critical step in our analytical procedure was the handling of matrices with Consistency Ratios (CR) exceeding 0.10. In such cases, the specific pairwise comparison judgments from those experts were flagged and retrospectively reviewed. Where possible and appropriate (e.g., for minor exceedances), the rationale was discussed; however, no expert was entirely excluded based solely on CR to preserve the diversity of the panel. This decision and its potential impact are considered in the limitations section.
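The consistency check described here follows the standard eigenvector method. The study itself used the ‘ahp’ package in R; the Python sketch below is an illustrative equivalent in which weights are taken from the principal eigenvector and CR = CI / RI using Saaty’s random indices.

```python
import numpy as np

# Saaty's random consistency index (RI) by matrix order
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights_and_cr(matrix):
    """Priority weights (principal eigenvector) and consistency ratio of a
    reciprocal pairwise-comparison matrix; CR < 0.10 counts as acceptable."""
    A = np.asarray(matrix, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)              # principal eigenvalue index
    lam_max = eigvals.real[k]
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                          # normalize weights to sum to 1
    ci = (lam_max - n) / (n - 1)             # consistency index
    cr = ci / RI[n] if RI[n] else 0.0
    return w, cr

# Hypothetical, perfectly consistent 3x3 judgment matrix on the 1-9 scale
A = [[1, 2, 4],
     [1/2, 1, 2],
     [1/4, 1/2, 1]]
w, cr = ahp_weights_and_cr(A)
print(np.round(w, 4), round(cr, 4))  # weights ~[0.5714, 0.2857, 0.1429], CR ~0
```

A perfectly consistent matrix yields lambda_max = n and hence CR = 0; real expert judgments deviate, which is what the 0.10 threshold polices.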

3.3.1 Criteria Layer Check

Judgment matrices for the 6 criteria-layer dimensions (RC, EA, SE, OE, TQ, BP) were obtained from 18 experts, and every matrix passed the consistency check (CR < 0.10), indicating the validity of the collected weight data. The hierarchical single ordering (i.e., the weights calculated from each expert’s judgment matrix) was computed for each expert, and the weights from the 18 experts were aggregated, as shown in Table 11.

Table 11. Hierarchical Ordering of First-Level Indicators for University Teachers’ GenAI Acceptance Intention

Expert ID | RC Weight | EA Weight | SE Weight | OE Weight | TQ Weight | BP Weight | CR Value
1 | 0.2516 | 0.1602 | 0.1009 | 0.0643 | 0.3806 | 0.0425 | <0.1
2 | 0.1602 | 0.2516 | 0.1009 | 0.0643 | 0.3806 | 0.0425 | <0.1
3 | 0.2516 | 0.1009 | 0.1602 | 0.0643 | 0.3806 | 0.0425 | <0.1
4 | 0.1602 | 0.1009 | 0.2516 | 0.0643 | 0.3806 | 0.0425 | <0.1
5 | 0.2516 | 0.3806 | 0.1009 | 0.1602 | 0.0643 | 0.0425 | <0.1
6 | 0.1602 | 0.1009 | 0.0643 | 0.0425 | 0.3806 | 0.2516 | <0.1
7 | 0.2516 | 0.1009 | 0.1602 | 0.0643 | 0.3806 | 0.0425 | <0.1
8 | 0.2516 | 0.3806 | 0.1602 | 0.1009 | 0.0643 | 0.0425 | <0.1
9 | 0.2516 | 0.1602 | 0.3806 | 0.1009 | 0.0643 | 0.0425 | <0.1
10 | 0.1602 | 0.2516 | 0.0643 | 0.1009 | 0.0425 | 0.3806 | <0.1
11 | 0.1602 | 0.1009 | 0.0643 | 0.0425 | 0.2516 | 0.3806 | <0.1
12 | 0.2516 | 0.1009 | 0.1602 | 0.0643 | 0.0425 | 0.3806 | <0.1
13 | 0.2516 | 0.3806 | 0.1602 | 0.1009 | 0.0643 | 0.0425 | <0.1
14 | 0.2516 | 0.1602 | 0.1009 | 0.0643 | 0.3806 | 0.0425 | <0.1
15 | 0.2516 | 0.3806 | 0.1009 | 0.1602 | 0.0643 | 0.0425 | <0.1
16 | 0.1602 | 0.1009 | 0.2516 | 0.0643 | 0.0425 | 0.3806 | <0.1
17 | 0.2516 | 0.1009 | 0.1602 | 0.0643 | 0.0425 | 0.3806 | <0.1
18 | 0.1602 | 0.1009 | 0.0643 | 0.0425 | 0.3806 | 0.2516 | <0.1

The collected weight data indicated that the Rational Cognition (RC) dimension had the highest average weight (0.216), suggesting experts generally consider teachers’ rational assessment of AI technology to be the most critical factor influencing acceptance. Technology Quality (TQ) had a similar weight (0.210) to Rational Cognition, indicating that the quality of the technology system itself is almost as important as user cognition. The Organizational Environment (OE) dimension had a significantly lower weight (0.079), reflecting that, in the current early stage of GenAI application, individual factors are more decisive than organizational environmental factors.

However, the expert consensus analysis and weight distribution show that, while Rational Cognition was consistently recognized as a core driver, the weights for Technology Quality and Behavioral Performance varied substantially, reflecting a fundamental split in expert perspectives between “technological determinism” and “behavioral embodiment.” The weights for the RC dimension were relatively concentrated (0.160-0.252), whereas those for the OE dimension were generally low; its internal structure therefore also warranted review. The descriptive statistics for the criteria-layer weights are shown in Table 12.

Table 12. Descriptive Statistics for Criteria Layer Weights

Dimension | Mean Weight | Std. Deviation | Min | Max | Coeff. of Variation
RC | 0.21605556 | 0.0458491 | 0.1602 | 0.2516 | 0.212
EA | 0.18968333 | 0.11555853 | 0.1009 | 0.3806 | 0.609
SE | 0.14481667 | 0.08238865 | 0.0643 | 0.3806 | 0.569
OE | 0.07945556 | 0.03518284 | 0.0425 | 0.1602 | 0.443
TQ | 0.21043889 | 0.16322655 | 0.0425 | 0.3806 | 0.776
BP | 0.15965 | 0.15567112 | 0.0425 | 0.3806 | 0.975
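The descriptive statistics in Table 12 follow directly from the per-expert weights in Table 11. As a spot-check, the following sketch recomputes the RC row (mean, sample standard deviation, and CV) from the 18 RC weights transcribed from Table 11.

```python
import numpy as np

# The 18 experts' RC weights, transcribed from Table 11
rc = np.array([0.2516, 0.1602, 0.2516, 0.1602, 0.2516, 0.1602, 0.2516, 0.2516,
               0.2516, 0.1602, 0.1602, 0.2516, 0.2516, 0.2516, 0.2516, 0.1602,
               0.2516, 0.1602])

mean = rc.mean()
sd = rc.std(ddof=1)  # sample SD (n-1 denominator), as reported in Table 12
cv = sd / mean
print(round(mean, 8), round(sd, 7), round(cv, 3))
# agrees with the RC row of Table 12: mean 0.21605556, SD ~0.0458491, CV 0.212
```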

3.3.2 Indicator Layer Check

Data processing for the indicator layer was completed using R. Judgment matrices were constructed for the indicators under each criterion layer for all 18 experts, and the indicator layer weights were summarized.

(1) Rational Cognition Dimension

Pairwise comparison matrices were constructed for the 6 indicators within the Rational Cognition dimension. The weight statistics for each expert are shown in Table 13. Three experts had CR values significantly higher than 0.1.

Table 13. Hierarchical Ordering of Rational Cognition Indicators for University Teachers’ GenAI Acceptance Intention

Expert ID | RC-01 Weight | RC-02 Weight | RC-03 Weight | RC-04 Weight | RC-05 Weight | RC-06 Weight | CR Value
1 | 0.0803 | 0.0512 | 0.1364 | 0.2317 | 0.3639 | 0.1364 | 0.0118
2 | 0.0803 | 0.1364 | 0.0512 | 0.2317 | 0.3639 | 0.1364 | 0.0118
3 | 0.1364 | 0.0512 | 0.0803 | 0.2317 | 0.3639 | 0.1364 | 0.0118
4 | 0.1135 | 0.1837 | 0.1214 | 0.1011 | 0.3273 | 0.1530 | 0.1616
5 | 0.0803 | 0.0512 | 0.1364 | 0.2317 | 0.3639 | 0.1364 | 0.0118
6 | 0.0964 | 0.1670 | 0.0521 | 0.2207 | 0.3535 | 0.1103 | 0.0281
7 | 0.0803 | 0.0512 | 0.1364 | 0.2317 | 0.3639 | 0.1364 | 0.0118
8 | 0.0803 | 0.1364 | 0.0512 | 0.2317 | 0.3639 | 0.1364 | 0.0118
9 | 0.1364 | 0.0512 | 0.0803 | 0.2317 | 0.3639 | 0.1364 | 0.0118
10 | 0.1135 | 0.1837 | 0.1214 | 0.1011 | 0.3273 | 0.1530 | 0.1616
11 | 0.0803 | 0.0512 | 0.1364 | 0.2317 | 0.3639 | 0.1364 | 0.0118
12 | 0.0964 | 0.1670 | 0.0521 | 0.2207 | 0.3535 | 0.1103 | 0.0281
13 | 0.0803 | 0.0512 | 0.1364 | 0.2317 | 0.3639 | 0.1364 | 0.0118
14 | 0.0803 | 0.1364 | 0.0512 | 0.2317 | 0.3639 | 0.1364 | 0.0118
15 | 0.1364 | 0.0512 | 0.0803 | 0.2317 | 0.3639 | 0.1364 | 0.0118
16 | 0.1135 | 0.1837 | 0.1214 | 0.1011 | 0.3273 | 0.1530 | 0.1616
17 | 0.0803 | 0.0512 | 0.1364 | 0.2317 | 0.3639 | 0.1364 | 0.0118
18 | 0.0964 | 0.1670 | 0.0521 | 0.2207 | 0.3535 | 0.1103 | 0.0281

(2) Affective Attitude Dimension

Pairwise comparison matrices were constructed for the 4 indicators within the Affective Attitude dimension. The weight statistics for each expert are shown in Table 14. All judgment matrices had CR values less than 0.1.

Table 14. Hierarchical Ordering of Affective Attitude Indicators for University Teachers’ GenAI Acceptance Intention

Expert ID | EA-01 Weight | EA-02 Weight | EA-03 Weight | EA-04 Weight | CR Value
1 | 0.4668 | 0.1603 | 0.2776 | 0.0953 | 0.0115
2 | 0.2776 | 0.4668 | 0.0953 | 0.1603 | 0.0115
3 | 0.2776 | 0.1603 | 0.4668 | 0.0953 | 0.0115
4 | 0.3270 | 0.0794 | 0.1757 | 0.4179 | 0.0589
5 | 0.4668 | 0.1603 | 0.2776 | 0.0953 | 0.0115
6 | 0.2776 | 0.4668 | 0.0953 | 0.1603 | 0.0115
7 | 0.2776 | 0.1603 | 0.4668 | 0.0953 | 0.0115
8 | 0.3270 | 0.0794 | 0.1757 | 0.4179 | 0.0589
9 | 0.4668 | 0.1603 | 0.2776 | 0.0953 | 0.0115
10 | 0.2776 | 0.4668 | 0.0953 | 0.1603 | 0.0115
11 | 0.2776 | 0.1603 | 0.4668 | 0.0953 | 0.0115
12 | 0.3270 | 0.0794 | 0.1757 | 0.4179 | 0.0589
13 | 0.4668 | 0.1603 | 0.2776 | 0.0953 | 0.0115
14 | 0.2776 | 0.4668 | 0.0953 | 0.1603 | 0.0115
15 | 0.2776 | 0.1603 | 0.4668 | 0.0953 | 0.0115
16 | 0.3270 | 0.0794 | 0.1757 | 0.4179 | 0.0589
17 | 0.4668 | 0.1603 | 0.2776 | 0.0953 | 0.0115
18 | 0.2776 | 0.4668 | 0.0953 | 0.1603 | 0.0115

(3) Self-Efficacy Dimension

Pairwise comparison matrices were constructed for the 3 indicators within the Self-Efficacy dimension. The weight statistics for each expert are shown in Table 15. All judgment matrices had CR values less than 0.1.

Table 15. Hierarchical Ordering of Self-Efficacy Indicators for University Teachers’ GenAI Acceptance Intention

Expert ID | SE-01 Weight | SE-02 Weight | SE-03 Weight | CR Value
1 | 0.5396 | 0.2970 | 0.1634 | 0.0079
2 | 0.2970 | 0.5396 | 0.1634 | 0.0079
3 | 0.3325 | 0.1396 | 0.5278 | 0.0462
4 | 0.5396 | 0.2970 | 0.1634 | 0.0079
5 | 0.2970 | 0.5396 | 0.1634 | 0.0079
6 | 0.3325 | 0.1396 | 0.5278 | 0.0462
7 | 0.5396 | 0.2970 | 0.1634 | 0.0079
8 | 0.2970 | 0.5396 | 0.1634 | 0.0079
9 | 0.3325 | 0.1396 | 0.5278 | 0.0462
10 | 0.5396 | 0.2970 | 0.1634 | 0.0079
11 | 0.2970 | 0.5396 | 0.1634 | 0.0079
12 | 0.3325 | 0.1396 | 0.5278 | 0.0462
13 | 0.5396 | 0.2970 | 0.1634 | 0.0079
14 | 0.2970 | 0.5396 | 0.1634 | 0.0079
15 | 0.3325 | 0.1396 | 0.5278 | 0.0462
16 | 0.5396 | 0.2970 | 0.1634 | 0.0079
17 | 0.2970 | 0.5396 | 0.1634 | 0.0079
18 | 0.3325 | 0.1396 | 0.5278 | 0.0462

(4) Organizational Environment Dimension

Pairwise comparison matrices were constructed for the 6 indicators within the Organizational Environment dimension. The weight statistics for each expert are shown in Table 16. Only 5 judgment matrices had CR values less than 0.1.

Table 16. Hierarchical Ordering of Organizational Environment Indicators for University Teachers’ GenAI Acceptance Intention

Expert ID | OE-01 Weight | OE-02 Weight | OE-03 Weight | OE-04 Weight | OE-05 Weight | OE-06 Weight | CR Value
1 | 0.1562 | 0.0920 | 0.2529 | 0.0542 | 0.3343 | 0.1105 | 0.0316
2 | 0.1290 | 0.1824 | 0.0812 | 0.2458 | 0.0663 | 0.2952 | 0.1025
3 | 0.1410 | 0.0932 | 0.2395 | 0.1318 | 0.2283 | 0.1661 | 0.3211
4 | 0.1345 | 0.2397 | 0.0588 | 0.2397 | 0.1372 | 0.1902 | 0.2234
5 | 0.1562 | 0.0920 | 0.2529 | 0.0542 | 0.3343 | 0.1105 | 0.0316
6 | 0.1290 | 0.1824 | 0.0812 | 0.2458 | 0.0663 | 0.2952 | 0.1025
7 | 0.1410 | 0.0932 | 0.2395 | 0.1318 | 0.2283 | 0.1661 | 0.3211
8 | 0.1345 | 0.2397 | 0.0588 | 0.2397 | 0.1372 | 0.1902 | 0.2234
9 | 0.1562 | 0.0920 | 0.2529 | 0.0542 | 0.3343 | 0.1105 | 0.0316
10 | 0.1290 | 0.1824 | 0.0812 | 0.2458 | 0.0663 | 0.2952 | 0.1025
11 | 0.1410 | 0.0932 | 0.2395 | 0.1318 | 0.2283 | 0.1661 | 0.3211
12 | 0.1345 | 0.2397 | 0.0588 | 0.2397 | 0.1372 | 0.1902 | 0.2234
13 | 0.1562 | 0.0920 | 0.2529 | 0.0542 | 0.3343 | 0.1105 | 0.0316
14 | 0.1290 | 0.1824 | 0.0812 | 0.2458 | 0.0663 | 0.2952 | 0.1025
15 | 0.1410 | 0.0932 | 0.2395 | 0.1318 | 0.2283 | 0.1661 | 0.3211
16 | 0.1345 | 0.2397 | 0.0588 | 0.2397 | 0.1372 | 0.1902 | 0.2234
17 | 0.1562 | 0.0920 | 0.2529 | 0.0542 | 0.3343 | 0.1105 | 0.0316
18 | 0.1290 | 0.1824 | 0.0812 | 0.2458 | 0.0663 | 0.2952 | 0.1025

(5) Technology Quality Dimension

Pairwise comparison matrices were constructed for the 3 indicators within the Technology Quality dimension. The weight statistics for each expert are shown in Table 17. All judgment matrices had CR values less than 0.1.

Table 17. Hierarchical Ordering of Technology Quality Indicators for University Teachers’ GenAI Acceptance Intention

Expert ID | TQ-01 Weight | TQ-02 Weight | TQ-03 Weight | CR Value
1 | 0.5396 | 0.2970 | 0.1634 | 0.0079
2 | 0.2970 | 0.5396 | 0.1634 | 0.0079
3 | 0.3325 | 0.1396 | 0.5278 | 0.0462
4 | 0.5396 | 0.2970 | 0.1634 | 0.0079
5 | 0.2970 | 0.5396 | 0.1634 | 0.0079
6 | 0.3325 | 0.1396 | 0.5278 | 0.0462
7 | 0.5396 | 0.2970 | 0.1634 | 0.0079
8 | 0.2970 | 0.5396 | 0.1634 | 0.0079
9 | 0.3325 | 0.1396 | 0.5278 | 0.0462
10 | 0.5396 | 0.2970 | 0.1634 | 0.0079
11 | 0.2970 | 0.5396 | 0.1634 | 0.0079
12 | 0.3325 | 0.1396 | 0.5278 | 0.0462
13 | 0.5396 | 0.2970 | 0.1634 | 0.0079
14 | 0.2970 | 0.5396 | 0.1634 | 0.0079
15 | 0.3325 | 0.1396 | 0.5278 | 0.0462
16 | 0.5396 | 0.2970 | 0.1634 | 0.0079
17 | 0.2970 | 0.5396 | 0.1634 | 0.0079
18 | 0.3325 | 0.1396 | 0.5278 | 0.0462

(6) Behavioral Performance Dimension

Pairwise comparison matrices were constructed for the 5 indicators within the Behavioral Performance dimension. The weight statistics for each expert are shown in Table 18. Only 5 judgment matrices had CR values less than 0.1.

Table 18. Hierarchical Ordering of Behavioral Performance Indicators for University Teachers’ GenAI Acceptance Intention

Expert ID | BP-01 Weight | BP-02 Weight | BP-03 Weight | BP-04 Weight | BP-05 Weight | CR Value
1 | 0.1627 | 0.0990 | 0.2675 | 0.0653 | 0.4054 | 0.0227
2 | 0.1725 | 0.2615 | 0.1075 | 0.3742 | 0.0843 | 0.1418
3 | 0.1849 | 0.1125 | 0.2439 | 0.1292 | 0.3295 | 0.3737
4 | 0.1307 | 0.2148 | 0.1111 | 0.4296 | 0.1138 | 0.1917
5 | 0.1627 | 0.0990 | 0.2675 | 0.0653 | 0.4054 | 0.0227
6 | 0.1725 | 0.2615 | 0.1075 | 0.3742 | 0.0843 | 0.1418
7 | 0.1849 | 0.1125 | 0.2439 | 0.1292 | 0.3295 | 0.3737
8 | 0.1307 | 0.2148 | 0.1111 | 0.4296 | 0.1138 | 0.1917
9 | 0.1627 | 0.0990 | 0.2675 | 0.0653 | 0.4054 | 0.0227
10 | 0.1725 | 0.2615 | 0.1075 | 0.3742 | 0.0843 | 0.1418
11 | 0.1849 | 0.1125 | 0.2439 | 0.1292 | 0.3295 | 0.3737
12 | 0.1307 | 0.2148 | 0.1111 | 0.4296 | 0.1138 | 0.1917
13 | 0.1627 | 0.0990 | 0.2675 | 0.0653 | 0.4054 | 0.0227
14 | 0.1725 | 0.2615 | 0.1075 | 0.3742 | 0.0843 | 0.1418
15 | 0.1849 | 0.1125 | 0.2439 | 0.1292 | 0.3295 | 0.3737
16 | 0.1307 | 0.2148 | 0.1111 | 0.4296 | 0.1138 | 0.1917
17 | 0.1627 | 0.0990 | 0.2675 | 0.0653 | 0.4054 | 0.0227
18 | 0.1725 | 0.2615 | 0.1075 | 0.3742 | 0.0843 | 0.1418

3.3.3 Weight Calculation

The vast majority of judgment matrices passed the consistency check. The failure of some matrices in the Rational Cognition, Organizational Environment, and Behavioral Performance dimensions echoes the divergence observed in the criteria-layer analysis, and is consistent with the preliminary meta-analysis finding that university teachers’ willingness to accept GenAI varies significantly across teacher populations. This served as an important reference for determining expert weight coefficients in group decision-making. The summary of indicator-layer weights is presented in Table 19.

Table 19. Summary of Indicator Layer Weights for University Teachers’ GenAI Acceptance Intention

Indicator | Mean Weight | Std. Deviation | Min | Max | Coeff. of Variation
RC-01 | 0.09786667 | 0.02165423 | 0.0803 | 0.1364 | 0.221
RC-02 | 0.10678333 | 0.05894347 | 0.0512 | 0.1837 | 0.552
RC-03 | 0.09630000 | 0.03778347 | 0.0512 | 0.1364 | 0.392
RC-04 | 0.20810000 | 0.04941231 | 0.1011 | 0.2317 | 0.237
RC-05 | 0.35606667 | 0.01380251 | 0.3273 | 0.3639 | 0.039
RC-06 | 0.13481667 | 0.01289137 | 0.1103 | 0.1530 | 0.096
EA-01 | 0.34113333 | 0.08263596 | 0.2776 | 0.4668 | 0.242
EA-02 | 0.22746111 | 0.15618605 | 0.0794 | 0.4668 | 0.687
EA-03 | 0.24636111 | 0.14002559 | 0.0953 | 0.4668 | 0.568
EA-04 | 0.18504444 | 0.13115659 | 0.0953 | 0.4179 | 0.709
SE-01 | 0.38970000 | 0.11008308 | 0.2970 | 0.5396 | 0.282
SE-02 | 0.32540000 | 0.16929943 | 0.1396 | 0.5396 | 0.520
SE-03 | 0.28486667 | 0.17675996 | 0.1634 | 0.5278 | 0.621
OE-01 | 0.14044444 | 0.01095081 | 0.1290 | 0.1562 | 0.078
OE-02 | 0.15020000 | 0.06285249 | 0.0920 | 0.2397 | 0.418
OE-03 | 0.15909444 | 0.09088818 | 0.0588 | 0.2529 | 0.571
OE-04 | 0.16588889 | 0.08427598 | 0.0542 | 0.2458 | 0.508
OE-05 | 0.19250000 | 0.10779264 | 0.0663 | 0.3343 | 0.560
OE-06 | 0.19187222 | 0.07241967 | 0.1105 | 0.2952 | 0.377
TQ-01 | 0.38970000 | 0.11008308 | 0.2970 | 0.5396 | 0.282
TQ-02 | 0.32540000 | 0.16929943 | 0.1396 | 0.5396 | 0.520
TQ-03 | 0.28486667 | 0.17675996 | 0.1634 | 0.5278 | 0.621
BP-01 | 0.16324444 | 0.01961726 | 0.1307 | 0.1849 | 0.120
BP-02 | 0.17287222 | 0.07201753 | 0.0990 | 0.2615 | 0.417
BP-03 | 0.18305556 | 0.07658756 | 0.1075 | 0.2675 | 0.418
BP-04 | 0.24626111 | 0.15993422 | 0.0653 | 0.4296 | 0.649
BP-05 | 0.23453889 | 0.14414212 | 0.0843 | 0.4054 | 0.615
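In AHP, an indicator’s global weight is its criterion-layer weight multiplied by its local weight within that criterion. A minimal sketch using the rounded mean weights reported in Tables 12 and 19 for two of the top indicators (the dictionary layout and function name here are illustrative, not from the study):

```python
# Criterion-layer mean weights (Table 12, rounded)
criterion_w = {"RC": 0.2161, "TQ": 0.2104}

# Local indicator weights within each criterion (Table 19, rounded)
local_w = {("RC", "RC-05 Teaching Ethics"): 0.3561,
           ("TQ", "TQ-01 System Operation Quality"): 0.3897}

def global_weights(criterion_w, local_w):
    """Global weight = criterion-layer weight x local indicator weight."""
    return {name: criterion_w[dim] * w for (dim, name), w in local_w.items()}

for name, g in global_weights(criterion_w, local_w).items():
    print(f"{name}: {g:.4f}")
# Teaching Ethics and System Operation Quality each carry roughly 0.077-0.082
# of the total weight, making them the heaviest individual indicators.
```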

4. Discussion and Analysis

This chapter systematically reviews and deeply discusses the aforementioned research results, explaining their theoretical implications and practical significance, and addressing the questions raised at the beginning of this study.

4.1 Hierarchical Structure Analysis of Core Influencing Factors

The weight system constructed in this study clearly reveals the hierarchy of factors influencing university teachers’ GenAI technology acceptance:

Key Driving Factors (High-Weight Cluster): Primarily include Rational Cognition (RC) and Technology Quality (TQ). This indicates that teachers’ decision to accept GenAI is primarily based on a rational calculation of pros and cons (especially concerning teaching ethics and academic integrity), while the stability, reliability, and accuracy of the technological tool itself are almost equally important prerequisites. This goes beyond the traditional TAM model’s emphasis solely on “perceptions,” elevating objective technological quality and subjective ethical judgment to core status.

Psychological Mediating Factors (Medium-Weight Cluster): Include Affective Attitude (EA) and Self-Efficacy (SE). Among these, “Perceived Trust” is the affective cornerstone, and “Teaching Application Self-Efficacy” is the core capability belief. Following the high-weighted rational and technological assessments, these act as internal psychological mechanisms that profoundly influence the final behavioral intention, serving as key “catalysts” or “buffers.”

Foundational Supporting Factors (Low-Weight Cluster): Mainly consist of the Organizational Environment (OE) and some Behavioral Performance (BP) indicators. This does not imply they are unimportant but rather reveals that, in the current early stage of GenAI application, individual cognition, affect, and capability beliefs have more direct explanatory power than macro-level organizational support. The organizational environment currently functions more as a supportive, enabling background condition.

4.2 Unique Mechanisms of GenAI Technology Acceptance

Compared to previous studies on the acceptance of general information technologies, this research finds that the acceptance mechanism for GenAI exhibits significant particularities:

Centrality of Ethical Concerns: The extremely high weights of “Teaching Ethics” and “Academic Integrity” highlight the impact of GenAI’s “content generation” on the essence of education. In the acceptance process, teachers are not merely “users” but also “gatekeepers of education,” and their decisions are imbued with a strong sense of professional responsibility and ethical consideration.

Dual-Core Drive of “Technology-Human”: The near-top weights of Technology Quality and Rational Cognition constitute a “Technology-Human” dual-core model. This implies that promoting adoption effectively requires the synergistic advancement of both improving teacher cognition and enhancing the technology, rather than relying on either alone.

Buffering Value of Self-Efficacy: The study finds that high self-efficacy can buffer the negative perceptions brought about by technological complexity. This explains why adoption intentions differ significantly among teachers under the same technological conditions, providing a strong theoretical basis for conducting targeted skills training.

Indirect and Long-Term Nature of Organizational Influence: The current low weight of the organizational environment may stem from the technology still being in the early promotion stage, where the full impact of institutions and culture has not yet fully manifested. It can be anticipated that as technology penetration deepens, organizational policy guidance, resource investment, and cultural shaping will become increasingly important.

4.3 Management Implications and Practical Recommendations

Based on the above findings, this study proposes the following practical recommendations for different stakeholders:

For University Administrators and Teacher Development Institutions:

Strategy Formulation: Priority should be given to addressing teachers’ concerns regarding teaching ethics and academic integrity, developing clear and feasible AI usage guidelines and case studies, rather than merely touting efficiency gains.

Training Design: Teacher training should go beyond operational skills and focus on enhancing “Teaching Application Self-Efficacy,” i.e., how to deeply and organically integrate AI into curriculum design, teacher-student interaction, and assessment feedback.

Environment Cultivation: Efforts should be made to foster an organizational culture of trust and technology affinity, reducing teacher anxiety and risk perception by showcasing success stories and encouraging experience sharing.

For Educational Technology Developers:

Product Optimization: System stability, output accuracy, and response speed must be prioritized, as these form the technical foundation for establishing teachers’ “Perceived Trust.”

Function Design: It is necessary to deeply understand the teaching scenarios of different disciplines, enhance the product’s scenario adaptation capability, and ensure a friendly interface and a gentle learning curve.

5. Conclusion and Outlook

5.1 Research Conclusions

This study, through a hybrid research paradigm of “Delphi technique – AHP,” systematically constructed the core influencing factors system for university teachers’ GenAI technology acceptance and precisely quantified their relative importance. The main conclusions are as follows:

University teachers’ GenAI technology acceptance is a complex construct jointly influenced by six dimensions of factors: rational cognition, affective attitude, self-efficacy, organizational environment, technology quality, and behavioral performance.

Among these factors, rational cognition (especially teaching ethics) and the maturity of the technology system itself are the most critical driving factors at the current stage.

Teachers’ affective trust and confidence in applying AI to teaching scenarios are important psychological mediators affecting their final decision-making.

The weight system comprising 6 dimensions and 27 indicators constructed in this study provides a hierarchical and operable theoretical model and practical tool for understanding and intervening in university teachers’ GenAI acceptance behavior.

5.2 Research Limitations and Future Outlook

This study also has some limitations, which point the way for future research:

Sample Limitations: The experts in this study were primarily from China. Although they covered multiple fields, the generalizability of the conclusions across cultural contexts needs further verification. Future research could conduct cross-national comparative studies.

Data Characteristics: The weight calculation primarily relies on experts’ subjective judgments. Although scientific checks were passed, subjectivity cannot be completely avoided. Future research could incorporate objective behavioral big data to calibrate and validate the weight system.

Dynamic Perspective: This study is cross-sectional. GenAI technology and its application scenarios are evolving rapidly, and teachers’ attitudes and cognition are also dynamic. Future research could adopt longitudinal tracking studies to reveal the dynamic changes in the weights of influencing factors.

Despite the aforementioned limitations, this study, as an exploratory and explanatory work, provides a powerful analytical framework and empirical evidence for understanding technology acceptance behaviors of university teachers in the AI era, holding positive reference value for promoting the deep integration of GenAI and higher education.

