Artificial Intelligence: Ethical Concerns, Trust, and Risk
Recent public access to generative AI built on large language models has rapidly increased experimentation with AI across a wide range of activities. This rapid adoption, however, can raise ethical concerns and affect users' perceived AI trust and perceived AI risk. This paper aims to determine the ethical variables that are influential in explaining users' perceived trust and risk in using AI. We developed an instrument with five constructs: 1) Ethical Concerns: Workplace, 2) Ethical Concerns: Mastery/Fairness, 3) Ethical Concerns: Social/Behavior, 4) Perceived AI Trust, and 5) Perceived AI Risk. We administered the instrument to 160 undergraduate students in Information Technology, Computer Science, and Business. Multiple regression analysis was used to build two models. The first model identified which of the predictor variables (Ethical Concerns: Workplace, Mastery/Fairness, and Social/Behavior) most significantly influence users' perceived AI trust; the second model identified which of the same predictor variables most significantly influence users' perceived AI risk. The results indicated that all three predictor variables (Ethical Concerns: Workplace, Ethical Concerns: Mastery/Fairness, and Ethical Concerns: Social/Behavior) contributed significantly to both users' perceived AI trust and users' perceived AI risk. Implications of the findings are discussed.
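The two-model regression design described above can be sketched as follows. This is a minimal illustration with entirely synthetic data: the variable names, scale means, and coefficient values are assumptions for demonstration, not the authors' actual survey items or results. The same three ethical-concern predictors are regressed on two different outcomes, perceived AI trust and perceived AI risk.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 160  # sample size matching the study's 160 respondents

# Synthetic predictor composites (hypothetical Likert-scale scores)
workplace = rng.normal(3.5, 0.8, n)
mastery_fairness = rng.normal(3.2, 0.9, n)
social_behavior = rng.normal(3.8, 0.7, n)

# Design matrix with an intercept column plus the three predictors
X = np.column_stack([np.ones(n), workplace, mastery_fairness, social_behavior])

# Synthetic outcomes constructed so that all three predictors contribute
trust = (1.0 + 0.4 * workplace + 0.3 * mastery_fairness
         + 0.2 * social_behavior + rng.normal(0, 0.5, n))
risk = (0.5 + 0.3 * workplace + 0.4 * mastery_fairness
        + 0.3 * social_behavior + rng.normal(0, 0.5, n))

# Model 1: ethical concerns -> perceived AI trust
beta_trust, *_ = np.linalg.lstsq(X, trust, rcond=None)
# Model 2: ethical concerns -> perceived AI risk
beta_risk, *_ = np.linalg.lstsq(X, risk, rcond=None)

print("Trust model coefficients:", np.round(beta_trust, 2))
print("Risk model coefficients: ", np.round(beta_risk, 2))
```

In practice, a statistical package (e.g., SPSS, R, or Python's statsmodels) would also report standard errors, t-statistics, and p-values for each coefficient, which is how the significance of each predictor would be judged; the least-squares fit here only recovers the point estimates.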