Information System Cognitive Bias Classifications and Fairness In Machine Learning: Systematic Review Using Large Language Models
The objective of this systematic review is to (1) gather relevant prior work that classifies known human-introduced cognitive biases, and their bias-reduction methods, as they exist in Machine Learning (ML) across the three phases of the ML process: PRE-processing, the gathering of data; IN-processing, the model generation; and POST-processing, the results dissemination; (2) use a Large Language Model (LLM) to aid in classifying the results; and (3) provide a novel model for future systematic literature reviews. This work further seeks to identify cognitive biases and their reduction methods within all phases of ML. PRISMA statement methodologies were employed to prepare this systematic review. Following these guidelines, searches of electronic peer-reviewed sources were performed, refined, and documented, producing 2107 results that were then narrowed to 19 works covering the breadth of the research subject. These results showcase human-centric bias classification groupings and their mitigation methodologies, identified by location within the ML process. Furthermore, the use of an LLM proved to be an effective means of summarizing the results of the systematic review and provided a functional methodology for performing future reviews. Two novel artifacts are introduced: (1) the ALiS framework for using LLMs to aid in the development of a systematic literature review, and (2) a conceptual framework for researching information systems cognitive biases and their reduction methods throughout all phases of ML.