The adoption of artificial intelligence (AI) in retail has significantly transformed the industry, enabling more personalized services and more efficient operations. However, the rapid implementation of AI technologies raises ethical concerns, particularly regarding consumer privacy and fairness. This study analyzes the ethical challenges of AI applications in retail, explores how retailers can implement AI technologies ethically while remaining competitive, and provides recommendations on ethical AI practices. A descriptive survey design was used to collect data from 300 respondents across major e-commerce platforms, and the data were analyzed using descriptive statistics, including percentages and mean scores. Findings show a high level of concern among consumers about the amount of personal data collected by AI-driven retail applications, with many expressing a lack of trust in how their data are managed. Fairness emerged as another major issue: a majority of respondents believe AI systems do not treat consumers equally, raising concerns about algorithmic bias. The findings also indicate that AI can enhance business competitiveness and efficiency without compromising ethical principles such as data privacy and fairness. Data privacy and transparency were highlighted as critical areas for retailers, indicating strong demand for stricter data protection protocols and ongoing scrutiny of AI systems. The study concludes that retailers must prioritize transparency, fairness, and data protection when deploying AI systems, and recommends ensuring transparency in AI processes, conducting regular audits to address algorithmic bias, incorporating consumer feedback into AI development, and emphasizing consumer data privacy.
Keywords: data protection, fairness, algorithmic bias, artificial intelligence (AI), consumer privacy