
Revolutionizing AI Ethics: A Journey Toward Human-Centric Intelligence

Jan. 29, 2025

AI ethics has evolved from abstract philosophy to real-world urgency. But with bias, privacy, and superintelligence on the table, are we keeping up?


Introduction

The rise of artificial intelligence (AI) has sparked an equally transformative conversation about ethics, touching every corner of modern society. From privacy concerns to algorithmic fairness, AI ethics has emerged as a critical field, shaping how these technologies integrate into our lives. Over the past two decades, researchers have mapped its journey, identifying shifts in focus and persistent challenges. How have these developments reshaped our understanding of ethical AI? This article discusses the evolution of AI ethics, highlighting its milestones, dilemmas, and future directions.

The Phases of AI Ethics: From Theoretical to Human-Centric

Researchers have identified three pivotal phases marking the evolution of AI ethics. These phases reflect how ethical priorities have adapted alongside technological advances.

Figure 1. Usage of different ethics-related keywords in the literature

Phase 1: The Incubation Era (2004–2013)

In its infancy, AI ethics revolved around foundational questions of machine behavior and personhood. Dispersed across fields like machine ethics and information ethics, research focused on ensuring AI systems could act ethically toward humans and other machines.

During this period, thinkers like Luciano Floridi and Wendell Wallach laid the groundwork for ethical AI frameworks. However, AI was largely viewed as a tool rather than an autonomous entity.

Phase 2: Human-Like Machines (2014–2019)

This phase introduced a shift from theoretical ethics to practical applications. Researchers prioritized endowing AI systems with traits like fairness, accountability, and transparency. Breakthroughs such as AlphaGo's 2016 victory over Go champion Lee Sedol and the rollout of Waymo's robotaxi service showcased AI's growing sophistication.

Industry leaders proposed ethical frameworks aimed at building trust in AI systems. These efforts set the stage for aligning AI with human-like principles, making accountability and trustworthiness central to AI design.

Phase 3: Human-Centric AI (2020–Present)

Today, the focus has shifted to ensuring AI systems address societal needs equitably and inclusively. Public concerns about bias, misinformation, and autonomy have driven researchers toward creating AI technologies that prioritize transparency and fairness.

Innovative tools like the Contextualized Embedding Association Test (CEAT) have emerged to measure biases in AI systems, while global discussions on AI sentience and superintelligence have sparked calls for ethical safeguards. This phase underscores the importance of grounding AI ethics in human-centric values to rebuild public trust.

Persistent Challenges: The Ethical Dilemmas of AI

Despite advancements, key challenges continue to shape the discourse:

The Collingridge Dilemma

A core issue in AI ethics is the Collingridge Dilemma, which highlights the difficulty of controlling a technology before its consequences become apparent. Early in AI's development, regulation can seem unnecessary or risk stifling innovation; yet by the time AI's risks become evident, altering its trajectory may be costly or even impossible. AI ethicists struggle to balance innovation against precaution, making this an ongoing debate in governance frameworks.

Transparency and Explainability

AI decision-making often remains a black box, opaque even to its creators. Researchers have proposed approaches like explicability, a combination of transparency, accountability, and interpretability, but these solutions often face practical and financial constraints.
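One family of remedies is post-hoc explanation of a model's outputs. As a concrete illustration, and not a method named in the sources above, here is a minimal sketch of permutation importance, a standard model-agnostic technique: shuffle one input feature at a time and measure how much the model's accuracy drops, revealing which inputs the black box actually relies on. The model and data here are synthetic stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for an opaque model and its training data.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

rng = np.random.default_rng(0)
for j in range(X.shape[1]):
    X_perm = X.copy()
    rng.shuffle(X_perm[:, j])  # break feature j's relationship to the labels
    drop = baseline - model.score(X_perm, y)
    print(f"feature {j}: accuracy drop {drop:.3f}")
```

Diagnostics like this do not open the black box, but they give users and regulators a first-order account of what drives a model's decisions.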

Privacy Concerns

AI-driven technologies increasingly blur the boundaries of personal data. Questions about how the “right to be forgotten” applies to AI systems reveal the complexities of governing data retention and usage in the AI era.

Bias and Fairness

AI systems can perpetuate societal biases present in their training data. Tools like CEAT help detect these biases, but mitigating them and achieving equity requires systemic change both within AI development and in the societies AI serves.
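To make the detection idea concrete, the following is a minimal sketch of the effect-size statistic underlying association tests like CEAT and its predecessor WEAT, assuming embedding vectors are already available. The random vectors below are placeholders for real embeddings, and the word lists named in the comments are examples, not the published test sets; CEAT extends this idea by sampling each word's contextualized embedding across many sentences and aggregating the resulting effect sizes.

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # How much more similar is word vector w to attribute set A than to B?
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    # WEAT-style effect size: difference in mean association between the two
    # target sets, normalized by the pooled standard deviation.
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)

# Random vectors stand in for embeddings of the four word lists, e.g.
# X = male names, Y = female names, A = career words, B = family words.
rng = np.random.default_rng(0)
X, Y, A, B = ([rng.normal(size=300) for _ in range(8)] for _ in range(4))
print(effect_size(X, Y, A, B))
```

An effect size near zero indicates no measured association between the target and attribute sets; large positive or negative values flag a bias worth investigating.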

Algocracy and Human Enfeeblement

The increasing reliance on AI-driven decision-making raises concerns about algocracy, where governance becomes dictated by algorithms rather than human oversight. As AI automates critical processes, it risks diminishing human agency, creating a society where decision-making power becomes concentrated among those who control AI systems. This phenomenon, known as human enfeeblement, has sparked debates about the need for regulatory interventions to ensure AI serves humanity rather than marginalizing it.

AI Status Debate

A major unresolved issue is whether AI should be classified as an ethical-impact agent, an implicit ethical agent, an explicit ethical agent, or a full ethical agent. Some scholars argue that AI should remain a tool with human accountability, while others explore the possibility of granting AI legal or moral status. The debate intensified when Saudi Arabia granted citizenship to the humanoid robot Sophia in 2017, raising questions about AI’s rights, responsibilities, and legal standing.

Superintelligence

The ethical risks of AI surpassing human intelligence remain a topic of heated debate. As industry leaders explore governance frameworks for superintelligence, balancing innovation with existential safety is becoming a priority.


Future Directions: Building Ethical AI Frameworks

To address these challenges, researchers propose innovative solutions like Large Ethics Models (LEMs), inspired by large language models. LEMs could create adaptive, culturally sensitive frameworks for ethical decision-making. Additionally, standardizing AI identification through registries and labeling systems could enhance accountability and transparency.

Large Ethics Models (LEMs)

Just as large language models (LLMs) have transformed AI’s ability to understand and generate text, Large Ethics Models (LEMs) have been proposed as a means to systematize ethical decision-making in AI. LEMs could integrate diverse ethical principles, allowing AI to make decisions aligned with human values across different cultures and industries. This emerging area requires further exploration to ensure its feasibility and effectiveness.
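Because LEMs remain a proposal, any code can only be speculative. The sketch below imagines one possible interface: a model that scores a candidate action against several weighted ethical principles. All names here (EthicsModel, the principle functions, the weights) are invented for illustration and do not come from the cited literature.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical interface: a "principle" maps an action description to a
# score in [0, 1]; the model aggregates principles with adjustable weights.
Principle = Callable[[str], float]

@dataclass
class EthicsModel:
    principles: Dict[str, Principle]
    weights: Dict[str, float]

    def evaluate(self, action: str) -> float:
        # Weighted average of principle scores for a candidate action.
        total = sum(self.weights.values())
        return sum(self.weights[name] * fn(action)
                   for name, fn in self.principles.items()) / total

# Toy principles standing in for learned, culturally adaptive judgments.
model = EthicsModel(
    principles={
        "fairness": lambda a: 0.2 if "exclude" in a else 0.9,
        "transparency": lambda a: 0.9 if "disclose" in a else 0.4,
    },
    weights={"fairness": 2.0, "transparency": 1.0},
)
print(model.evaluate("disclose how user data is processed"))
```

In a real LEM, the principle functions would be learned models rather than hand-written rules, and the weights could vary by culture or industry, which is precisely the adaptivity the proposal envisions.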

AI Identification

Unlike traditional software and hardware, AI lacks a standardized identification system to track and categorize its usage. Researchers suggest introducing AI IDs, similar to serial numbers, which could improve transparency by allowing consumers and regulators to verify an AI system’s origins, capabilities, and ethical safeguards. A central registry and voluntary rating system could help users understand the risks associated with various AI applications.
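As a rough illustration of how such a scheme might work, here is a hypothetical sketch of an AI ID registry: each system receives a unique identifier at registration, and consumers or regulators can look up its declared origin, capabilities, and risk rating. The fields and the rating scale are invented for this example, not part of any proposed standard.

```python
from dataclasses import dataclass, field
from typing import Dict, List
import uuid

@dataclass
class AIRecord:
    name: str
    developer: str
    capabilities: List[str]
    risk_rating: str  # e.g., "low" / "medium" / "high" (illustrative scale)
    ai_id: str = field(default_factory=lambda: uuid.uuid4().hex)

class AIRegistry:
    """Central lookup of registered AI systems, keyed by AI ID."""

    def __init__(self) -> None:
        self._records: Dict[str, AIRecord] = {}

    def register(self, record: AIRecord) -> str:
        self._records[record.ai_id] = record
        return record.ai_id

    def verify(self, ai_id: str) -> AIRecord:
        # Consumers or regulators look up a system's declared properties.
        return self._records[ai_id]

registry = AIRegistry()
ai_id = registry.register(
    AIRecord("chat-assistant", "ExampleCorp", ["text generation"], "medium")
)
print(registry.verify(ai_id))
```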

Interdisciplinary Collaboration

Interdisciplinary collaboration among fields such as psychology, sociology, and law is vital for tackling the ethical implications of AI technologies. By integrating these diverse perspectives, researchers can ensure AI serves humanity responsibly.

Conclusion

AI ethics is no longer just a theoretical exercise—it’s the compass guiding one of humanity’s most transformative journeys. From its early days of exploring machine morality to today’s push for human-centric design, the field has evolved to tackle some of the most pressing questions of our time. But the road ahead is just as critical. How do we ensure AI systems are transparent, fair, and inclusive while navigating the complexities of superintelligence?

The answer lies in bold innovation and collaboration, weaving together insights from psychology, sociology, law, and beyond. By grounding AI in values that truly reflect humanity’s best, we can create technologies that don’t just revolutionize industries but uplift societies. The challenge is monumental, but so is the opportunity to build a future where AI and humanity thrive together.

References

Gao, Di Kevin, Andrew Haverly, Sudip Mittal, Jiming Wu, and Jingdao Chen. “AI Ethics.” International Journal of Business Analytics 11, no. 1 (February 20, 2024): 1–19. https://doi.org/10.4018/ijban.338367.

Sparks, Jacob, and Ava Thomas Wright. “Models of Rational Agency in Human-Centered AI: The Realist and Constructivist Alternatives.” AI and Ethics 5 (2025). https://philpapers.org/rec/SPAMOR.

Kazim, Emre, and Adriano Soares Koshiyama. “A High-level Overview of AI Ethics.” Patterns 2, no. 9 (September 1, 2021): 100314. https://doi.org/10.1016/j.patter.2021.100314.


About the Writer

Yeabsera


I'm an electrical and computer engineer with a big love for AI. As part of an AI ethics team at an international software company, I’m working hard to close the digital literacy gap globally and make sure tech is fair for all.

About the Editor

Emrakeb


I wish to craft more articles for Mindplex; my schedule's pretty packed, though.
