Global Journal of Arts, Humanities and Social Sciences (GJAHSS)


Domination by Agentic AI: Lament from the Future (Published)

We humans have long been separated from ourselves (the inner alignment problem), and from meaningful, high-quality relations with others and our natural surroundings (the outer alignment problem). A subset of superintelligence technocrats and their wealthy investors have been driving even deeper wedges into these pre-existing divides. Their ambition has gone unrestrained, as potentially life-threatening decisions made by the few affect all the rest of humanity. Man-in-power finds it difficult to control himself, so it is no wonder he is not always in control of his risky technological creations. Man is compulsively "doing," yet at times he knows not exactly "what" he is actually doing. It is argued that direct involvement from scholars in the humanities and humanistic social sciences in confronting the agentic superintelligence alignment problem would have been prudent and wise, especially since suitable corporate, national, and international guardrails are lacking. It is also argued that we humans can no longer evade consciously evolving our own higher human nature and humane potentials.

Keywords: Pandora’s Box, agentic AI, hubris, humanity, superintelligence, technocracy
