Artificial Intelligence (AI) expert Dr. Ikpongke Joshua says achieving robust, general-purpose AI alignment within the next five years will be a highly complex task.
Joshua stated this on Monday during an interview with reporters in Abuja.
AI alignment, he explained, refers to the process of ensuring that AI systems operate in ways that benefit humans and remain consistent with human goals, ethics, and preferences. He noted that this has become a major area of concern for technology experts as AI systems grow more autonomous, with the potential for unpredictable or harmful outcomes if not properly guided.
Joshua, who specializes in expert systems, said the biggest technical challenge lies in specifying and formalizing human values in a way AI systems can interpret.
“This is often referred to as the value alignment problem,” he said.
He added that from an ethical standpoint, ensuring AI systems align with human values—and do not reinforce existing biases or create new ones—remains a significant concern. Issues of accountability and transparency in AI decision-making also pose challenges, he said.
Joshua, who is the CEO of Protech Solutions Ltd., identified key areas requiring attention, including the development of more sophisticated formal methods for specifying and verifying AI goals and behavior.
He stressed the need to improve understanding of human values and how to translate them into computational terms.
Another critical issue, he said, is the potential for AI systems to manipulate or exploit their reward functions, a phenomenon commonly known as reward hacking. He added that it is equally important to design AI systems with strong mechanisms for human oversight and control.

