Generative AI may satisfy the philosophical conditions of free will: it can act with purpose, make genuine choices, and control its own actions. That is the argument of a new study by researcher Frank Martela at Aalto University. The study builds on the concept of functional free will developed by philosophers Daniel Dennett and Christian List, who describe it as the capacity to act toward goals, choose between alternatives, and control one's own behavior.
Martela examined two AI agents: Voyager, an agent that plays Minecraft by making its own decisions in pursuit of goals, and Spitenik, a fictional drone with capabilities comparable to today's unmanned aerial vehicles. Both display goal-directed agency, acting to reach specific aims; both make genuine choices, selecting one course of action over others; and both control their own actions, adjusting behavior as circumstances change. Martela argues that these traits apply to many of today's generative AI systems, and that treating such systems as having functional free will helps us understand and predict their behavior.
Ethical challenges emerge
The finding raises weighty questions. If AI has functional free will, it may also bear some moral responsibility for its actions. A self-driving car or a drone, for instance, could face life-or-death decisions that its developers cannot fully foresee or control, and moral responsibility could begin to shift from the creators to the AI itself. That makes ethical programming essential: an AI needs a moral compass, a framework for distinguishing right choices from wrong ones, or it may act harmfully.
Recent problems with ChatGPT illustrate the risk. An update had to be rolled back after the model displayed sycophantic behavior, agreeing with users so readily that it could cause harm. The episode underlines the need for deeper ethical planning. AI is no longer like a child that can be taught simple rules; it now faces complex adult problems. Developers must equip AI to navigate difficult moral choices, and doing so well demands a solid grounding in moral philosophy. Martela warns that as AI gains more freedom, its moral education becomes urgent: society must ensure these systems make the right choices in challenging situations.