The good life has technical, ethical, and philosophical connotations. Computer technology in general, and artificial intelligence in particular, makes a technological contribution to that good life, but it also has a growing ethical and philosophical impact. Placed in a philosophical context, the good life is largely determined by the degree to which people have control over the unpredictable. So say philosophers such as Plato and, more recently, Martha Nussbaum.
This relates to the distinction made in classical Greek philosophy between technē and tuchē. Technē is what we can foresee and oversee, what we can control and influence. Tuchē is the unpredictable, which leaves us humans at the mercy of good or bad luck. Philosophers such as Plato and Nussbaum ask: how strong and complete can I make my technē in order to control tuchē?
The development of computer technology initially offered considerable promise for strengthening that technē. Increasingly, however, the technology appears powerful enough to become autonomous: through algorithms and the like, it generates products that no longer stem directly from the intentions with which humans designed it.
AI thus seems capable of developing a tuchē of its own, moving us further and further away from controlling tuchē through technē. This also has consequences for the ethical side of the good life: who can take responsibility for the resulting products and outcomes of AI, and for the decision-making based on them? This contribution examines the possibility of using human communicative action to verify validity, in order to support making the right choice and taking the right decision.
Habermas’s validity claims may enable us to verify AI, the technē as well as the tuchē, through rational communication. Argumentative validity is achieved on the basis of four so-called validity claims. The assumption is that this validity then allows the AI product in question to be placed in an ethical-communicative context, through which technē and tuchē can be bridged and potentially connected, thereby supporting decision-making in and for the good life.
One of the results of this contribution is an AI-generated checklist based on Habermas’s validity claims. The question remains whether this result can be adequately tested against itself and assessed for its own validity claims, both by humans and by AI. That is a subject for further research.
Citation: Koeleman, M. (2025). AI-Supported Technē and Tuchē for Living a Good Life, a Reality Check by Habermas’ Validity Claims. Int J Math Expl & Comp Edu. 2(3):1-8.
DOI: https://doi.org/10.47485/3069-9703.1025