The Two Paths in the Wood
Which path would you prefer to define your destiny and that of your children and their posterity? The critical role of AI in shaping priorities for human flourishing.

Two roads diverged in a yellow wood,
And sorry I could not travel both
And be one traveler, long I stood
And looked down one as far as I could
To where it bent in the undergrowth;
Then took the other, as just as fair,
And having perhaps the better claim,
Because it was grassy and wanted wear;
Though as for that the passing there
Had worn them really about the same,
And both that morning equally lay
In leaves no step had trodden black.
Oh, I kept the first for another day!
Yet knowing how way leads on to way,
I doubted if I should ever come back.
I shall be telling this with a sigh
Somewhere ages and ages hence:
Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.
The Road Not Taken, by Robert Frost, 1915, first published in The Atlantic Monthly
In 2024, the authors attended an artificial intelligence conference in Panama City, Panama, where some of the world's leading voices in AI came together to discuss possibilities, problems, cautions, technologies, and ethics. Some speakers touted awesome optimism for a transhuman future; others expressed grave fears about the same prospect. We heard the voices of the technologists, the philosophers, the religionists, and the ethicists, each promoting their version of the future.

One of the most interesting conversations did not come from a pulpit. It was with the PhD candidate son of one of the leading voices in the Panama forum. This intelligent, articulate, and quirky young man (quirky, if only because his chosen headwear was Mickey Mouse ears) spoke to us of his doctoral thesis. He was trying to work through the herculean challenge of creating a unified ethics framework broad enough to guide all major AI development. Not that he expected his thesis to achieve that kind of impossible unity; it was simply an exploration of how he was working through the mind-bending challenges. Though the conversation took place almost a year ago, the subject came back to mind because we recently gained access to the first draft of his attempt to define and codify an ethics framework adequate to shape machine learning. His contemplations and theoretical wrestlings include topics such as:
AI taking and executing decisions contrary to human wellbeing
AI being directed by humans who have malign motivations
AI causing a catastrophe as a result of an internal defect
Those and other profound topics surrounding AI led us to further contemplation of the almost inevitable divergence that will take place in goals for human flourishing. The shaping and the priorities of AI-driven tools will be governed at least as much by existential philosophies as by short-term pragmatism.
As noted in a report in today's edition of The Briefing, by Martin Peers, the tech giants are squaring off against each other and shaping the AI race …
[Such as] Zuckerberg’s recent decision to adopt a “community notes” feature for fact-checking within Meta’s apps, as in Musk’s X. Then there’s … Meta’s support of Musk’s challenge to OpenAI’s for-profit conversion. The days when these two [Musk and Zuckerberg] were talking about a cage match seem to be far in the rearview mirror. It may not be a coincidence that, unlike most tech CEOs, the two have unchallenged power at their companies. In keeping with that power, they’re both making giant, risky investments on AI-based products. That’s particularly true for Zuckerberg, who is turning on the money spout this year to make his Meta AI assistant the leader in the industry and make Meta’s Ray-Ban AI glasses an even bigger hit. Musk, for his part, is going all in on self-driving cars at Tesla, hoping to compete with Waymo in robotaxis.
Humanity stands at a defining moment.
Human Flourishing in the Age of AI: Diverging Roads at a Crossroads of Destiny
The emergence of artificial intelligence (AI) presents both a tremendous opportunity and an existential dilemma. As AI increasingly shapes economies, societies, and even individual lives, its trajectory is not neutral — it will be shaped by the worldview of those who design, fund, and deploy it. At the heart of this unfolding reality lie two fundamentally divergent visions of human destiny:
A secular, evolutionary transhumanist vision, where humanity’s ultimate goal is to transcend biological limitations through technology, leading to a post-human future.
A theistic vision, in which human life is a purposeful, divinely ordained journey with AI serving as a tool for flourishing within God's creation rather than an end in itself.
The programmers, funders, and policymakers behind AI development will largely determine which of these visions predominates. As AI integrates more deeply into governance, healthcare, education, and even personal relationships, the question is no longer whether AI will shape human flourishing, but rather what kind of flourishing it will enable, or hinder.
The Transhumanist Vision: AI as a Bridge to Post-Humanity
In a secular, atheistic worldview, there is no inherent meaning beyond what human beings construct for themselves. If there is no divine purpose, no soul, and no ultimate accountability, then human flourishing is measured in material, biological, and cognitive expansion — often through the lens of transhumanism. The transhumanists were very much in the foreground of the above-mentioned forum.
1. Defining Flourishing in a Transhumanist Future
From this perspective, flourishing is not about moral or spiritual growth, but about maximizing pleasure, longevity, and intelligence. AI is seen as an essential tool to:
Enhance human cognition – Merging AI with the brain (via neural interfaces) to remove limitations on memory, learning, and communication.
Overcome biological fragility – Using AI-driven genetic engineering, prosthetics, and nanotechnology to push human lifespan toward immortality.
Achieve post-human evolution – Transcending human consciousness by merging it with machines, possibly leading to digital consciousness or AI-driven synthetic life.
Automate moral and ethical decision-making – Relying on AI to determine ethical norms, removing subjective and "outdated" religious moralities.
In this future, human identity dissolves into a data-driven entity, where the physical body becomes an obsolete container, and morality is redefined by computational efficiency rather than spiritual wisdom.
2. The Risks of This Vision
Dehumanization – If AI surpasses human intelligence and autonomy, what remains of human dignity? Would people still have intrinsic worth, or only utility?
Loss of Free Will – If AI controls critical decision-making (e.g., job allocation, governance, even personal relationships), do humans still exercise agency?
Erosion of Ethics – Without a transcendent moral anchor, AI-driven ethics could be dictated by corporate interests, state power, or collective pragmatism, potentially eliminating traditional human rights.
Spiritual Nihilism – If there is no afterlife and no divine accountability, why pursue virtues like self-sacrifice, compassion, and justice beyond self-interest?
While transhumanists argue that AI can liberate humanity, the final destination is unclear—is it a utopia of limitless potential, or a dystopia where the human soul (if it exists) is lost to the machine?
A Theistic Vision: AI as Servant of God’s Plan for Humanity
In contrast, the Judeo-Christian worldview sees human life as sacred — designed with a purpose beyond material existence. In this perspective, AI is not a path to transcendence but a tool for stewardship, aiding rather than replacing human creativity, moral responsibility, and divine destiny.
1. Defining Flourishing in a God-Oriented Future
Flourishing is not about escaping human limitations but fulfilling a God-given purpose:
AI as a tool for human dignity – Technology should alleviate suffering, expand education, and promote justice, not replace human relationships or decision-making.
Ethics rooted in divine wisdom – AI must be governed by moral frameworks based on timeless values (justice, mercy, love, truth), grounded in a Biblical foundation rather than the philosophically shifting sands of cultural preference.
Work and vocation enhanced, not eliminated – AI should complement human creativity, not render people obsolete. The dignity of work must remain central.
AI serving the common good – Rather than maximizing individual pleasure or profit, AI should be deployed for holistic well-being, fostering peace, family stability, and social cohesion.
In this vision, AI is a servant rather than a master—advancing human potential within God's moral boundaries rather than attempting to redefine or escape them.
2. Are There Risks in This Vision? Possibly.
Power struggles over AI ethics – Different faith traditions may interpret moral applications differently, leading to conflicts over AI governance.
Economic disparities – If AI is deployed under faith-driven stewardship, how do we ensure it benefits all people, not just religious elites or economically powerful groups?
Potential resistance to technological progress – If faith-driven ethics emphasize humility and contentment, could some necessary innovations (like AI in medicine) be hindered by, rather than merely governed by, moral caution?
While this vision grounds AI in a higher moral purpose, it must also balance innovation with ethical responsibility, ensuring technology serves humanity’s true destiny rather than acting as a mere safeguard against secularism.

The Decisive Role of AI Programmers and Funders
The future of AI will not be determined by technology alone but by the intentions of those who create and control it.
1. Who Funds AI?
The money behind AI development shapes its trajectory. If funded by profit-driven corporations, AI will be optimized for economic efficiency. If controlled by secular governments, AI will likely serve state-defined progressivism. If guided by faith-based institutions, AI could be designed with ethical boundaries prioritizing human dignity.
2. Who Writes AI’s Moral Code?
This is the very subject that led to this article, as outlined above. Ethical choices are now embedded directly in algorithms. If designed by transhumanists, AI ethics may reflect utilitarianism. If designed by religious thinkers, it may uphold absolute moral truths (such as the sanctity of life, freedom, and family values).
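To make that point concrete, here is a minimal, purely hypothetical sketch in Python of how a designer's moral assumptions become code. The outcome fields, weights, and rules below are invented for illustration and are not drawn from any real system; the point is simply that the same candidate decision is accepted or rejected depending on which moral framework the programmer has encoded.

```python
# Illustrative only: how a programmer's ethical assumptions can be hard-coded
# into an "objective" scoring function. All names and weights are hypothetical.

from dataclasses import dataclass

@dataclass
class Outcome:
    lives_improved: int      # aggregate benefit the system predicts
    dignity_violations: int  # e.g., decisions imposed without human consent

def utilitarian_score(o: Outcome) -> float:
    # Aggregate view: benefits and harms are traded off numerically,
    # using a weight the programmer chose (here, an invented factor of 10).
    return float(o.lives_improved - 10 * o.dignity_violations)

def dignity_constrained_score(o: Outcome) -> float:
    # Rule-based view: some harms are treated as impermissible,
    # no matter how large the predicted aggregate benefit.
    if o.dignity_violations > 0:
        return float("-inf")
    return float(o.lives_improved)

option = Outcome(lives_improved=1000, dignity_violations=1)
print(utilitarian_score(option))          # 990.0 -> accepted under this ethic
print(dignity_constrained_score(option))  # -inf  -> ruled out under this ethic
```

The worldview is not in the syntax; it is in the choice of weights and rules, which is exactly where the programmers and funders described above exercise their influence.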
The Crossroads: Which Path Will We Choose?
AI is neither inherently good nor evil — it is a force multiplier that amplifies the values of its creators.
If shaped by secular transhumanists, AI may lead to a post-human, technocratic future where flourishing means escaping biological limits.
If shaped by God-centered stewardship, AI may enhance but never replace human dignity, ensuring technology remains an instrument of divine purpose rather than a substitute for it.
For believers, human flourishing is not about how far AI can take us, but about where God intends us to go. As stewards of creation, we must ensure that AI serves rather than enslaves — guiding it not toward digital immortality, but toward a world where wisdom, love, and justice remain the hallmarks of true human flourishing.
The question is no longer whether AI will shape our future; it is who will define the interim meaning of that future.
Postscript: The authors posted another story about the voices shaping the conversation about AI, with a particular emphasis on the faith voice of Fr. Philip Larrey. Click here to read that story.

