Abstract
This report traces humanity’s evolving relationship with reality, knowledge, and reason from antiquity to the digital age, culminating in the rise of artificial intelligence (AI). Across epochs, societies have grappled with the tension between faith, reason, and technological innovation, each era refining—or contesting—the role of human cognition in shaping understanding. The classical world elevated reason and idealized forms; medieval theology subordinated inquiry to divine revelation; the Renaissance and Enlightenment recentered human agency and empirical observation. Modernity’s scientific revolutions destabilized classical physics and philosophy, revealing how observation and perspective shape what can be known. Today, AI challenges the primacy of human reason, offering new tools to perceive patterns beyond traditional cognitive limits while raising existential questions about wisdom, agency, and the nature of knowledge itself[1].
————————
Colin Henderson
Bankwatch Consulting
March 4, 2025
A consideration of how human thought has evolved over the centuries, and of the difficulty this evolution presents in arriving at a programmable road map from Generative AI toward AGI (Artificial General Intelligence).
Table of Contents:
Abstract
(1) The Classical Foundations of Reason and Mystery
Philosophical Idealism and Empirical Inquiry
The Limits of Pagan Cosmology
Medieval Theology and the Subordination of Inquiry
Scholasticism and Divine Mediation
The Fragmentation of Authority
Renaissance Humanism and Enlightenment Rationalism
The Rebirth of Classical Thought
Kant’s Epistemological Revolution
Modernity’s Disruptions: From Relativity to Digital Fragmentation
Quantum Mechanics and Epistemic Uncertainty
The Digital Metamorphosis
Conclusion: AI and the Epochal Shift
Sources
(2) Historical Development of AI
Early Foundations (1940s-1950s)
Early Optimism and Symbolic AI (1950s-1970s)
AI Winters and Resurgence (1970s-1990s)
Core Themes and Approaches in AI
Major AI Paradigms
Key Research Areas
Ethical and Societal Implications
Recent Developments and Current Trends
Ongoing Debates and Future Outlook
Sources
__________________________________________________________________________
(1) The Classical Foundations of Reason and Mystery
Philosophical Idealism and Empirical Inquiry
Ancient Greek and Roman thinkers established reason as humanity’s defining tool for comprehending reality. Plato’s allegory of the cave framed philosophical inquiry as a journey from shadowy perception to enlightened truth, while Aristotle systematized knowledge through logic and categorization[1]. Concurrently, pre-Socratic philosophers like Thales pioneered proto-scientific methods, seeking natural explanations for phenomena rather than mythological ones. Yet mysteries persisted—seasonal cycles, celestial movements—leading to syncretic belief systems that blended reason with ritual. The Eleusinian Mysteries, for instance, encoded agricultural knowledge within Demeter and Persephone’s mythos, illustrating how empirical observation coexisted with spiritual allegory[1].
The Limits of Pagan Cosmology
Edward Gibbon’s analysis of classical paganism highlights its pluralistic approach to the divine, where local deities personified natural forces. This framework allowed pragmatic coexistence of reason and faith: sailors studied tides yet prayed to Poseidon, farmers tracked seasons while venerating Demeter. The Roman synthesis of Greek philosophy and civic religion created a “thin texture” of belief—adaptable but lacking unified metaphysical foundations, ultimately vulnerable to monotheism’s rise[1].
————————
Medieval Theology and the Subordination of Inquiry
Scholasticism and Divine Mediation
The Middle Ages subordinated reason to theology, with the Church monopolizing the interpretation of knowledge. Aquinas’s scholasticism sought to harmonize Aristotelian logic with Christian doctrine, but inquiry remained bounded by scriptural authority. The tension between empirical observation and dogmatic tradition persisted into the early modern era, when Galileo’s defense of heliocentrism over geocentrism led to his persecution[1]. The medieval era prioritized salvation over scientific discovery, framing reality as a transient reflection of divine truth accessible only through sacramental mediation.
The Fragmentation of Authority
The Reformation and the printing press shattered medieval unity, enabling individual interpretation of scripture and the dissemination of secular knowledge. Luther’s 95 Theses (1517) and Gutenberg’s press democratized access to ideas, undermining ecclesiastical control. This shift laid the groundwork for Enlightenment individualism but also triggered wars of religion—a paradox of progress and conflict[1].
————————
Renaissance Humanism and Enlightenment Rationalism
The Rebirth of Classical Thought
Renaissance humanists like da Vinci and Machiavelli revived classical texts, blending artistic innovation with pragmatic statecraft. Humanism celebrated virtù—the capacity for self-actualization through reason and creativity—while the voyages of exploration (e.g., Columbus, following the earlier travels of Marco Polo) exposed Europe to alien cosmologies, challenging Eurocentric assumptions[1].
Kant’s Epistemological Revolution
Enlightenment rationalism reached its apex with Kant’s Critique of Pure Reason (1781), which posited that human perception filters reality through innate mental structures. The “thing-in-itself” (noumenon) remained unknowable, yet Kant affirmed reason’s sovereignty as the only available tool. Diderot’s Encyclopédie embodied this ethos, attempting to catalog all human knowledge—a project mirroring AI’s modern data aggregation[1].
————————
Modernity’s Disruptions: From Relativity to Digital Fragmentation
Quantum Mechanics and Epistemic Uncertainty
20th-century physics dismantled Newtonian certitude. Heisenberg’s uncertainty principle and Bohr’s complementarity revealed observation’s distorting effects, echoing Kant’s limits on pure reason. Einstein’s relativity unified space and time but made measurements of both contingent on the observer’s frame of reference—a philosophical crisis Wittgenstein addressed by abandoning essentialism for “family resemblances” among phenomena[1].
The Digital Metamorphosis
Digitization has compressed historical processes, creating a cyberspace where AI intermediates human cognition. Search engines supplant memory; social media algorithms shape discourse; machine learning identifies patterns imperceptible to humans. Yet this intermediation erodes contextual wisdom, reducing knowledge to decontextualized information and convictions to crowd-sourced opinions[1].
————————
Conclusion: AI and the Epochal Shift
AI represents both continuity and rupture. Like the printing press, it democratizes access to knowledge while destabilizing traditional authority. Yet unlike prior tools, AI operates autonomously, generating insights unmoored from human intuition. The Enlightenment’s “age of reason” presumed human cognition as reality’s sole interpreter, but AI introduces a rival epistemology—one that may perceive Kant’s noumenal realm through data patterns rather than philosophical deduction. This necessitates redefining wisdom in an era where connection replaces contemplation, and algorithms mediate truth. As humanity delegates reason to machines, the challenge lies in preserving the moral and conceptual frameworks that transform information into meaningful action[1].
Sources
[1] Age-of-AI-Chapter-2.docx https://ppl-ai-file-upload.s3.amazonaws.com/web/direct-files/7715488/05932d54-185b-4596-8ce9-031f0ecc4490/Age-of-AI-Chapter-2.docx
—
(2) Historical Development of AI
Early Foundations (1940s-1950s)
• Mathematical logic and computational theory laid the groundwork for AI
• Turing’s 1950 paper proposed the Turing Test for machine intelligence
• The term “Artificial Intelligence” was coined at the 1956 Dartmouth Workshop
Early Optimism and Symbolic AI (1950s-1970s)
• Development of early AI programs like Logic Theorist and General Problem Solver
• Focus on symbolic reasoning and problem-solving
• Bold predictions and growing popularity of AI research
AI Winters and Resurgence (1970s-1990s)
• Periods of reduced funding and interest (“AI winters”)
• Shift to expert systems and machine learning approaches
• Renewed interest with advances in neural networks and robotics
Core Themes and Approaches in AI
Major AI Paradigms
• Symbolic AI: Logic-based reasoning and knowledge representation
• Connectionism: Neural networks and deep learning (see the sketch after this list)
• Embodied AI: Robotics and physical interaction with the environment
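The following minimal sketch (in Python; a toy illustration with hypothetical data and parameters, not drawn from the cited histories) contrasts the first two paradigms on a single task, the logical AND function. Symbolic AI states the rule by hand; a connectionist perceptron learns the same behavior from labeled examples.

    # Symbolic AI: the rule is written explicitly by a human.
    def symbolic_and(x1, x2):
        return 1 if x1 == 1 and x2 == 1 else 0

    # Connectionism: the same behavior is learned from examples.
    def train_perceptron(data, epochs=20, lr=0.1):
        w1 = w2 = b = 0.0
        for _ in range(epochs):
            for (x1, x2), target in data:
                pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
                err = target - pred          # 0 when the prediction is correct
                w1 += lr * err * x1          # nudge the weights toward the target
                w2 += lr * err * x2
                b += lr * err
        return w1, w2, b

    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w1, w2, b = train_perceptron(data)
    learned = [1 if w1 * x1 + w2 * x2 + b > 0 else 0 for (x1, x2), _ in data]
    print(learned)  # [0, 0, 0, 1], matching symbolic_and on every input

The handwritten rule is transparent but fixed; the perceptron arrives at equivalent behavior by adjusting weights against examples, the core move that deep learning later scaled up.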
Key Research Areas
• Natural language processing
• Computer vision
• Machine learning
• Robotics and autonomous systems
Ethical and Societal Implications
• Potential threats to human autonomy and capabilities
• Questions of cognitive justice and epistemic impacts
• Changing dynamics of human-machine interaction
Recent Developments and Current Trends
• Rapid advancements in deep learning and neural networks
• Integration of AI in everyday consumer products
• Growing focus on social robotics and emotional AI
Ongoing Debates and Future Outlook
• Continued relevance of early AI concepts like the Turing Test
• Uncertainty about AI’s future trajectory and societal impact
• Evolving relationship between human and artificial intelligence
Sources
[1] History of artificial intelligence – Wikipedia https://en.wikipedia.org/wiki/History_of_artificial_intelligence
[2] Artificial Intelligence and the Future of Humans | Pew Research Center https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/
[3] Themes | Histories of Artificial Intelligence: A Genealogy of Power https://www.ai.hps.cam.ac.uk/about-0/themes
[4] The History of AI: A Timeline of Artificial Intelligence | Coursera https://www.coursera.org/articles/history-of-ai
[5] [PDF] History, motivations and core themes of AI https://digitalcommons.memphis.edu/cgi/viewcontent.cgi?article=1029&context=ccrg_papers
[6] [PDF] The History of Artificial Intelligence – University of Washington https://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf
[7] [PDF] AI Watch Historical Evolution of Artificial Intelligence https://publications.jrc.ec.europa.eu/repository/bitstream/JRC120469/jrc120469_historical_evolution_of_ai-v1.1.pdf
[8] The History of Artificial Intelligence – IBM https://www.ibm.com/think/topics/history-of-artificial-intelligence
[9] What is the history of artificial intelligence (AI)? – Tableau https://www.tableau.com/data-insights/ai/history
[10] The Evolution and Future of Artificial Intelligence: A Student’s Guide https://www.calmu.edu/news/future-of-artificial-intelligence
[11] History of AI: Timeline and the Future | Maryville Online https://online.maryville.edu/blog/history-of-ai/
[12] The impact of artificial intelligence on human society and bioethics https://pmc.ncbi.nlm.nih.gov/articles/PMC7605294/
[13] The History of Artificial Intelligence: Complete AI Timeline – TechTarget https://www.techtarget.com/searchenterpriseai/tip/The-history-of-artificial-intelligence-Complete-AI-timeline
[14] The brief history of artificial intelligence: the world has changed fast https://ourworldindata.org/brief-history-of-ai
[15] 5 key themes in Americans’ views about AI and human enhancement https://www.pewresearch.org/short-reads/2022/03/17/5-key-themes-in-americans-views-about-ai-and-human-enhancement/
[16] Appendix I: A Short History of AI https://ai100.stanford.edu/2016-report/appendix-i-short-history-ai
[17] 3. Improvements ahead: How humans and AI might evolve together … https://www.pewresearch.org/internet/2018/12/10/improvements-ahead-how-humans-and-ai-might-evolve-together-in-the-next-decade/
Excerpt: The Age of AI: And Our Human Future, by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher.
——
Chapter 2: How We Got Here
Technology and Human Thought
Throughout history, human beings have struggled to fully comprehend
aspects of our experience and lived environments. Every society has, in its
own way, inquired into the nature of reality: How can it be understood?
Predicted? Shaped? Moderated? As it has wrestled with these questions,
every society has reached its own particular set of accommodations with the
world. At the center of these accommodations has been a concept of the
human mind’s relationship to reality — its ability to know its surroundings,
to be fulfilled by knowledge, and, at the same time, to be inherently limited
by it. Even if an era or a culture held human reason to be limited — unable
to perceive or understand the vast extent of the universe or the esoteric
dimensions of reality — the individual reasoning human has been afforded
pride of place as the earthly being most capable of understanding and
shaping the world. Humans have responded to, and reconciled with, the
environment by identifying phenomena we can study and eventually
explain — either scientifically, theologically, or both. With the advent of
AI, humanity is creating a powerful new player in this quest. To understand
how significant this evolution is, we undertake a brief review of the journey
by which human reason has, through successive historical epochs, acquired
its esteemed status.
Each historical epoch has been characterized by a set of interlocking
explanations of reality and social, political, and economic arrangements
based on them. The classical world, Middle Ages, Renaissance, and modern
world all cultivated their concepts of the individual and society, theorizing
about where and how each fits into the enduring order of things. When
prevailing understandings no longer sufficed to explain perceptions of
reality — events experienced, discoveries made, other cultures
encountered — revolutions in thought (and sometimes in politics) occurred,
and a new epoch was born. The emerging AI age is increasingly posing
epochal challenges to today’s concept of reality.
In the West, the central esteem of reason originated in ancient Greece
and Rome. These societies elevated the quest for knowledge into a defining
aspect of both individual fulfillment and collective good. In Plato’s
Republic, the famed allegory of the cave spoke to the centrality of the quest.
Styled as a dialogue between Socrates and Glaucon, the allegory likens
humanity to a group of prisoners chained to the wall of a cave. Seeing
shadows cast on the wall of the cave from the sunlit mouth, the prisoners
believe them to be reality. The philosopher, Socrates held, is akin to the
prisoner who breaks free, ascends to level ground, and perceives reality in
the full light of day. Similarly, the Platonic quest to glimpse the true form of
things supposed the existence of an objective — indeed, ideal — reality
toward which humanity has the capacity to journey even if never quite reaching it.
The conviction that what we see reflects reality — and that we can fully
comprehend at least aspects of this reality using discipline and
reason — inspired the Greek philosophers and their heirs to great
achievements. Pythagoras and his disciples explored the connection
between mathematics and the inner harmonies of nature, elevating this
pursuit to an esoteric spiritual doctrine. Thales of Miletus established a
method of inquiry comparable to the modern scientific method, ultimately
inspiring early modern scientific pioneers. Aristotle’s sweeping
classification of knowledge, Ptolemy’s pioneering geography, and
Lucretius’s On the Nature of Things spoke to an essential confidence in the
human mind’s capacity to discover and understand at least substantial
aspects of the world. Such works and the style of logic they employed
became educational vehicles, enabling the learned to develop inventions,
augment defenses, and design and construct great cities that, in turn,
became centers of learning, trade, and outward exploration.
Still, the classical world perceived seemingly inexplicable phenomena
for which no adequate explanations could be found in reason alone. These
mysterious experiences were ascribed to an array of gods whom only the
devout and initiated could symbolically know, and whose attendant rites
and rituals only the devout and initiated could observe. Chronicling the
achievements of the classical world and the decline of the Roman Empire
through his own Enlightenment lens, the eighteenth-century historian
Edward Gibbon described a world in which pagan deities stood as
explanations for fundamentally mysterious natural phenomena that were
deemed important or threatening:
The thin texture of the Pagan mythology was interwoven with various but not discordant
materials . . . The deities of a thousand groves and a thousand streams possessed, in peace,
their local and respective influence; nor could the Roman who deprecated the wrath of the
Tiber, deride the Egyptian who presented his offering to the beneficent genius of the Nile.
The visible powers of Nature, the planets, and the elements, were the same throughout the
universe. The invisible governors of the moral world were inevitably cast in a similar mould
of fiction and allegory.
Why the seasons changed, why the earth appeared to die and return to
life at regular intervals, was not yet scientifically known. Greek and Roman
cultures recognized the temporal patterns of days and months but had not
arrived at an explanation deducible by experiment or logic alone. Thus the
renowned Eleusinian Mysteries were offered as an alternative, enacting the
drama of the harvest goddess, Demeter, and her daughter, Persephone,
doomed to spend a portion of the year in the cold underworld of Hades.
Participants came to “know” the deeper reality of the seasons — the
region’s agricultural bounty or scarcity and its impact on their
society — through these esoteric rites. Likewise, a trader setting out on a
voyage might acquire a basic concept of the tides and maritime geography
through the accumulated practical knowledge of his community;
nonetheless, he would still seek to propitiate the deities of the sea as well as
of safe outbound and return journeys, whom he believed to control the
mediums and phenomena through which he would be passing.
The rise of monotheistic religions shifted the balance in the mixture of
reason and faith that had long dominated the classical quest to know the
world. While classical philosophers had pondered both the nature of
divinity and the divinity of nature, they had rarely posited a single
underlying figure or motivation that could be definitively named or
worshipped. To the early church, however, these discursive explorations of
causes and mysteries were so many dead ends — or, by the most charitable
or pragmatic assessments, uncanny precursors to the revelation of Christian
wisdom. The hidden reality that the classical world had labored to perceive
was held to be the divine, accessible only partly and indirectly through
worship. This process was mediated by a religious establishment that held a
near monopoly on scholarly inquiry for centuries, guiding individuals
through sacraments toward an understanding of scripture that was both
written and preached in a language few laymen understood.
The promised reward for individuals who followed the “correct” faith
and adhered to this path toward wisdom was admission to an afterlife, a
plane of existence held to be more real and meaningful than observable
reality. In these Middle (or medieval) Ages — the period from the fall of
Rome, in the fifth century, to the Turkish Ottoman Empire’s conquest of
Constantinople, in the fifteenth — humanity, at least in the West, sought to
know God first and the world second. The world was only to be known
through God; theology filtered and ordered individuals’ experiences of the
natural phenomena before them. When early modern thinkers and scientists
such as Galileo began to explore the world directly, altering their
explanations in light of scientific observation, they were chastised and
persecuted for daring to omit theology as an intermediary.
During the medieval epoch, scholasticism became the primary guide for
the enduring quest to comprehend perceived reality, venerating the
relationship between faith, reason, and the church — the latter remaining
the arbiter of legitimacy when it came to beliefs and (at least in theory) the
legitimacy of political leaders. While it was widely believed that
Christendom should be unified, both theologically and politically, reality
belied this aspiration; from the beginning, there was contention between a
variety of sects and political units. Yet despite this fragmentation, Europe’s
worldview remained largely unchanged for centuries. Tremendous progress was
made in describing and depicting the universe: the period produced the
theology of Saint Thomas Aquinas, the poetry of Geoffrey Chaucer, the
painting of Giotto di Bondone, and the exploration of Marco Polo. Notably
less progress was made in explaining it. Every baffling phenomenon, big or
small, was ascribed to the work of the Lord.
In the fifteenth and sixteenth centuries, the Western world underwent
twin revolutions that introduced a new epoch — and, with it, a new concept
of the role of the individual human mind and conscience in navigating
reality. The invention of the printing press made it possible to circulate
materials and ideas directly to large groups of people in languages they
understood rather than in the Latin of the scholarly classes, nullifying
people’s historic reliance on the church to interpret concepts and beliefs for
them. Aided by the technology, the leaders of the Protestant Reformation
declared individuals were capable of — indeed, responsible for — defining
the divine for themselves.
Dividing the Christian world, the Reformation validated the possibility
of individual faith existing independent of church arbitration. From that
point forward, received authority — in religion and, eventually, in other
realms — became subject to the probing and testing of autonomous inquiry.
During this revolutionary era, innovative technology, novel paradigms,
and widespread political and social adaptations reinforced one another.
Once a book could easily be printed and distributed by a single machine and
operator — without the costly and specialized labor of monastic
copyists — new ideas could be spread and amplified faster than they could
be restricted. Centralized authorities — whether the Catholic Church, the
Habsburg-led Holy Roman Empire (the notional successor to Rome’s
unified rule of the European continent), or national and local
governments — were no longer able to stop the proliferation of printing
technology or effectively ban disfavored ideas. Because London,
Amsterdam, and other leading cities declined to proscribe the spread of
printed material, freethinkers who had been harried by their home
governments were able to find refuge and access to advanced publishing
industries in nearby societies. The vision of doctrinal, philosophical, and
political unity gave way to diversity and fragmentation — in many cases
attended by the overthrow of established social classes and violent conflict
between contending factions. An era defined by extraordinary scientific and
intellectual progress was paired with near-constant religious, dynastic,
national, and class-driven disputes that led to ongoing disruption and peril
in individual lives and livelihoods.
As intellectual and political authority fragmented amid doctrinal
ferment, artistic and scientific explorations of remarkable richness were
produced, partly by reviving classical texts, modes of learning, and
argumentation. During this Renaissance, or rebirth, of classical learning,
societies produced art, architecture, and philosophy that simultaneously
sought to celebrate human achievement and inspire it further. Humanism,
the era’s guiding principle, aimed to foster individuals capable of full
participation in civic life through clear thought and expression. These
virtues, humanism posited, were cultivated through the humanities: art,
writing, rhetoric, history, politics, and philosophy. Accordingly,
Renaissance men who mastered these fields — Leonardo da Vinci,
Michelangelo, Raphael — came to be revered. Widely adopted, humanism
cultivated a love for reading and learning — the former facilitating the
latter.
The rediscovery of Greek science and philosophy inspired new inquiries
into the underlying mechanisms of the natural world and the means by
which they could be measured and cataloged. Analogous changes began to
occur in the realm of politics and statecraft. Scholars dared to form systems
of thought based on organizational principles beyond the restoration of
continental Christian unity under the moral aegis of the pope. Italian
diplomat and philosopher Niccolò Machiavelli, himself a classicist, argued
that state interests were distinct from their relationship to Christian
morality, endeavoring to outline rational, if not always attractive, principles
by which they could be pursued.
This exploration of historical knowledge and increasing sense of agency
over the mechanisms of society also inspired an era of geographic
exploration, in which the Western world expanded, encountering new
societies, forms of belief, and types of political organization. The most
advanced societies and learned minds in Europe were suddenly confronted
with a new aspect of reality: societies with different gods, diverging
histories, and, in many cases, their own independently developed forms of
economic achievement and social complexity. For the Western mind,
trained in the conviction of its own centrality, these independently
organized societies posed profound philosophical challenges. Separate
cultures with distinct foundations and no knowledge of Christian scripture
had developed parallel existences, with no apparent knowledge of (or
interest in) European civilization, which the West had assumed was self-
evidently the pinnacle of human achievement. In some cases — such as the
Spanish conquistadores’ encounters with the Aztec Empire in
Mexico — indigenous religious ceremonies as well as political and social
structures appeared comparable to those in Europe.
For the explorers who paused in their conquests long enough to ponder
them, this uncanny correspondence produced haunting questions: Were
diverging cultures and experiences of reality independently valid? Did
Europeans’ minds and souls operate on the same principles as those they
encountered in the Americas, China, and other distant lands? Were these
newly discovered civilizations in effect waiting for the Europeans to
vouchsafe new aspects of reality — divine revelation, scientific
progress — in order to awaken to the true nature of things? Or had they
always been participating in the same human experience, responding to
their own environment and history, and developing their own parallel
accommodations with reality — each with relative strengths and
achievements?
Although most Western explorers and thinkers of the time concluded
that these newly encountered societies had no fundamental knowledge
worth adopting, the experiences began to broaden the aperture of the
Western mind nonetheless. The horizon expanded for civilizations across
the globe, forcing a reckoning with the world’s physical and experiential
breadth and depth. In some Western societies, this process gave rise to
concepts of universal humanity and human rights, notions that were
eventually pioneered by some of these same societies during later periods of
reflection.
The West amassed a repository of knowledge and experience from all
corners of the world. Advances in technology and methodology, including
better optical lenses and more accurate instruments of measurement,
chemical manipulation, and the development of research and observation
standards that came to be known as the scientific method, permitted
scientists to more accurately observe the planets and stars, the behavior and
composition of material substances, and the minutiae of microscopic life.
Scientists were able to make iterative progress based on both personal
observations and those of their peers: when a theory or prediction could be
validated empirically, new facts were revealed that could serve as the
jumping-off point for additional questions. In this way, new discoveries,
patterns, and connections came to light, many of which could be applied to
practical aspects of daily life: keeping time, navigating the ocean,
synthesizing useful compounds.
The sixteenth and seventeenth centuries witnessed such rapid
progress — with astounding discoveries in mathematics, astronomy, and the
natural sciences — that it led to a sort of philosophical disorientation. Given
that church doctrine still officially defined the limits of permissible
intellectual explorations during this period, these advances produced
breakthroughs of considerable daring. Copernicus’s vision of a heliocentric
system, Newton’s laws of motion, van Leeuwenhoek’s cataloging of a
living microscopic world — these and other developments led to the
general sentiment that new layers of reality were being unveiled. The
outcome was incongruence: societies remained united in their monotheism
but were divided by competing interpretations and explorations of reality.
They needed a concept — indeed, a philosophy — to guide their quest to
understand the world and their role in it.
The philosophers of the Enlightenment answered the call, declaring
reason — the power to understand, think, and judge — both the method of
and purpose for interacting with the environment.
“Our soul is made for thinking, that is, for perceiving,” the French
philosopher and polymath Montesquieu wrote, “but such a being must have
curiosity, for just as all things form a chain in which every idea precedes
one idea and follows another, so one cannot want to see the one without
desiring to see the other.” The relationship between humanity’s first
question (the nature of
reality) and second question (its role in reality) became self-reinforcing: if
reason begat consciousness, then the more humans reasoned, the more they
fulfilled their purpose. Perceiving and elaborating on the world was the
most important project in which they were or would ever be engaged. The
age of reason was born.
In a sense, the West had returned to many of the fundamental questions
with which the ancient Greeks had wrestled: What is reality? What are
people seeking to know and experience, and how will they know when they
encounter it? Can humans perceive reality itself as opposed to its
reflections? If so, how? What does it mean to be and to know?
Unencumbered by tradition — or at least believing they were justified in
interpreting it anew — scholars and philosophers once again investigated
these questions. The minds that set out on this journey were willing to walk
a precarious path, risking the apparent certainties of their cultural traditions
and their established conceptions of reality.
In this atmosphere of intellectual challenges, once axiomatic
concepts — the existence of physical reality, the eternal nature of moral
truths — were suddenly open to question.
Bishop Berkeley’s 1710 Treatise
Concerning the Principles of Human Knowledge contended that reality
consisted not of material objects but of God and minds whose perception of
seemingly substantive reality, he argued, was indeed reality. Gottfried
Wilhelm Leibniz, the late seventeenth- and early eighteenth-century German
philosopher, inventor of early calculating machines, and pioneer of aspects
of modern computer theory, indirectly defended a traditional concept of
faith by positing that monads (units not reducible to smaller parts, each
performing an intrinsic, divinely appointed role in the universe) formed the
underlying essence of things. The seventeenth-century Dutch philosopher
Baruch Spinoza, navigating the plane of abstract reason with daring and
brilliance, sought to apply Euclidean geometric logic to ethical precepts in
order to “prove” an ethical system in which a universal God enabled and
rewarded human goodness. No scripture or miracles underlay this moral
philosophy; Spinoza sought to arrive at the same underlying system of
truths through the application of reason alone. At the pinnacle of human
knowledge, Spinoza held, was the mind’s ability to reason its way toward
contemplating the eternal — to know “the idea of the mind itself” and to
recognize, through the mind, the infinite and ever-present “God as cause.”
This knowledge, Spinoza held, was eternal — the ultimate and indeed
perfect form of knowledge. He called it “the intellectual love of God.”
As a result of these pioneering philosophical explorations, the
relationship between reason, faith, and reality grew increasingly uncertain.
Into this breach stepped Immanuel Kant, a German philosopher and
professor laboring in the East Prussian city of Königsberg.
In 1781, Kant
published his Critique of Pure Reason, a work that has inspired and
perplexed readers ever since. A student of traditionalists and a
correspondent with pure rationalists, Kant regretfully found himself
agreeing with neither, instead seeking to bridge the gap between traditional
claims and his era’s newfound confidence in the power of the human mind.
In his Critique, Kant proposed that “reason should take on anew the most
difficult of all its tasks, namely, that of self-knowledge.” Reason, Kant
argued, should be applied to understand its own limitations.
According to Kant’s account, human reason had the capacity to know
reality deeply, albeit through an inevitably imperfect lens. Human cognition
and experience filter, structure, and distort all that we know, even when
we attempt to reason “purely” by logic alone. Objective reality in the
strictest sense — what Kant called the thing-in-itself — is ever-present but
inherently beyond our direct knowledge. Kant posited a realm of noumena,
or “things as they are understood by pure thought,” existing independent of
experience or filtration through human concepts. However, Kant argued that
because the human mind relies on conceptual thinking and lived experience,
it could never achieve the degree of pure thought required to know this
inner essence of things.
At best, we might consider how our mind reflects
such a realm. We may maintain beliefs about what lies beyond and within,
but this does not constitute true knowledge of it.
For the following two hundred years, Kant’s essential distinction
between the thing-in-itself and the unavoidably filtered world we
experience hardly seemed to matter. While the human mind might present
an imperfect picture of reality, it was the only picture available. What the
structures of the human mind barred from view would, presumably, be
barred forever — or would inspire faith and consciousness of the infinite.
Without any alternative mechanism for accessing reality, it seemed that
humanity’s blind spots would remain hidden. Whether human perception
and reason ought to be the definitive measure of things, lacking an
alternative, for a time, they became so. But AI is beginning to provide an
alternative means of accessing — and thus understanding — reality.
For generations after Kant, the quest to know the thing-in-itself took
two forms: ever more precise observation of reality and ever more extensive
cataloging of knowledge. Vast new fields of phenomena seemed knowable,
capable of being discovered and cataloged through the application of
reason. In turn, it was believed, such comprehensive catalogs could unveil
lessons and principles that could be applied to the most pressing scientific,
economic, social, and political questions of the day. The most sweeping
effort in this regard was the Encyclopédie, edited by the French philosophe
Denis Diderot. In twenty-eight volumes (seventeen of articles, eleven of
illustrations), 75,000 entries, and 18,000 pages, Diderot’s Encyclopédie
collected the diverse findings and observations of great thinkers in
numerous disciplines, compiling their discoveries and deductions and
linking the resulting facts and principles. Recognizing that its
attempt to catalog all reality’s phenomena in a unified book was itself a
unique phenomenon, the encyclopedia included a self-referential entry on
the word encyclopedia.
In the political realm, of course, various reasoning minds (serving
various state interests) were not as apt to reach the same conclusions.
Prussia’s Frederick the Great, a prototypical early Enlightenment statesman,
corresponded with Voltaire, drilled troops to perfection, and seized the
province of Silesia with no warning or justification other than that the
acquisition was in Prussia’s national interest. His rise occasioned maneuvers
that led to the Seven Years’ War — in a sense, the first world war because it
was fought on three continents. Likewise, the French Revolution, one of the
most proudly “rational” political movements of the age, produced social
upheavals and political violence on a scale unseen in Europe for centuries.
By separating reason from tradition, the Enlightenment produced a new
phenomenon: armed reason, melded to popular passions, was reordering
and razing social structures in the name of “scientific” conclusions about
history’s direction. Innovations made possible by the modern scientific
method magnified weapons’ destructive power and eventually ushered in
the age of total war — conflicts characterized by societal-level mobilization
and industrial-level destruction.
The Enlightenment applied reason both to try to define its problems and
to try to solve them. To that end, Kant’s essay “Perpetual Peace” posited
(with some skepticism) that peace might be achievable through the
application of agreed-upon rules governing the relationships between
independent states. Because such mutually set rules had not yet been
established, at least in a form that monarchs could discern or were likely to
follow, Kant proposed a “secret article of perpetual peace,” suggesting
that “states which are armed for war” consult “the maxims of the
philosophers.” The vision of a reasoned, negotiated, rule-bound
international system has beckoned ever since, with philosophers and
political scientists contributing but achieving only intermittent success.
Moved by the political and social upheavals of modernity, thinkers grew
more willing to question whether human perception, ordered by human
reason, was the sole metric for making sense of reality. In the late
eighteenth and early nineteenth centuries, Romanticism — a reaction to the
Enlightenment — esteemed human feeling and imagination
as true counterparts to reason; it elevated folk traditions, the experience of
nature, and a reimagined medieval epoch as preferable to the mechanistic
certainties of the modern age.
In the meantime, reason — in the form of advanced theoretical
physics — began to progress further toward Kant’s thing-in-itself, with
disorienting scientific and philosophical consequences. In the late
nineteenth and early twentieth centuries, progress at the frontiers of physics
began to reveal unexpected aspects of reality. The classical model of
physics, whose foundations dated to the early Enlightenment, had posited a
world explicable in terms of space, time, matter, and energy, whose
properties were in each case absolute and consistent. As scientists sought a
clearer explanation for the properties of light, however, they encountered
results that this traditional understanding could not explain. The brilliant
and iconoclastic theoretical physicist Albert Einstein solved many of these
riddles through his pioneering work on quantum physics and his theories of
special and general relativity. Yet in doing so, he revealed a picture of
physical reality that appeared newly mysterious. Space and time were
united as a single phenomenon in which individual perceptions were
apparently not bound by the laws of classical physics.
Developing a quantum mechanics to describe this substratum of
physical reality, Werner Heisenberg and Niels Bohr challenged long-
standing assumptions about the nature of knowledge. Heisenberg
emphasized the impossibility of assessing both the position and momentum
of a particle accurately and simultaneously. This “uncertainty principle” (as
it came to be known) implied that a completely accurate picture of reality
might not be available at any given time. Further, Heisenberg argued that
physical reality did not have independent inherent form, but was created by
the process of observation: “I believe that one can formulate the emergence
of the classical ‘path’ of a particle succinctly . . . the ‘path’ comes into
being only because we observe it.”
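In modern notation (added here for reference; the formula is not part of the excerpt), the principle bounds the product of the position uncertainty and the momentum uncertainty. In LaTeX, with \Delta x the position uncertainty, \Delta p the momentum uncertainty, and \hbar the reduced Planck constant:

    \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}

Sharpening one quantity necessarily loosens the other: halving \Delta x at best doubles the smallest achievable \Delta p, no matter how refined the instruments.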
The question of whether reality had a single true, objective form — and
whether human minds could access it — had preoccupied philosophers
since Plato. In works such as Physics and Philosophy: The Revolution in
Modern Science (1958), Heisenberg explored the interplay between the two
disciplines and the mysteries that science was now beginning to penetrate.
Bohr, in his own pioneering work, stressed that observation affected and
ordered reality. In Bohr’s telling, the scientific instrument itself — long
assumed to be an objective, neutral tool for measuring reality — could
never avoid having a physical interaction, however minuscule, with the
object of its observation, making it a part of the phenomenon being studied
and distorting attempts to describe it. The human mind was forced to
choose, among multiple complementary aspects of reality, which one it
wanted to know accurately at a given moment. A full picture of objective
reality, if it were available, could come only by combining impressions of
complementary aspects of a phenomenon and accounting for the distortions
inherent in each.
These revolutionary ideas penetrated further toward the essence of
things than Kant or his followers had thought possible. We are at the
beginning of the inquiry into what additional levels of perception or
comprehension AI may permit. Its application may allow scientists to fill in
gaps in the human observer’s ability to measure and perceive phenomena,
or in the human (or traditional computer’s) ability to process vast amounts
of data and identify patterns in it.
The twentieth-century philosophical world, jarred by the disjunctions at
the frontiers of science and by the First World War, began to chart new
paths that diverged from traditional Enlightenment reason and instead
embraced the ambiguity and relativity of perception. The Austrian
philosopher Ludwig Wittgenstein, who eschewed the academy for life as a
gardener and then a village schoolteacher, set aside the notion of a single
essence of things identifiable by reason — the goal that philosophers since
Plato had sought. Instead, Wittgenstein counseled that knowledge was to be
found in generalizations about similarities across phenomena, which he
termed “family resemblances”: “And the result of this examination is: we
see a complicated network of similarities overlapping and criss-crossing:
sometimes overall similarities, sometimes similarities of detail.” The quest
to define and catalog all things, each with its own sharply delineated
boundaries, was mistaken, he held. Instead, one should seek to define “This
and similar things” and achieve familiarity with the resulting concepts,
even if they had “blurred” or “indistinct” edges.
Later, in the late twentieth
century and the early twenty-first, this thinking informed theories of AI and
machine learning. Such theories posited that AI’s potential lay partly in its
ability to scan large data sets to learn types and patterns — e.g., groupings
of words often found together, or features most often present in an image
when that image was of a cat — and then to make sense of reality by
identifying networks of similarities and likenesses with what the AI already
knew. Even if AI would never know something in the way a human mind
could, an accumulation of matches with the patterns of reality could
approximate and sometimes exceed the performance of human perception
and reason.
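Wittgenstein’s “family resemblances” translate naturally into similarity-based classification. The following minimal sketch (in Python; the feature lists, labels, and query are hypothetical toy data, not any particular system described in the text) assigns a new observation to whichever known pattern it most resembles, without ever stating a sharp definition of the category:

    from collections import Counter
    from math import sqrt

    def vectorize(text):
        """Represent a description as word-count features."""
        return Counter(text.lower().split())

    def cosine(a, b):
        """Cosine similarity between two sparse count vectors."""
        dot = sum(a[w] * b[w] for w in set(a) & set(b))
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    # Labeled examples stand in for patterns already "seen" in training data.
    examples = {
        "cat": vectorize("whiskers fur tail purr paws"),
        "boat": vectorize("hull sail tide harbor anchor"),
    }

    query = vectorize("soft fur long whiskers quiet purr")
    best = max(examples, key=lambda label: cosine(query, examples[label]))
    print(best)  # "cat": the densest overlap of features, not a definition

The query is judged a “cat” not because it satisfies a definition but because it shares the most features with prior examples: a network of overlapping similarities with “blurred” edges, much as Wittgenstein described.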
The Enlightenment world — with its optimism regarding human reason
despite its consciousness of the pitfalls of flawed human logic — has long
been our world. Scientific revolutions, especially in the twentieth century,
have evolved technology and philosophy, but the central Enlightenment
premise of a knowable world being unearthed, step-by-step, by human
minds has persisted. Until now. Throughout three centuries of discovery and
exploration, humans have interpreted the world as Kant predicted they
would according to the structure of their own minds. But as humans began
to approach the limits of their cognitive capacity, they became willing to
enlist machines — computers — to augment their thinking in order to
transcend those limitations. Computers added a separate digital realm to the
physical realm in which humans had always lived. As we are growing
increasingly dependent on digital augmentation, we are entering a new
epoch in which the reasoning human mind is yielding its pride of place as
the sole discoverer, knower, and cataloger of the world’s phenomena.
While the technological achievements of the age of reason have been
significant, until recently they had remained sporadic enough to be
reconciled with tradition. Innovations have been characterized as extensions
of previous practices: films were moving photographs, telephones were
conversations across space, and automobiles were rapidly moving carriages
in which horses were replaced by engines measured by their “horsepower.”
Likewise, in military life, tanks were sophisticated cavalry, airplanes were
advanced artillery, battleships were mobile forts, and aircraft carriers were
mobile airstrips. Even nuclear weapons maintained the implication of their
moniker — weapons — when nuclear powers organized their forces as
artillery, emphasizing their prior experience and understanding of war.
But we have reached a tipping point: we can no longer conceive of
some of our innovations as extensions of that which we already know. By
compressing the time frame in which technology alters the experience of
life, the revolution of digitization and the advancement of AI have produced
phenomena that are truly new, not simply more powerful or efficient
versions of things past. As computers have become faster and smaller, they
have become embeddable in phones, watches, utilities, appliances, security
systems, vehicles, weapons — and even human bodies. Communication
across and between such digital systems is now essentially instantaneous.
Tasks that were manual a generation ago — reading, research, shopping,
discourse, record keeping, surveillance, and military planning and
conduct — are now digital, data-driven, and unfolding in the same realm:
cyberspace.
All levels of human organization have been affected by this digitization:
through their computers and phones, individuals possess (or at least can
access) more information than ever before. Corporations, having become
collectors and aggregators of users’ data, now wield more power and
influence than many sovereign states. Governments, wary of ceding
cyberspace to rivals, have entered, explored, and begun to exploit the realm,
observing few rules and exercising even fewer restraints. They are quick to
designate cyberspace as a domain in which they must innovate in order to
prevail over their rivals.
Few have thoroughly understood what exactly has occurred through this
digital revolution. Speed is partly to blame, as is inundation. For all its
many wondrous achievements, digitization has rendered human thought
both less contextual and less conceptual. Digital natives do not feel the
need, at least not urgently, to develop concepts that, for most of history,
have compensated for the limitations of collective memory. They can (and
do) ask search engines whatever they want to know, whether trivial,
conceptual, or somewhere in between. Search engines, in turn, use AI to
respond to their queries. In the process, humans delegate aspects of their
thinking to technology. But information is not self-explanatory; it is
context-dependent. To be useful — or at least meaningful — it must be
understood through the lenses of culture and history.
When information is contextualized, it becomes knowledge. When
knowledge compels convictions, it becomes wisdom. Yet the internet
inundates users with the opinions of thousands, even millions, of other
users, depriving them of the solitude required for sustained reflection that,
historically, has led to the development of convictions. As solitude
diminishes, so, too, does fortitude — not only to develop convictions but
also to be faithful to them, particularly when they require the traversing of
novel, and thus often lonely, roads. Only convictions — in combination
with wisdom — enable people to access and explore new horizons.
The digital world has little patience for wisdom; its values are shaped by
approbation, not introspection. It inherently challenges the Enlightenment
proposition that reason is the most important element of consciousness.
Nullifying restrictions that historically have been imposed on human
conduct by distance, time, and language, the digital world proffers that
connection, in and of itself, is meaningful.
As online information has exploded, we have turned to software
programs to help us sort it, refine it, make assessments based on patterns,
and to guide us in answering our questions. The introduction of
AI — which completes the sentence we are texting, identifies the book or
store we are seeking, and “intuits” articles and entertainment we might
enjoy based on prior behavior — has often seemed more mundane than
revolutionary. But as it is being applied to more elements of our lives, it is
altering the role that our minds have traditionally played in shaping,
ordering, and assessing our choices and actions.
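The sentence completion mentioned above can be made concrete with a deliberately crude model. The following sketch (in Python; the sample text is hypothetical, and production systems use large neural language models rather than bigram counts) suggests in miniature how prior behavior becomes a prediction:

    from collections import defaultdict, Counter

    # A user's prior text stands in for "prior behavior".
    history = "the challenge lies in preserving the frameworks the challenge lies ahead".split()

    # Count which word has tended to follow each word so far.
    following = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        following[prev][nxt] += 1

    def complete(word):
        """Suggest the continuation seen most often after `word`."""
        options = following.get(word)
        return options.most_common(1)[0][0] if options else None

    print(complete("challenge"))  # "lies": the most frequent past continuation

The principle, predicting a continuation from patterns in prior text, is the same one that modern completion systems apply at vastly greater scale.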
