Leibnizian Musings on Intellectual Automation
Software has taken over our lives. From the phones we carry to the planes we fly and the cars we drive, ever more aspects of our lives are quantified and regulated by algorithms. While there is much to be said in favor of intellectual automation, it also presents significant drawbacks that should not simply be dismissed out of hand. While we tend to think of technology as neutral, its effects are powerful and far-reaching. As author Nicholas Carr sets out to demonstrate in much of his work, “automation does not simply supplant human activity but rather changes it, often in ways unintended and unanticipated by the designers.” And yet intellectual automation has become synonymous with progress, has been hailed as a panacea for all of mankind’s problems, and has been embraced unquestioningly. Technology companies have long held out the promise of technological utopias that would emancipate mankind through the delegation of its reasoning processes. Such dreams are in themselves nothing new: the German philosopher and mathematician Gottfried Wilhelm Leibniz (1646-1716), arguably the precursor of the digital age, already evoked their prospect in the seventeenth century. For this reason, a reconsideration of Leibniz’s thought is timelier than ever and may help temper our unmitigated enthusiasm for intellectual automation. Even though he wrote more than three hundred years ago, his insight remains strikingly pertinent.
※
Intellectual automation, when it does not simply take over our thought processes, actively remodels them and rewires our brains to crave instant gratification. Whereas past technologies aimed at overcoming our natural state of distraction, current ones either exacerbate it or flatten our minds, seizing our attention only to scatter it. With intellectual automation, mental exertion has given way to the blind following of instructions – no longer left to our own devices, we are perceived as slow, unreliable and inefficient. Software no longer aids the human intellect but increasingly seeks to dispense with it altogether, often providing its users with ready-made answers, such as pre-programmed text responses, and even taking over ethical decisions.
Our enthusiasm for intellectual automation actually trades on a misconception: that of a basic equivalence between human thought processes and pure computation. This fallacy is generally coupled with the belief that, by delegating routine chores to machines, we are in turn freeing up mental space for higher pursuits. But by measuring human mental abilities against the cold logic and precision of algorithms, we are erroneously equating two very different types of thought processes, reducing and inevitably devaluing the former in favor of the latter. Against the speed and precision of computation, human intelligence appears time-consuming, inefficient and imprecise. As Carr claims, “We exaggerate the abilities of computers as we give our own talents short shrift. We are not doing justice to our uniquely human abilities.”
Not only that, but software also operates at an entirely different level from us and at a pace with which we cannot keep up. We exercise astonishingly little oversight over its inner workings; automation seems to be acquiring a life of its own, one in which man is destined to play an ever-dwindling role. While it gives us the illusion of more freedom, it actually reduces us to following pre-programmed templates. With every additional app, we are placing a filter between ourselves and the world: we no longer experience it except through the mediation of software. The world is only as rich as algorithms will allow it to be. In its increasing ubiquity, software has become “the very stuff out of which man builds his world,” in the words of the MIT computer scientist and father of the early “ELIZA” program, Joseph Weizenbaum. Human cognition and existence are increasingly measured against the benchmark of automation. We are reconstructing the world according to a frictionless virtuality in which creative processes have little place. Already in The Human Condition, the German-Jewish philosopher Hannah Arendt voiced the concern that we risked becoming subordinated to our machines, of “being ‘adapted’ to their requirements instead of using them as instruments for human needs and wants.” Intellectual automation exemplifies a case in which our tools no longer serve as mere extensions of ourselves, enhancing our abilities and freedom. Designed to emancipate and empower us, they end up restricting and stifling us.
The ideal of purely algorithmic processes already lay at the heart of Leibniz’s intellectual endeavor. Leibniz was enthusiastic about machines’ potential to assist mankind. He himself devised a calculating machine that could multiply and divide, and more generally he upheld the benefits that would accrue from the use of computational devices:
And now that we give final praise to the machine we may say that it will be desirable to all who are engaged in computations which, it is well known, are the managers of financial affairs, the administrators of others’ estates, merchants, surveyors, geographers, navigators, astronomers… Also the astronomers surely will not have to continue to exercise the patience which is required for computation… For it is unworthy of excellent men to lose hours like slaves in the labor of calculation which could safely be relegated to anyone else if the machine were used.
Leibniz envisaged machines as useful aids that would help men save time. In fact, much of his thought and work during his lifetime preoccupied itself with the mechanization and automation of human reasoning, notably through his invention of a universal philosophical language. Unshakable in his faith in the power of logical calculation and inspired by the work of Ramon Llull, John Wilkins and George Dalgarno, Leibniz set out to create his own artificial language, a universal language; but unlike his predecessors, he envisaged his “universal characteristic” as a device for perfecting our reasoning. Already in his teens, Leibniz had conjured up a “combinatorial art” in his 1666 Dissertatio de arte combinatoria. In addition to providing a means of universal communication, this language would serve philosophical purposes and help perfect the human mind. Imperfect human reasoning would give way to formal and symbolic processes. Through the manipulation of signs according to certain pre-determined rules, thought would be reduced to a “species of calculus.” Such “blind thought” would accomplish “without any work of imagination or effort of the mind” and “in all matters of things, what arithmetical and algebraic symbols accomplish in mathematics.”
This philosophical language would be free of the imperfections inherent to natural languages, such as imprecisions, obfuscations and contradictions, and would render truth “stable, visible, and irresistible, so to speak, as on a mechanical basis.” We would not be able to “err even if we wished to.” This new method would help “ground knowledge… upon a new, more secure foundation” by serving “among the most suitable instruments of the human mind, having namely an invincible power of invention, of retention and judgment.” Not only would this language help cement what was already known and help solve disputes, it would also help make new discoveries. Ideally, the deployment of this Ars characteristica would culminate in some type of “universal science.” It would help uncover the world’s rational structure, itself a result of God’s computation, and would render it computable. Knowledge would be mapped and homogenized, just as Google sets out to do today. Both intellectual endeavours would ideally help bring about a general science akin to H. G. Wells’s 1938 “world brain” – a “widespread world intelligence conscious of itself.”
In their ubiquity, software and computation have “reenchanted” the world by seeking to remake it. The sacred has found a new dwelling place in our secularized postmodernity in the shape of a technological-digital teleology. As a result, the temptation to mistake pure computational programming for actual sophisticated talent is very real and wields a powerfully seductive hold over us. Author David Rose envisions a future where “technology infuses ordinary things with a bit of magic to create a more satisfying interaction and evoke an emotional response.” Joseph Weizenbaum himself sought to dispel the magical aura as early as 1966. In his paper “ELIZA—A Computer Program For the Study of Natural Language Communication Between Man and Machine,” he remarked:
For in those realms machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible.
Three hundred years ago, in one of his more prescient comments, Leibniz had already exposed the essential emptiness underlying computation in his famous “Mill” argument. In this thought experiment, which involves entering a mill, he contrasts a machine’s seemingly perceptive and conscious overt behavior with the actual void of its internal, purely physical, mechanism: “That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception. Thus it is in a simple substance, and not in a compound or in a machine that perception must be sought for.” Those machines and programs that dazzle us are fundamentally empty. What Leibniz had grasped already back then was that computational devices lack a true understanding of the world. Only “machines of nature,” such as human beings – as opposed to artificial machines, such as clocks – displayed the infinite nestedness that yielded the “true unity which corresponded to what is called I.”
While humans would eventually delegate repetitive and mind-numbing tasks to machines, Leibniz never imagined that humans themselves would be supplanted by these tools. He envisaged mechanization and automation as starting points, never as ends in themselves: tools were not designed to abolish human effort, perseverance or creativity but to empower man and help him unleash his inner potential. The process itself – rather than the end-result – was what mattered. Man, made in the divine image, always remained at the center of his preoccupations. Leibniz’s formalism and broader epistemological project were thus integral to a project of active self-transformation and refinement, not passivity. Our natural state was one of constant distraction; each soul contained and perceived the whole universe, albeit “confusedly.” Only God had a “distinct knowledge of everything, as he was its source.” As Leibniz remarked:
It is not surprising that, in the struggle between flesh and spirit, spirit so often loses, because it fails to make good use of its advantages. This struggle is nothing but the conflict between different endeavours – those that come from confused thoughts and those that come from distinct ones.
According to Leibniz, man had not been born a “blank slate” (pace Locke) but had been endowed by God with the tool of logic. Reason would provide us with the thread by which to “triumph nobly” over this natural state of distraction and “become masters of ourselves.” In his Philosopher’s Confession of the early 1670s, Leibniz explained that “people desire to perform sinful acts because their passions have clouded their intellects.” Fortunately, “in the middle of the shadow, like a gleam of light through a crack, the way of evading it is in our power, so long as we will ourselves to do so.” By actively training our minds, notably through the practice of mathematics, we could hope to gain mastery over ourselves and increase our agency in the world.
What Leibniz had perhaps already discerned was this: ultimately, human intuitive intelligence and pure computation, while yielding similar results, pertained to divergent pathways and were qualitatively different. Even Alan Turing, arguably the father of modern computer science, would in his time emphasize the limits of pure computation and uphold the crucial – if not superior – importance and uniqueness of human intuition. While he conceded that the “ingenuity” of programmers would gradually displace it over time, intuition allowed for “spontaneous judgments which are not the result of conscious trains of reasoning.”
※
Leibniz is generally portrayed as an arch-rationalist, a dry logician, and yet he placed special emphasis on the uniqueness of the human mind. Rules should not cage intellection but constitute its starting point: from them, an infinite number of combinations, variations and nuances could then emanate. His is a world of perpetual and spontaneous unfolding, of unending progress continuously tending towards – but never reaching – a beatific vision; it is a world anathema to the streamlining of reality according to sanitized algorithms. Ideally, finite conscious computation should not constitute the be-all and end-all of man’s intellection but blossom into intuitive knowledge. The latter, contrary to what the French philosopher René Descartes had asserted in his Discourse on the Method, represented the apex of knowledge, according to Leibniz.
The key lay in making rational behaviour so deeply ingrained through practice that it became “second nature.” In this manner, the highest freedom accrued not from the lack of rationality, but from its sublimation. Paradoxically, it was in its ability to surpass and go beyond the purely computational that human thought revealed itself in its divine image.
In a rarely quoted passage on musical ability, Leibniz elaborates on the nature of this unconscious and tacit process. In it, he makes the link between theory and practice, as well as between the impression of rules upon the mind and the freedom that derives from it:
I have already explained that there are things which depend rather on the play of imagination and on spontaneous impressions than on reason and that in such things we need to form a habit, as in bodily exercises and even in some mental exercises. That is where we need practice in order to succeed.
… We see excellent geniuses succeeding in their first attempt within the profession they apply themselves to, and by virtue of their natural judgment put old practitioners to shame. But that is not a usual occurrence, and this is how we must regard it.
… in order to obtain good decisions quickly in an embarrassing situation we have to have an extraordinary power of genius or long enough practice to make us think automatically and habitually what otherwise would require investigation by reason.
In this manner and after intense training, man could hope to develop a sliver of God’s synoptic grasp within a particular realm. Actually, in his letter to Hansch on Platonic Philosophy, Leibniz consecrated pure intellection as the highest form of cognition – above demonstration itself. It had the capacity to “[look] into the connections of truth by a single act of the mind; this belongs to God in all things but is given to us in simple matters only. But the more we grasp in a shorter time, the closer we approach in our demonstrations to this pure intellection.”
Ultimately, Leibniz espoused an optimistic anthropology. He upheld the essential similarity of divine and human minds. He likened rational souls to “little divinities” that mirrored the cosmos from their particular viewpoints; as he famously stated, the human mind differed “from that which is in God only as a drop of water differs from the ocean, or rather as the finite from the infinite.” Man ideally would emulate God in his own deployment of intuitive, infinitely embodied rationality, and in this manner draw closer to him by “produc[ing] something resembling his works, though in miniature.” Indeed, human minds, particularly artists’ minds, were not only “living mirrors or images of the world of beings, but images of the Deity itself, each mind being a kind of deity in its own department.”
Indeed, the perfection of the human mind lay in the “infinity, a footprint, indeed an image of the omniscience and omnipotence of God,” which it encapsulated. As he noted, perfection did not consist in “addition” or “composition” but was characterized “by the negation of limits.” Similarly, in his mathematical work, he labored against the misconception that the infinite simply consists in the sum of finite units. Leibniz was intent on preserving the incommensurability – and underlying Godliness – of the infinite. In the same way, the human mind could not be reduced to finitude; it contained “a kind of intelligible world within itself.”
Therefore, while current intellectual automation and Leibniz’s project both purport to improve the lot of mankind and may strike us as similar at first glance, they serve divergent aims and hark back to contrasting conceptions of man and his place in the world. Paradoxically enough, while Leibniz sought to empower man, intellectual automation predominantly seeks to divest him of agency and infantilise him. It invariably projects man as defective and seeks to usurp his place in the world bit by bit. By measuring ourselves against its standards, instead of our own, we have exacerbated our “[shame] to have been born instead of made,” in the words of the twentieth-century German thinker Günther Anders.
Although imperfect and limited, man remained for Leibniz God’s image and existed as a member of the “Republic of Spirits.” The thinker thus maintained his faith in man’s ability not to succumb to despair before an apparently meaningless universe. Indeed, God could never be displaced from a world in which man was born to marvel at and celebrate the beauty of creation, a world irreducible to the brute logic of pure computation. Great thinkers are often prophets, and while Leibniz wrote three hundred years ago, he still has something to teach us.
Audrey Borowski is a doctoral student at the University of Oxford, where she is also the founder and convener of the TORCH research network “Crisis, extremes and Apocalypse.”