
Current Problems

B. G. KUZNETSOV

Philosophy of Optimism

PROGRESS PUBLISHERS MOSCOW

Translated from the Russian by Ye. D. Khakina and V. L. Sulima

CONTENTS

B. G. Kuznetsov, Filosofiya optimizma (the Russian original)

Part One
EPISTEMOLOGICAL OPTIMISM

Cognoscemus! ............ 7
Optimism, Being, Motion ............ 27
Initial Conditions ............ 37
The "Is" and the "Ought to Be" ............ 42
Optimism and Immortality ............ 53
Labour and Freedom ............ 60
The Problem of Old Age ............ 66

Part Two
SCIENCE IN THE YEAR 2000

Why the Year 2000? ............ 74
The Age of Einstein ............ 94
The Atom ............ 114
Quantum Electronics ............ 130
Molecular Biology ............ 142
Cybernetics ............ 158
"Know-How" and "Know Where" ............ 178
De Rerum Natura ............ 199
High-Energy Physics ............ 231
Space ............ 240
Post-Atomic Civilisation ............ 248

Part Three
ECONOMIC CONCEPTION OF OPTIMISM

Integral Goals of Science ............ 269
Science and Economic Dynamics ............ 284
Inter-Branch Information ............ 300
Forecasts of Understanding and Forecasts of Reason ............ 309
Econometry of Optimism ............ 323

First printing 1977. © Translation into English, Progress Publishers, 1977.

Printed in the Union of Soviet Socialist Republics


PART ONE EPISTEMOLOGICAL OPTIMISM

COGNOSCEMUS!

Old Russian legends often feature a knight who pauses before a sign at a crossroads, reading "If you go right... If you go left...", with one side of the inscription threatening misfortune and the other promising success. Today mankind is at a similar crossroads. One "side of the sign" portends atomic war and the destruction of civilisation. This "side of the sign" involves a great many documents, research papers and novels (among them a novel by Nevil Shute, written in 1957, about the destruction of mankind as a result of an atomic war in 1962-63). The other "side", the optimistic one, predicts an unprecedented flourishing of culture and welfare, a substantial lengthening of life, the stamping out of disease, an unparalleled rise in intellectual and moral standards. It should be noted that here, too, the laconic legend is replaced by an incredible mass of information. Optimistic prognostication includes a host of genres, philosophical and sociological generalisations alternating with economic curves, technological diagrams, physical formulas.... But there is one peculiarity that distinguishes modern optimistic prognosis not only from the legendary portent, but from former numberless attempts to predict the future of mankind. Modern prognosis, in its optimistic and pessimistic aspects alike, is formulated in what English grammar calls the open condition. It is not unequivocal; there is no fatalism about it: the events predicted will happen if mankind starts right now making provisions for them, taking the necessary measures to create the initial conditions for certain developments. Forecasting becomes the starting point of planning intended to realise initial conditions ensuring an optimal prognosis, an optimal course of further development. For this reason the modern epoch differs from the past particularly in that Man thinks more about the future and that Man's thinking has become unprecedentedly prognostic. Now, as never before, ideas about the future and the integral destiny of the world have become a necessary element in man's purposive activity. The concept of planning is merging with that of the ideal, and accordingly the concept of optimism, which is a synthesis of the "is" and the "ought to be", their correlation coefficient, passes from emotion to will, and tends not only to will but to knowledge, to the establishment of the "is" and the discovery in this "is" of that which leads to the "ought to be".

Optimism, however, by no means loses its emotional nature, the sensation of optimistic joy never disappearing. Quite the reverse, the emotional content of optimism becomes deeper, richer and more complex. It involves not only expectation of better things to come, but conviction of their causality. The establishment of such causality, or, in other words, the development of science, becomes inseparable from emotional uplift.

Science meets emotions halfway. The very science that called into being atomic energy, quantum electronics and molecular biology is becoming incomparably more emotional than it was in the past. Non-classical science has, in fact, brought mankind to the crossroads: "If you go right... If you go left...". Non-classical science---the theory of relativity, quantum mechanics and all that they gave rise to---clearly and distinctly demonstrates the unity of scientific, moral and esthetic ideals. The value of cognition, i.e., its impact on man's life, its economic, social, cultural, moral and esthetic effect, is becoming a necessary element of the development of knowledge. In this sense the atomic age witnesses a fusion of the criteria of truth and value. A synthetic world-outlook, wherein truth is inseparable from moral and esthetic ideals, is becoming a necessary precondition of scientific, social and economic progress.

The philosophy of optimism begins with optimism attributed to philosophy itself, with the assertion of its immortality, revealing those invariants whose existence is the other side of preservation, in this case the preservation of a fundamentally general flow of cognition, of a single picture of the evolutionary world and of Man in it, with the assertion of the value of such a picture.

The very possibility of a historically developing philosophy has been debated since ancient times. The "existence theorem" of the history of philosophy was not a simple one. Philosophy must be truth, but truth is one, and if it is to remain such, it cannot change. Thus the concepts of philosophy and history seemed to exclude each other. Indeed, within the framework of dogmatic philosophy each school saw a certain evolution behind it, a history of errors through which truth was making its way, to be finally attained within the given system, leaving to the future only details to be elaborated and arguments to be accumulated. Kant's philosophy remained dogmatic in that it denied meaning to those questions that were answered by the content of the historically developing concepts of the world. Hegel's philosophy included history, but excluded prediction. Self-knowledge of the absolute spirit concludes the radical transformation of cognition and its object. True dialectical philosophy must regard knowledge as a historically developing reflection of infinitely developing being, and incorporate not only its own history, but also its further evolution, infinite in principle, its prospects, its "futurology".

Does that mean that philosophy has no invariant, immutable content? No, it does not. The very concept of change loses its meaning without the positing of a historically invariant subject of change, one identical with itself. What, then, are the invariants of philosophy?

First and foremost, these are the conflicts of being, knowledge and value. The conflicts of being are the inseparability of discreteness and continuity, and the inseparability of the local "here-now-being" and the infinite "beyond-here-now-being". The conflict of knowledge is the inseparability of the empirical and rational cognition of the world. The conflict of value is the axiological conflict of the "is" and the "ought to be", to be discussed further on. Philosophy and science solve these conflicts, at the same time preserving them to address them to the future, seeing in the invariance of these conflicts their immortality. But in respect of philosophy optimism does not limit itself to such a conviction. Optimism discovers the sources of philosophical conflicts in the objective conflicts of the world, and the means of solving these philosophical problems in the knowledge of the objective world, which is inseparable from its transformation. In the transformation of the world lies the value of philosophy. Conviction of the immortality and value of philosophy is based on certainty of the immortality and value of the world, of the possibility of its cognition and transformation, of the possibility of Man's ideals being objectified as a result of the cognition and transformation of the world. Thus optimism directed at philosophy follows from optimism directed at its object, the world. The value of knowledge follows from the knowledge of value. The underlying idea of modern optimism, linked with modern science and with the prediction of its economic, ecological, social, moral and esthetic effect, is the existence of a certain ordering, an objective ratio, an objective value in the world itself. It is the knowledge of this rational structure of the world, comprehensible to reason, the knowledge of its objective value, that is the foundation of the value of knowledge, the source of the effectiveness of science, of the transformation of the world, of the realisation of man's ideals.

Epistemological optimism, following from the assertion of the ordering of the world and its comprehensibility, has become a condition of and a factor in the acceleration of scientific and technical progress. This role is now played by dynamic epistemological optimism---the idea of infinite knowledge, not restricted by any absolute limits, of the transformation, clarification and particularisation of true concepts of the world, and of the transformation of the fundamental concepts themselves.

Already classical science and classical philosophy supplied a clear and profound motivation for epistemological optimism. Hegel put forth decisive theoretical arguments against agnosticism, and 18th and 19th century science, in fact, applied epistemological principles which excluded agnostic pessimism. By the end of the 19th century there existed a dialectical philosophy of science, which generalised not only its results, but also its dynamics, motion and transformations. However, in classical science the transformation of fundamental principles was only sporadic; it was rarely repeated within the life-span of a single generation, and conclusions concerning the unlimited and entirely unrestricted development of science could be drawn only at a very high level of abstraction.

Non-classical science follows a different course of development. In it a revision of fundamental principles becomes not only a permanent background, but a condition and a component of the continuous progress in comprehending the world and transforming the entire civilisation on the basis of the new concepts.

In non-classical science, each big concrete step in fundamental research proves that scientific progress is unlimited, that knowledge is unbounded. The absence of such limits has ceased to be an abstract problem removed from the actual content and tempo of scientific progress. Infinite progress became, in the sense of Hegel's philosophy, true infinity, reflected in each finite link and thus acquiring a local meaning. It is not a question of whether scientific progress will end in a billion years, but of something quite different: whether science has any problems insoluble in principle, or whether, on the contrary, the absence of such limits is connected with the modern development of science, with its actual tendencies, its local steps and local episodes. An analogy with the local demonstration of the limitlessness and infinity of world space may not be inappropriate here. In its time the problem of the infinity or finitude of the world did not allow a solution on the basis of local statements concerning a given point of space. However, in 1854 Riemann perceived that the problem could be solved locally: if the space surrounding us is positively curved, it is finite; if the curvature is zero or negative, space is infinite. This analogy does not lay claim to anything more than a simple explanation of the modern situation, in which the style of science, the permanent and essentially continuous change of its ideals, makes the presumption of infinite progress and epistemological optimism a direct outcome of the development of science and often a condition and stimulus of such development. Non-classical science neither created nor discovered epistemological optimism, but it formulated it explicitly and clearly. Moreover, non-classical science transformed epistemological optimism into a component of scientific and technical, economic and social optimism.
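Riemann's local criterion invoked above admits a compact modern restatement. The following is a minimal sketch under the standard assumptions of constant sectional curvature and a simply connected space; these assumptions, and the volume formula, are supplied here for illustration and are not part of the original text:

```latex
% Three-dimensional spaces of constant sectional curvature K
% (the simply connected model spaces):
%   K > 0 : the 3-sphere S^3          -- finite total volume;
%   K = 0 : Euclidean space  R^3      -- infinite in extent;
%   K < 0 : hyperbolic space H^3      -- infinite in extent.
K > 0 \;\Longrightarrow\; V = 2\pi^{2}R^{3} < \infty,
  \quad R = \frac{1}{\sqrt{K}};
\qquad
K \le 0 \;\Longrightarrow\; V = \infty .
```

A measurement of curvature at a single point thus decides, in principle, between a finite and an infinite universe, which is exactly the "local" solubility described above.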

Let us consider the relatively partial and narrow form of epistemological pessimism propounded in 1872 by Du Bois-Reymond in his speech "On the Limits of the Knowledge of Nature". In this speech Du Bois-Reymond uttered his pessimistic formula: "Ignoramus!" ("We do not know"), and further, "Ignorabimus!" ("We shall never know"): we shall never know the nature of things, rerum natura. "What is the nature of the atom? What is the nature of perception?" asks Du Bois-Reymond. There is but one answer to these questions: we do not know and we shall never know. This conception follows from a certain notion of the nature of knowledge. Du Bois-Reymond identifies the progress of science with the establishment of the mechanical basis of phenomena: the positions, velocities and accelerations of bodies, i.e., their behaviour, and the forces which, in their turn, are determined by the positions and velocities of the bodies. Thus the field of research is limited to the two problems formulated by Newton in his Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy): determination of the positions of bodies from forces, and determination of forces from the positions of bodies. The subsequent development of science added to these problems the determination of forces by velocity, as forces proved to depend on the motion of bodies. Such a definition yields a mechanistic picture of the world in which there is nothing but moving bodies and their interaction, forces. But the motion and interaction of particles cannot explain what a particle is.

Classical science and classical philosophy (in so far as it generalised the dynamics of science) found a way out of this pessimistic cul-de-sac. Already in the 19th century the goals of cognition were no longer reduced to the detection of the motion and interaction of bodies as the ultimate explanation of rerum natura. Non-classical science went further. Not only did it revise, restrict and in general reject the notion of cognition as the reduction of the processes of nature to the behaviour of simple elements, but it ascribed to these elements an infinitely complex nature. The particle possesses a contradictory corpuscular and wave nature. This basic statement of quantum mechanics is only the beginning of a further transition to an ever more complex notion of the world. Characteristic of non-classical science today are attempts to construct elementary particles out of particles of considerably greater mass, or to present the very existence of particles as a result of their interaction. Whatever the destiny of concrete hypotheses may be, the general tendency of non-classical physics is to present the particle as an infinitely complex reflection of the infinitely complex world.

This tendency transcends the picture of the world in which the existence of a particle is reduced to its behaviour. It can be shown that herein lies the conflict of classical science; raising it to an absolute brought Du Bois-Reymond to his "Ignorabimus".

The train of Du Bois-Reymond's thinking is this. He examines the mechanistic explanation of the world in its ideal form---Laplace's supreme reason that knows the positions and velocities of all particles in the Universe and can predict all of its future, up to the day when the cross crowns Hagia Sophia. But, asks Du Bois-Reymond, will this bring science closer to answering the question: "What is a particle?"

That is the age-old problem of the classical mechanistic conception of the world. Descartes expressed this conception in almost perfect form, thereby approaching its limits. He singled out the body from its environment, ascribing to it only behaviour, only motion: it is this behaviour, this motion, that determines the body's existence, individualises it and distinguishes it from the surrounding space. But what is it that moves, what is the subject of motion, or, if we are to consider the problem in its atomistic aspect, what is an atom? Descartes, Newton and all of classical science relegated such questions from the sphere of natural science, from physics to metaphysics. It was Spinoza who denied the existence of any reality other than the spatial, any world other than a physical world subject, in principle, to scientific explanation. But science in the proper sense of the word, that is, science based on experiment and mathematical analysis, did not consider the problem of the subject in statements like "a body is moving", and however closely the picture of the world might approach Laplace's ideal, the problem of the subject could not be solved, and answers to questions like "What is a particle?", "How does it differ from the space it occupies?", "How is a body to be distinguished from position?" could not be found within the mechanistic conception of the world.

Of a similar kind is the problem of the reasons for this or that behaviour of a body. Classical field theory demonstrated how forces change, how they are affected by the spatial arrangement of their sources, by the motion of those sources, by changes in other forces (the effect of changes of electrical forces on magnetic forces, and vice versa); yet it came no closer to answering the question of what force is. Neither was the ether theory an answer to this question; it dealt with the motion of the ether, reducing gravitational and electrical forces to ethereal impulses, but the direct impulse explained the substance of force just as little as did action at a distance.

A second reason for "Ignorabimus" is the impossibility of obtaining a definition of consciousness, or even of the primary psychic act---sensation---however detailed the knowledge might be about the motion of the atoms making up the brain. Suppose a picture of the molecular motions in the brain were unfolding itself before our eyes. Would we be any closer to a definition of sensation or consciousness as distinct from the motion of matter? We are not crossing this threshold, and, so far as one can see, we shall never cross it. Here, as in the case of atoms and forces, no amount of detailed information about the atomic motion involved in psychic acts will reveal the substantial distinction between sensation and motion. In the opinion of Du Bois-Reymond, here again we deal not only with "Ignoramus!" but with "Ignorabimus!" as well.

Most critics of Du Bois-Reymond rejected the limits of knowledge on the plea that describing the behaviour of atoms is, as a matter of fact, explaining their nature, and that the description of the behaviour of brain atoms presents an ideal explanation of psychical life, beginning with sensations. This was criticism from a mechanistic standpoint, as the manifestations of a thing in its behaviour were taken to be its essence.

The proposition that rerum natura, substance, the essence of things, can be reduced to the movements of atoms was incompatible with an epistemology which had developed from generalisations based upon the evolution of science, its progress and transformation. The greatest 19th-century discoveries in natural science demonstrated that Laplace's supreme reason, far from cognising the existence of atoms as distinct from space, would not have been able to cognise motion itself, as the latter embodies the contradiction between the reducibility and non-reducibility of higher forms of motion to the elementary movements of atoms. Even if Laplace's supreme reason had known the positions and velocities of all the particles of the Universe, as Du Bois-Reymond puts it, it would not have been able to understand the essence of the atom or the essence of thinking. Nor, by reducing the picture of the world to the movements of atoms, would it have been able to explain a number of laws of the world that are most obviously within the limits of knowledge. Laplace's supreme reason, knowing the positions and velocities of atoms, would not have been able to understand the political and military reasons that led to such events as the replacement of the crescent by the cross on Hagia Sophia, even had it solved all the differential equations describing the motion of the particles making up the crescent, the cross and the organisms of the men involved in that replacement. But that is not all. Laplace's supreme reason, with its mechanics of atoms, would not have been able to comprehend the essence of entropy, the essence of irreversible thermodynamic processes, or the simple fact that heat is not transferred from a cold body to a hot one. There is no entropy in atomic mechanics; it appears only in statistical ensembles of molecules, demonstrating the non-reducibility of more complex forms of motion, such as heat, to a simpler form, the mechanics of the atom. Dialectics as a theory of knowledge changes the very definition of cognition. Knowledge is unlimited not because it is capable of embracing at any given moment the whole of the infinite complexity of substance, reducing it to the motion of atoms. Knowledge is unlimited and infinite not in the sense of actual infinity, but in a different sense: it is potentially infinite, and the absence of absolute limits stands for this kind of infinity. Infinite knowledge reveals the inseparability of forms of motion from more elementary ones and likewise their non-reducibility to those more elementary forms. In this sense "the limits of knowledge" turned out to be the limits of specific laws, replaced beyond them by other laws characteristic of other forms of motion. These specific limits proved to be transitions from comparatively simple laws to more complex ones, from elementary processes to ever more concrete and contradictory ones.
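The point made above, that entropy belongs to statistical ensembles rather than to any single, fully specified mechanical state, can be sketched with the Gibbs-Shannon formula. This is a minimal illustrative sketch; the function name and the toy distributions are ours, not the book's:

```python
import math

def gibbs_entropy(probs, k=1.0):
    """Gibbs-Shannon entropy S = -k * sum(p * ln p) of a probability
    distribution over microstates; terms with p == 0 contribute nothing."""
    return -k * sum(p * math.log(p) for p in probs if p > 0)

# A single, fully specified mechanical microstate is a delta distribution:
# its entropy is zero -- "there is no entropy in atomic mechanics".
s_microstate = gibbs_entropy([1.0, 0.0, 0.0, 0.0])

# A statistical ensemble spread uniformly over four microstates has
# entropy k * ln 4 > 0 -- entropy appears only at the ensemble level.
s_ensemble = gibbs_entropy([0.25, 0.25, 0.25, 0.25])
```

Here `s_microstate` vanishes while `s_ensemble` equals ln 4, however precisely the individual trajectory is known, which is exactly the irreducibility the text describes.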

Nineteenth-century science interpreted elementary processes as the motion of atoms subject to Newtonian laws. At the beginning of the 20th century, however, science faced an unexpectedly new limit, which also proved to be relative and transitory, but which was no longer a specific limit: it was a general limit of the type of cognition that reduced the picture of the world to atomic movements subject to Newtonian mechanics. This general limit revealed itself in facts which contradicted Newtonian mechanics, the postulate of the independence of mass from motion, and the continuity of the field. In classical science complex processes were not reduced to elementary ones, but the very existence of elementary processes was never doubted. Neither were the classical laws governing these elementary processes. Now these laws were found to be mere approximations. Specific limits were the basis for the classification of the sciences, whereas general limits were the basis for their periodisation. The classical era in science was over. A new epoch began. Not only did Newtonian mechanics---the theory of elementary processes in the classical picture of the world---prove to be inexact, but the very concept of elementary processes became relative. Processes and objects that are elementary according to the modern view of the world turn out to be the most complex and mediated, and require for their explanation references to the infinitely complex structure of the world as a whole.

Geoffrey F. Chew identified one of the characteristic features of non-classical science as "the crisis of the elementary concept". He had a concrete concept in view, the existence of a certain type of particles being dependent on their interaction; but "the crisis of elementariness", the denial of reducibility to elementary, simple processes and objects, essentially characterises non-classical science as a whole, whatever the destiny of concrete physical concepts.

In fact, an "elementary" particle appears to be the most complex physical object, the ever-growing complexity of physical conceptions being a most general epistemological feature. The modern concept of knowledge is distinguishable from the concept of cognition that results either in an absolute "Ignorabimus" or in an absolute comprehension of the world (absolute and complete in both cases). The former concept differs from the latter ultimate picture as much as modern dynamics differs from Aristotelian cosmology, in which the "natural movement" of bodies ended in the "natural places" of those bodies.

The dynamic optimism of cognition, the concept of its fundamentally infinite nature, the paramount significance of differential criteria---the speed and acceleration of science---which replace the illusion of the realised and completed epistemological ideal, are, as we shall attempt to demonstrate, the source of the dynamic effect of contemporary non-classical science on civilisation, and the basis of dynamic scientific, technological and economic optimism.

Non-classical science rejects not only restricted and ultimate conceptions but also norms and methods of cognition given once and for all. Non-classical science has questioned not only fundamental physical principles, but geometrical and logical principles as well. The theory of relativity lent physical content to non-Euclidean geometry. Quantum mechanics provided physical interpretations for logical norms, thereby making them mobile and dependent on experiment. Knowledge liberated itself from all absolute limits. It liberated itself from ontological absolutes---absolute space, absolute time and elementary "bricks of the Universe", autonomous in their existence. It likewise liberated itself from epistemological limitations---the fiction of complete knowledge and of absolute mathematical and logical norms claiming an a priori character. But is infinite cognition an optimistic prognosis?


In his History of Scientific Literature in the New Languages Leonardo Olschki says: "For those used to looking into the root of things, Galileo unravelled an insoluble world mystery and a science infinitely stretching in time and space, whose infinity was to arouse a feeling and awareness of man's solitude and helplessness."* Such a note---a perception of solitude and helplessness---sounded in the complex emotional effect of the 17th-century scientific revolution (just as in the still more complex emotional effect of modern non-classical science). This note could be heard, for instance, in Pascal. However, the total emotional effect of scientific transformations liberating knowledge from limits and absolutes is different and largely the opposite: it is optimistic.

In this book the epithet "optimistic" does not have an emotional meaning, or at any rate not an entirely emotional one. It is associated with the goals of science, denoting a correlation between the forecast and the goal that science sets itself at the given moment. The goals of science will be dealt with in another essay. For now we would like to make the following points. The goal of classical science was often taken to be the ideal of complete knowledge. Indeed, the unattainability of this ideal, considered as the absolute limit of science, was the reason for the pessimistic "Ignorabimus" complementing the local statement of "Ignoramus". Now the goals of science are based on its differential criteria. Science does not seek to achieve ultimate truth, but the most rapid and most efficient progress towards truth. We shall see later that this feature of non-classical science determines its economic effect. But let us not digress from the problem of epistemological optimism, opposed both to "Ignoramus" and to "Ignorabimus". As far as "Ignoramus" is concerned, the statement of unresolved problems is now inseparable from the positive statement of "Cognoscimus"---"we know". This "Cognoscimus" has a differential meaning not only in terms of the information obtained, but also in terms of new theoretical and experimental methods of obtaining further information, of its speedier acquisition, and of the emergence of new problems and stimuli for scientific creativity. "Ignoramus" is now inseparable not only from "Cognoscimus", but likewise from "Cognoscemus"---"we shall know".

From the contemporary standpoint, the classical illusion of complete knowledge seems a pessimistic conception, a negative statement: explanation cannot go further without losing its meaning. Here we encounter an extremely curious "castling" of the concepts of pessimism and optimism.

Its essence is as follows.

For dogmatic explanation (in more general terms, for the understanding of science), the source of optimism is the achievement of an ultimate explanation, or the hope of such an achievement. For dynamic explanation (for the reason of science), the prospect of an ultimate solution putting an end to all questions of "why" and "wherefore", terminating the inquisitive, restless line of scientific development, would be a pessimistic one, a pessimistic prognosis. Contrariwise, restlessness, incompleteness and the prospect of an infinite series of new questions are a source of optimism.

Why are the terms "understanding" and "reason" of science appropriate here? The traditional distinction between understanding and reason ascribes to understanding the knowledge of the finite, and to reason that of the infinite. The actual progress of science is impossible unless there exists a synthesis of the laws of understanding explaining a given phenomenon and reason's presumption of further, potentially infinite knowledge of the world. Although in the non-classical epoch the inquisitive voice of "reason", accompanying the "understanding", accompanying the soothing, positive melody of scientific progress, is becoming loud and clear, it does not drown out that positive melody, becoming, as it does, one with it. Today every partial answer is at the same time a question addressed to the entire chain of scientific explanations. We may cite here Einstein's answer to the question posed by the Michelson experiment. His answer involved very general principles, the nature of space and time, that which seemed initial and not subject to further analysis, that which Kant considered to be a priori. But Einstein's conception denied the a priori nature of space and time; it even denied the a priori status of geometric axioms. These axioms are themselves mediated, and the physical explanation of the world's geometry recedes into the distance, into an infinite series of ever new physical statements of fact. In the theory of relativity the infinity of cognition is present in each concrete, local, finite link, in the explanation of experimental results. We are unwittingly brought back to Riemann's idea of the local reflection of infinite space.

* L. Olschki, Geschichte der neusprachlichen wissenschaftlichen Literatur, Bd. 3, Halle (Saale), 1927, pp. 118-19.

The fact of this reflection creates in science a continuous line of questions directed at the future, unresolved conflicts, prognoses and expectations, already discussed above. This line is ever more apparent in contemporary, non-classical science. In its very essence, it is closely related to the emotional component of science and, in particular, to optimism in its psychological aspect, to a certain set of moods and feelings.

This optimism, related to the inquisitive and infinity-seeking component of scientific creativity, is in no way rectilinear and monochromatic; it is tinged with regret for the classical values being destroyed. This regret, however, is not tragic (as it was with Lorentz, who wished he had died before classical physics was wrecked), but rather lyrical and resigned. This optimism also includes the satisfaction engendered by the indestructibility of the classical values.

To show with greater clarity and concreteness the connection between modern optimism and the dynamics of science, the inevitable transformation of the most fundamental conceptions, we will deal in greater detail with the history of the most general and stable absolutes of classical science. These embrace absolute space existing independently of the bodies immersed in it and having invariable geometrical properties defying further analysis; absolute time flowing irrespective of physical processes; the invariable "bricks of the Universe" devoid of inner structure; and finally, universal laws of being applicable to all domains, to all series of phenomena.

All these absolutes acted as limits of knowledge. Absolute space and absolute time explain the flow of physical processes, though they themselves do not depend on anything, thus breaking off the essentially continuous causal analysis.

The derivation of extension from extensionless substance (Leibniz), or the Kantian conception of space and time as a priori subjective forms of knowledge, also breaks the chain of physical causes proper; in fact, here again a metaphysical wall is built, blocking scientific, causal, physical knowledge.

The next absolute is the "bricks of the Universe". Classical atomistics either ascribed absolute homogeneity to atoms or, operating with complex and qualitatively different particles, saw the ideal of scientific explanation in homogeneous, simple, genuinely elementary atoms. Classical atomistics never lost hope of attaining this ideal, which was, in classical times, an optimistic prospect predicting absolute, complete knowledge.

It is natural that the unextended, point-like atom of Boscovich or Wolff should have no inner structure. Neither was it to be found in the extended, ``final'', non-quality, homogeneous atom which completed, or sought to complete, the scientific, causal, spatial and temporal analysis. Such an analysis, passing from the larger links in the hierarchy of discrete parts of matter to smaller ones, referred, in explaining the properties of a system, to its inner structure, to the existence of smaller systems; the qualitative properties of the particle were explained by the location and motion of sub-particles. The last link of the hierarchy could be a non-quality particle consisting of homogeneous substance. The spatial and temporal causal analysis thus came to an end.

The eternal laws of being were the end, the limit, the exhaustion of such an analysis. Cognition, in principle, included in its schemes an infinite number of phenomena subject to these laws, but the laws themselves remained independent of this content. And when attempts were made to derive them from something else, to make them mediate and secondary with regard to more general and fundamental principles, classical thought went into metaphysical regions, deserting the field of causal investigation. Whatever the explanation of eternal laws within the framework of classical philosophy, whether it appealed to convention or to providence, scientific analysis ended right here. When classical science contrasted itself to metaphysics, the eternal laws were viewed as a result of induction, as a conclusion from observations; such, at least, was Newton's position. But this means precisely that the question ``Why?'' is divorced from the eternal laws.

The absolutes of classical science (the invariance of absolute space and absolute time, the homogeneity and invariance of atoms, the homogeneity, universality and invariance of the eternal laws of existence that seemed embodied in the Newtonian axioms of motion) were the basis of ``Victorian'' static optimism, the peaceful and joyful belief that if science has not yet come into a haven of knowledge accomplished in its essence, it will soon do so. This "feeling of haven" is just as characteristic of ``Victorian'' optimism as the sensation of leaving the haven for the boundless open sea of science is characteristic of contemporary epistemological optimism.

With regard to optimism permeated with the "feeling of haven", the term ``Victorian'' is more appropriate than the term ``classical''. Classical science, and the peculiarities of 19th century culture and social psychology that depended on its content and style, were not at all unitary. The ``inquisitive'' and dynamic tradition projected into the future, the essential component of the development of science, was never interrupted. But this trend, to draw a remote analogy, did not occupy the front benches in the parliament of science; it sat rather in opposition, which manifested itself in the assertion of contradictions, antinomies, and the logical incompleteness of contemporary science. Such assertions were, in particular, the basis of the relativist critique of Newtonian mechanics in the 19th century. They were sometimes called ``catastrophes'' (e.g., the "ultra-violet catastrophe" from which physics was saved by the idea of the quantum emission of radiation). The 20th century put an end to the so-called ``Victorian'' illusions of constant prosperity prevalent during the long years of Queen Victoria's reign. ``Victorian'' optimism in science was based not so much on the absence as on the ignorance of contradictions and inconsistencies in the classical absolutes.

The basic fact is that in the classical period the critique of absolutes involved a very high level of abstraction. When one meditated on the infinity of the Universe (such meditations, as Riemann rightly noted, had but little bearing on the main problems of experimental investigation of nature), one ran into paradoxes of infinite forces acting on each body in the gravitational field of the infinite Universe, of the night sky filled with an infinite multitude of stars.

The genesis of non-classical science is based on a different situation: paradoxes arose out of experiments; science could not develop, and later could not find application, without the explicit formulation of paradoxes, without what Einstein called flight from paradoxes, i.e., without transferring the aura of paradoxicalness from experimental results to the general axioms of science, or re-evaluating these axioms. The infinite variability of the axioms, the fundamental infinity of scientific progress, became a characteristic feature of each major local episode in the development of science. We shall return again to the analogy with Riemann's problem of the infinity or finitude of space, which is solved by local definitions and local experiments.

What is the destiny of classical absolutes in modern science?

For absolute space and absolute time, the most dramatic moment was the identification of spatial and temporal curvature with the gravitational field, that is, the emergence of the general theory of relativity. The geometric properties of space, the axioms of world geometry, no longer give reason for attributing to them an a priori or conventional character. They acquire physical meaning and become empirically knowable, subject to experimental testing, i.e., "external confirmation".

The absolute elementariness of the "bricks of the Universe" also became problematic and relative. Nowadays it figures in physics as just another name for transition to a higher level of complexity. The elementary particles of modern physics possess both wave and corpuscular properties, which gives rise to a number of conflicts extremely paradoxical from the classical point of view, primarily the indeterminacy of either the position or the impulse of the particle, depending on the conditions, the macrocosm and the choice of macroscopic instrument. The particle's complexity is not reduced to classical complexity, i.e., to the presence of an inner structure, the presence of sub-particles. Its complexity, on the contrary, is representative of the complexity of the Universe, its structure being one of external interactions, extending to the still more paradoxical interaction of the particle with itself. Above, mention has already been made of conceptions characteristic of modern physics, explaining the particle's existence in terms of its interaction with other particles of the Universe. The elementary particle is not the concluding link of the causal analysis but the transition to a more complex one, dealing with paradoxical statements and transforming them into natural conclusions from a paradoxical general conception of the world. Thus was realised Lenin's idea of the inexhaustibility of the electron, expressed at the beginning of the century.* Likewise was realised his idea of the transition from the electromagnetic picture of the world to an infinitely more complex picture.**

Non-classical science relativised and restricted yet another reason for the ``Victorian'' "feeling of haven": the idea of invariable and eternal laws. This feeling was nurtured by the idea of a homogeneous world, of identical laws governing the microcosm and the macrocosm. The infinite Universe that once frightened Kepler (he wrote about his fright to Galileo) and Pascal (this will be discussed further on) becomes less frightening when man learns that the infinite spaces and the ultra-microscopic regions are governed by the same customary laws and forms of being.

There has always existed a notion of the infinite divisibility of matter and of infinite transition to structures of an ever greater scope. But this infinite hierarchy consists of homogeneous structures. "These electrons may be worlds where there are the same five continents...", wrote Valery Bryusov. On the other hand, our galaxy may seem microscopic to a researcher in a megaworld, watching us through a microscope whose size is measured in billions upon billions of light years. This rather dull picture conveys the idea of homogeneous laws of being. The existence of such laws is the natural limit of cognition. Non-classical science does not know such laws. It deals with the specific laws of the macrocosm and the microcosm. Their specificity was known already in the 19th century. The laws of thermodynamics are macroscopic laws; they do not apply in the microcosm, and the mechanics of molecules is ignored in describing macroscopic fluxes of heat. In non-classical science it is all the more complicated; transitions to macroscopic laws are based not on merely ignoring statistics, but on rather paradoxical statements. The cardinal problem of our time is that of the relation between the specific laws of the ultra-microscopic world (they may be laws of the annihilation and creation of elementary particles) and the laws of the macrocosm and the megaworld (the specific character of the latter may be illustrated by the mechanism of gravitational collapse).

* V. I. Lenin, Collected Works, Vol. 14, p. 262. ** Ibid., p. 280.

There exists yet another form of the limitation of cognition and of epistemological pessimism. It does not hide under a static optimistic illusion of complete knowledge, but involves the boundary between subjective perceptions and the objective world. Does man's cognition break through this boundary, does it attain objective truth? What is postulated here is not complete, exhaustive knowledge of substance, nor even an incipient authentic knowledge of substance. This is the most difficult form of agnosticism to overcome, the most fundamental and the most tormenting for human thought. It is directed against the basic presumption of knowledge: against the credible existence of a knowable world. It tortured Descartes until the Ulm inspiration of 1619 opened before him the way to knowledge that he thought credible. What can guarantee the reliability of sensory impressions, the credibility of that which registers in the consciousness through the sense organs? Is that not a dream? Does that which we see and feel by touch exist? Are not our ideas of the objective causes of our sensations perhaps illusory?

Hardly can anyone be found who would really doubt the existence of the external world. Solipsism is only an extreme (indeed the only consistent) form of agnosticism.


It is not the existence of the external world that is usually questioned, but the possibility of credible proof of its existence. Descartes found such proof, leaving, as it seemed to him, the ground of sensualism to take, as the criterion of credible proof, doubt itself, consciousness itself (Cogito, ergo sum), and searching in the external world for that which equals this cogito in clarity. He found it in pure extension, liberating bodies from all predicates but extension. Descartes pictured a world which he recognised as credible.

And yet cognition searches for credibility of sensual impressions, for in their absence it cannot guarantee reliable conclusions. And here human thought faces the shadow of a more general and terrible "Ignorabimus" than that threatened by Du Bois-Reymond.

But this shadow, too, is only a phantom. The history of philosophy, science and technology provides decisive arguments in favour of the credibility of objective existence. It is just these arguments that are contained in the history, and in the conclusions drawn from the genesis and transformation, of the concepts of the world and the methods of changing it.

Agnosticism in relation to the existence of the external world results from the concept of cognition as a series of self-generating images and statements of fact, consciousness playing a passive role. And it is precisely for this reason that consciousness cannot penetrate the impermeable membrane of sensations, and can say nothing of whether or not anything exists on the other side of this membrane. But the actual facts are quite different. Consciousness functions actively on the basis of impressions; it arrives at conclusions not contained in these impressions. Consciousness is then objectified: Man acts on Nature and tests his conclusions that are not directly rooted in empirical facts. He does this in experiments and in industry. The coincidence of observable phenomena with theoretical calculations imparts a credible character to these phenomena. Man himself gets involved in this causal chain; he no longer doubts the subordination of the phenomena to certain causes. He discovers these causes, as the observable result was predetermined by the arrangement of material processes realised in an experiment or in industry.

The epistemological effect of science is demonstrated even more distinctly when experimental results do not coincide with those derived from theory, i.e., in the case of paradoxical results, so characteristic of the genesis and development of non-classical science. This situation abrogates both epistemological empiricism (together with the sensual agnosticism deriving therefrom) and epistemological apriorism. The paradoxical result was by no means invented by us. It destroys that which was invented. Neither is it imposed on us empirically, since the genesis and development of non-classical theory consists in constructing new conceptions that possess "inner perfection" and logical harmony, connections with a wide range of observations and the ratio of the world as a whole. These conceptions likewise stand the test of "external confirmation" permitting unequivocal derivation of the observable paradoxical results which thus lose their paradoxicalness.

Neither empiricism nor apriorism can avoid epistemological pessimism, an epistemological deadlock. The reality of that which is achieved by science is proved by an unequivocal relation of observable phenomena to the ratio of the world. This relation can be neither the result of mistaken feelings nor a subjective construction of reason itself.

That is why non-classical science, with its paradoxical experimental results that, within the life-span of a single generation, rapidly attain "inner perfection" in a new theory, does the same as classical science, but with such rapidity that the result is not only the conviction of the credibility of existence but also an optimistic sensation of continuously "departing from the haven". Such a psychological attitude is characteristic of sharp and decisive turns in science, when science actually leaves the haven. For non-classical science, such turns are the essence of its everyday life, of its constant and continuous development, constantly changing its fundamental principles or preparing their change.


OPTIMISM, BEING, MOTION

Thus, the paradoxical results of science work against epistemological pessimism, against unknowable substance, unknowable being. This epistemological function of the non-classical experiment is realised through the "flight from miracle", the explanation of the paradox, its inclusion in a unitary picture of the world, in the ratio of the Universe. Knowledge of the world is cognition of the universal ratio, but not an abstract ratio, as is, for instance, the purely geometrical scheme of world lines; it is a concrete ratio where individual elements retain their individuality, thus demonstrating their non-reducibility to the geometrical scheme, their physical existence. For this reason epistemological optimism is successively linked with the evolution of rationalism, which consistently encompasses heterogeneous, contradictory, developing existence, and with its sensual accompaniment, the "external confirmation" of rationalist schemes.

The basis of epistemological optimism is the real, objective ratio of the world. Epistemological optimism mainly consists in asserting this ratio. An optimistic evaluation of knowledge is primarily an assertion of reality, of the physical reality of its object and, consequently, of its results. "Everything that is, is optimistic". In this phrase optimism characterises not the evaluation but its object. This is not a stylistic error, but the concept of optimism extended to an objective situation which guarantees, or at least promises with a certain degree of probability, the realisation of the optimistic forecast.

``Everything that is, is optimistic" is not a phrase of the Hegelian type, though this statement of fact is really close to its prototype. Indeed, "everything that is, is reasonable" includes in the definition of reality a certain structure apprehended by reason, a certain ratio; and this assertion reveals in each element of reality something that connects it with the ordered whole. As a matter of fact, our contemporary associates Hegel's formula with the most rationalist (or even rationalistic, but with a strong sensual, empirical and even experimental accompaniment) images. Our contemporary will probably remember the quantum-mechanical conception of a particle containing a field, of a particle which participates in the interaction of other particles, or possibly even in the universal interaction of all elements of the Universe. The existence of a particle, as has just been noted, includes its relation to the ordered whole possessing an objective ratio, which is not chaos but cosmos. The interaction of all cosmic elements includes universal existence in the particle's existence. Modern physics does not know particles isolated in space.

Nor does it know of particles isolated in time, particles without a past or a future. If we were to find a rational physical meaning in the transformation of ratio into a guarantee of being, long known to philosophy, it would appear that not only the instantaneous structure of the world is rational, but also its evolution. Each element of being must possess something to connect it with the spatial and temporal whole; each element of being is not only a real result of the past, but a realistic forecast for the future.

Here we will consider in greater detail the modern physical equivalents of abstract conflicts of being. We will deal with the totality of world points, i.e., the spatial and temporal positions of the particle, the assertion of its localisation at a given moment of time at a given point of space. What is the nature of the particle's motion that includes these four-dimensional world points; what is the four-dimensional world line of the particle? Is it a physical or a purely geometrical image? Is a world line physically meaningful?

Mere transition from one spatial localisation to another, from given spatial coordinates to others, alongside the transition from one value of the temporal coordinate to another, does not yet constitute physical existence proper. Suppose that the world line is endowed with this kind of existence, that it cannot be reduced to a succession of four-dimensional localisations, that it is filled with events not reducible to such a succession. Suppose, for instance, that transmutational acts occur in world points, with particles of one kind being transformed into particles of another kind, and with the new particle again reverting to the original type. It is irrelevant here whether there are physical grounds for such a hypothesis or not; we are dealing here with an artificially constructed illustration of a certain actual conflict.

Now a question arises: Is physical existence inherent in transmutational acts which are to guarantee the existence of the world line? Transmutation of the particle, transformation of one type of particle into another consists in the transition to a different mass, a different charge, to predicates which signify a definite world line, a certain behaviour of the particle in definite fields. Transmutation loses its physical meaning unless an eventual world line is forecast. The local event and the macroscopic whole, in this case, the world line embodied in it, are inseparably connected components of existence.

Let us compare this conclusion, a natural and almost self-evident generalisation of modern non-classical science, with its historical prototypes. In fact, the differential calculus, or rather the differential conception of motion, in essence already contained the idea of an integral macroscopic process. We include in the ``now'', in the existence of a certain particle at the given moment, its speed, the limiting relation of its eventual motion with respect to time. To do this, the predicted shift and the time necessary for it are contracted to a point and an instant. Their limiting relation is the velocity of the particle. But that is not all. We forecast the acceleration of the particle and, again by contracting the forthcoming motion to a point, we can estimate the energy, mass and charge of the particle.
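The limiting relations just described can be written out in the standard notation of the calculus (a sketch added for clarity, not the author's own formulas):

```latex
v = \lim_{\Delta t \to 0} \frac{\Delta x}{\Delta t} = \frac{dx}{dt},
\qquad
a = \lim_{\Delta t \to 0} \frac{\Delta v}{\Delta t} = \frac{d^{2} x}{dt^{2}} .
```

The ``now'' of the particle thus already contains, as a limit, its forthcoming displacement and the change of that displacement.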

The inclusion of the spatial and temporal, macroscopic ratio in the local, individual being, the inclusion of the ``prognosis'' (not a subjective forecast, but an objective, macroscopic process) is even more apparent in the integral principles of mechanics. The principle of least action requires that the actual world line of the particle be characterised by the minimal value of a certain integral. Thus that which is happening now and here, at the given moment and in the given place, depends on the kind of world line that connects each given world point not only with other spatial points, but with other moments, with other world points, with the past and the future. The local event depends on the integral result, on the character of the entire evolution moving from the past into the future. Actual motion is distinguished by the maximum or minimum value of the integral characterising the past as well as the future. Thus, the ``prognosis'' separates, as it were, the event on the real world line from events that may be imagined on other world lines, but do not possess existence in actual fact. The ``prognosis'' obviously becomes a property of existence and, unless we want to ascribe to Nature a conscious goal, we must interpret ``prognosis'' in an objective sense and allow the existence in Nature of a certain situation for which a prognosis without the quotes is feasible.
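The principle of least action invoked here can be stated explicitly in its standard textbook form (a sketch, not the author's notation): the actual world line is the one for which the action integral is stationary,

```latex
S = \int_{t_1}^{t_2} L\bigl(q, \dot{q}, t\bigr)\, dt,
\qquad
\delta S = 0
\;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0,
```

where L is the Lagrangian of the system. The condition that the variation of S vanish singles out the real motion from all imaginable neighbouring world lines, which is exactly the separating role the text assigns to the ``prognosis''.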

However, we must go further. The contemporary conception of a local event, and of the further behaviour of the particle conditioned by local events, is far from the classical ideal of unequivocal and exact dependence. In general, by determining the local event, the spatial and temporal coordinates of the particle, that is, by localising the particle in space and time, we can only determine the probability of its velocity, i.e., the ``prognostic'' component of the local existence. This probability can be greater or smaller, "more optimistic" or "less optimistic", from the point of view of the realisation of the macroscopic law that defines the given world line, the given ``forecast'' and the forthcoming motion of the particle.

Is it legitimate to transfer purely subjective conceptions into a region that knows neither prognostication nor evaluation, whether optimistic or pessimistic, of the forthcoming realisation of a goal set in advance? Of course, it would never occur to anybody to associate this transference with teleological conceptions introducing conscious goals into Nature. We are dealing here with purely objective events and processes. But is it not an arbitrary and purely verbal operation to attribute ``prognostic'' and ``optimistic'' predicates, using quotes, to objective events, or even to consider quasi-prognostic, quasi-expedient and quasi-optimistic evaluations, not restricted by quotes? Is there a real connection, not arbitrarily inferred but objective, immanent and fundamental, between that which happens in Nature minus Man, and the forecasts and optimistic and pessimistic evaluations (without either the "quasi" prefix or quotes) that occur in Man's consciousness and nowhere else?

The legitimate transference and re-examination of concepts, their objectification, their application to objective events, follows from the real process of the objectification of Man's subjective conceptions, from the realisation of his goals, from the existence of Man's purposive activity that is based on objective processes, on suitably arranged objective delimitations of processes in Nature, on their selection in conformity with Man's goals, on the choice of such initial situations whose combination predetermines the realisation of Man's aims. The seemingly simple statement, "There are objective processes in Nature, a certain combination of which may result in the attainment of a conscious goal", comprises a multitude of extremely diverse natural-scientific statements, no longer presenting a picture of "Nature minus Man", but a picture of Nature as a totality of objects of Man's activity.

However, let us return to "Nature minus Man" and in the light of the above-said let us again pose the question: Is there in Nature, where there is no Man, not only ``more'' and ``less'' but also ``better'' and ``worse''?

The concept of optimism is not applicable to purely spatial, three-dimensional objects. Optimistic evaluations are applicable to processes, to something possessing a future, a better future. Consequently, the search for something ``better'' in Nature, the search for objective equivalents of optimism should first of all be directed to the most general description of temporal changes, the law of conservation of energy.

Of course, this law characterises the behaviour of the Universe and its elements with respect to time. But do we really deal here with changes with respect to time? Has the law of conservation of energy anything to do with the dynamic optimism which does not limit itself to a satisfied assertion of the immutability of existence?

And second: at first glance, the law of conservation of energy would seem to have nothing to do with ``better'' or ``worse'', but only with ``more'' and ``less''. The law of conservation establishes a quantitative commensurability of different forms of energy. Thus energy becomes homogeneous, as it were; it can be greater or smaller, but, in transitions from one form into another, both increase and decrease, ``more'' and ``less'', are excluded. But this is a comparatively simple version of the law of conservation of energy; Engels called it the law of conservation in the negative: the law negates quantitative changes during qualitative transitions.* The qualitative and positive content of the law of conservation of energy consists in the proposition that energy, though it cannot be quantitatively created or destroyed, passes into qualitatively different forms.** Consequently it is heterogeneous, qualitatively non-identical. This is a very transparent illustration of the fundamental relation of identity, homogeneity, qualitative commensurability, on the one hand, and non-identity, heterogeneity, qualitative difference, on the other. Either pole without the other loses its meaning, purely quantitative conservation being a meaningless concept without a qualitative distinction, without two or more non-coincident qualitative forms between which a quantitative identity is established.

That is by no means a purely logical construction. Physicists have long since been speaking about the future disappearance of distinctions between forms of energy, the transformation of all energy into thermal energy, the thermal death of the Universe. In a Universe that has gone through a similar evolution, the conservation of energy assumes no longer a negative, but a perfectly trivial zero meaning: energy does not increase or decrease in the transition from one form into another, because such transitions do not exist. The law of conservation of energy loses its physical meaning.

The prospect of thermal death is one of the destruction of cosmos, of its transformation into chaos. Can this prospect be called pessimistic? Intuitively an affirmative answer suggests itself: a prognosis forecasting the destruction of the world seems pessimistic, even if thermal death occurs far beyond the life-span of mankind, and even if thermal death does not eliminate local oases, including our galaxy. We shall try to examine the sources of such an intuitive application of the concepts of pessimism and optimism.

* F. Engels, Anti-Dühring, Moscow, 1975, p. 18. ** Ibid.

Sadi Carnot's conception that heat can pass from a hot body to a cold one, but cannot pass in the reverse direction, became the basis for the idea of the irreversible evolution of the world. In any process of heat transfer the difference in temperature decreases. If the difference in temperature can be increased in a given local system, this can be done only at the expense of a compensating and excessive levelling out in the environment or in other systems, in the world in general. Thus, a levelling out of temperature threatens the world. The transition of heat into mechanical energy, however, is impossible unless there exist temperature gradients. When mechanical energy passes into heat (and this happens more or less constantly), the possibility of a reverse transition in the total balance of nature becomes ever smaller, since temperature gradients are consistently levelled out. What the future holds in store for the Universe is the transformation of all energy into heat, the levelling out of heat distribution, the disappearance of temperature gradients, the disappearance of energy transformations, the conservation only of molecular motion, equally disorderly and chaotic everywhere, without macroscopic gradients or macroscopic structure.... That is what the thermal death discussed above means.
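The one-way character of Carnot's heat transfer admits a small numerical illustration (the figures below are illustrative, not taken from the text): when a quantity of heat Q flows from a hot reservoir at temperature T_hot to a cold one at T_cold, the total entropy change Q/T_cold - Q/T_hot is positive, so every such transfer levels out a temperature gradient irreversibly.

```python
# Entropy change when heat q flows irreversibly from a hot body to a
# cold one (both assumed large enough that their temperatures stay
# effectively constant during the transfer).

def entropy_change(q, t_hot, t_cold):
    """Total entropy change, in J/K, for heat q (J) passing
    from temperature t_hot to temperature t_cold (both in K)."""
    # gain of the cold body minus loss of the hot body
    return q / t_cold - q / t_hot

delta_s = entropy_change(q=1000.0, t_hot=400.0, t_cold=300.0)
print(round(delta_s, 3))  # positive: the gradient has been levelled out
```

The result is positive whenever t_hot exceeds t_cold, and zero only when the temperatures are equal, which is precisely the state of "thermal death" in which no further transformations are possible.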

Philosophy, in particular the philosophy of Engels, and 19th century statistical physics advanced rather convincing arguments against thermal death. Modern science (the theory of relativity and relativistic cosmology and, to no lesser extent, quantum mechanics) forces us to interpret the thermodynamics of the Universe from new standpoints that presumably eliminate the inevitability of thermal death, although they still do not offer any concrete and unequivocal conception of a cosmic mechanism, as opposed to thermal death, for the formation of temperature gradients.

The measure of the disorder of molecular motions, the measure of the levelling out of heat, of the obliteration of temperature gradients, is called entropy. The same magnitude, but with a minus sign, i.e., the measure of macroscopic ordering, the measure of non-uniformity in the distribution of heat, the measure of the differences in temperature---temperature gradients---is called negentropy (negative entropy).

Nowadays the concepts of entropy and negentropy have assumed an extremely generalised character. In information theory and in the modern theory of probability, entropy is the measure of indeterminacy, of the proximity of the probability values of different events. If all events have the same probability, the forecast is least determinate and entropy is at its maximum. If the probability of one event is equal to unity, and that of the others is zero, the indeterminacy disappears, turning into determinacy with minimal entropy and maximal negentropy. An experiment removes the existing indeterminacy measured by entropy. This disappearance of indeterminacy is information; it is measured in terms of the entropy that has disappeared.
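The relations described here can be put in quantitative form. The following sketch (an illustration of ours, not part of the author's text) computes the Shannon entropy of a probability distribution and shows that the indeterminacy of a forecast is maximal when all events are equally probable and vanishes when one event is certain:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the measure of the indeterminacy of a forecast."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# Four equally probable events: the forecast is least determinate,
# entropy is at its maximum (log2 of 4 = 2 bits).
uniform = [0.25, 0.25, 0.25, 0.25]

# One event certain, the others impossible: indeterminacy disappears,
# entropy falls to its minimum of zero.
certain = [1.0, 0.0, 0.0, 0.0]

print(entropy(uniform))  # 2.0
print(entropy(certain))  # 0.0

# The information yielded by an experiment that removes the indeterminacy
# is measured by the entropy that has disappeared: here, 2 bits.
information = entropy(uniform) - entropy(certain)
```

The experiment that singles out one outcome converts the first distribution into the second; the information it yields is exactly the entropy destroyed.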

Thus, entropy is a measure of macroscopic equilibrium, homogeneity, structurelessness, a measure of chaos in micro-processes, their liberation from macroscopic ordering. Negentropy is a measure of such ordering, a quantitative measure of the subordination of micro-events to the macroscopic, and ultimately, to the cosmic ratio.

Let us view Nature in its negentropy aspect and examine the local processes of the increase of negentropy and decrease of entropy at the expense of the growth of the latter in the environment, in the inclusive system. It is such local processes that transform chaos into cosmos. The very concept of the cosmos thereby assumes a dynamic meaning; the cosmos not only exists at the given moment, it is developing, it is being created, and the created cosmos (that which Spinoza called "natura naturata") turns out to be a creating cosmos (Spinoza's "natura naturans"). The local gradient that has appeared, the negentropy, the ordering of existence cause, or at any rate may cause, other processes bringing about another gradient, another ordering; from this viewpoint the world moves not towards derationalisation, not towards the reign of entropy, but towards structure, ordering, ratio. And, apparently, this process of rationalisation, of ordering, of increased structuralness of the world, is not limited by the fatal climax of thermal death.

* F. Engels, Dialectics of Nature, Moscow, 1974, pp. 35-39, 284-85.

Why does the picture of the formation of local negentropy cause an optimistic reaction in man?

It is precisely because negentropic processes form the basis of Man's purposive activity, and here, in the analysis of such processes, the objective forecast becomes the source of a subjective perception of the future, of a subjective evaluation of the future---its optimistic evaluation. This relation of quasi-expedient processes to expedient processes, of ``optimism'' in Nature to optimism without quotation marks, keeps the similarity between the two from being arbitrary. The similarity is based on a real and quite fundamental relation between elemental (but regulating, negentropic, structure-forming) processes in Nature and purposive processes in technology, in Man's labour, in his activity.

It may seem that only the growth of negentropy, the changes and increasing complexity of the macroscopic ordering of existence, conform to optimistic forecasts, while the laws of conservation and symmetry can only serve as the basis for a static optimism: "so it was and so it will be". But the dynamic and static forms of optimism, divorced from each other, lose their meaning. If Man were not sure that the processes of change are subordinated to a certain permanent law expressed in the invariability or invariance of some correlation, in the conservation of a certain value, in symmetry, in identity, then optimism would cease to bear the character of scientific prognosis. Conservation loses its qualitative, positive, physical meaning without the change of certain correlations, but change, too, in the absence of laws of conservation, invariance and symmetry, loses in its turn its regular character. In the absence of macroscopic processes, i.e., identical, uniform, ordered movements of micro-particles, the very differences in the behaviour of particles lose their meaning. The concept of acceleration loses its meaning in the absence of the concept of inertia; the study and reproduction of motion would be impossible if identities of velocities were not stated. The conception of invariance, conservation, ordering and determinateness of existence is an essential component of optimism, without which optimism would be impossible.

The unity of identity and non-identity is the basis of optimism. A picture of complete disorder, complete entropy, complete absence of macroscopic processes, in other words, a picture of chaos, may induce a pessimistic evaluation and a pessimistic mood. Yet a similar evaluation and a similar mood may be induced by a picture of complete identity of individual acts, i.e., a picture of the world reduced to the macroscopic aspect alone, void of micro-structure, a view of nature as something reminiscent of a battle scene in Leo Tolstoy's War and Peace ("Die erste Kolonne marschiert. . .").

The concepts of entropy and negentropy allow us to demonstrate quite clearly the relation, indeed the unity, of the two above-mentioned pessimistic conceptions. Maximum entropy, the complete absence of macroscopic gradients, excludes microscopic acts, rendering them meaningless. But at the same time maximum negentropy with minimum entropy excludes the possibility of predicting actual processes; a temperature gradient without entropic molecular motion is by no means a physical reality, but a fiction which cannot arouse any optimistic reaction.

An optimistic reaction is aroused by a macroscopic gradient which determines the regular transformation of heat; this transformation is real, and it includes the entropic uncertainty of the motion of individual molecules. Such a gradient, like any manifestation of the growing real negentropy of the world which does not exclude its opposite pole, means (on the strength of its regularity!) a certain identity, invariance, conservation.

INITIAL CONDITIONS

The basis for optimism is a regular, determined evolution of existence. But, on the other hand, optimism is founded on the conviction that this regular evolution coincides with Man's goals, which determine his conscious activity. Thus, the philosophy of optimism must proceed from a certain synthesis of (1) knowledge, which reveals the determined evolution of existence, and (2) Man's activity. This fundamental conflict runs through the entire history of philosophy. In the present book, dealing with epistemological optimism, the optimism of science, this conflict assumes the form of a question: does science possess a goal?

The conception of a goal marks the transition from forecasting to planning, from stating objective processes to arranging them in such a way as to bring about the realisation of an ideal image formed beforehand. Is science an expedient activity? Does the goal, i.e., the situation formulated beforehand in consciousness, determine its ways, its structure and the evolution of its content?

The basic and firm concept of science as the search for the unknown seems to contradict this. Under the influence of immanent stimuli science seeks the unknown, proceeding from contradictions; it seeks not to stray from the path of purely causal analysis, ignoring the pragmatic idola referred to by Francis Bacon.

And nevertheless science is purposeful activity.

This is to be inferred not only and even not so much from the applied function of science, as from the epistemological considerations themselves, from the role of experiment in science, from the general epistemological premise: adequate knowledge of Nature, knowledge of substance and the objective substratum of phenomena, conviction in the existence and knowability of such a substratum stem from the impact transforming the objective world.

Let us consider more closely the impact the goals of science exert on the evolution of its content. Evolution should be emphasised here: the content itself does not depend on the character, direction or driving forces of the evolution, though they bring influence to bear upon the effectiveness of science (we shall touch upon this point further on). As for optimism---the correlation between the goals of Man's activity and the forecast of objective processes---the degree of this correlation, the value which could be termed a measure of optimism, depends on the goals of science, on the objective regularities revealed by science, on the content of science.

Which side, element or part of the objective processes proves to be the most plastic? Where does Man become involved in the play of elemental forces and, to a certain measure, subordinate them to himself?

Here again mention must be made of the concept of negentropy, ordering of the world, the concept of the world as a whole, to emphasise the heterogeneity of the world, the autonomy of selected series of phenomena and at the same time the dependence of smaller, included, systems on bigger inclusive systems, and vice versa.

Nature confronts Man as a multitude of such inclusive and included systems. In Nature events occur and processes take place independently of Man, as they occurred and proceeded long before Man appeared on the Earth. When studying Nature, Man finds in it transitions from one system to another. The zones of transition, the zones of difference and connection between systems, prove to be the most plastic; it is here that the expedient processes of the transformation of Nature, of production, civilisation and labour, primarily begin. The inclusive system transmits to the included system a certain stock of negentropy, ensuring the possibility of further expending this stock with an increase in entropy. These are the zones where Man's reason exerts the most tangible influence on the structure of being. V. I. Vernadsky once introduced the concept of the Earth's noosphere, a sphere which, unlike the lithosphere, the hydrosphere and the atmosphere, bears distinct marks of reason. Now it is time to generalise this concept. Man's reason and labour have founded noozones in the deepest entrails of the Earth, in near-Earth space and far from our planet, in the atomic nucleus, and in the living cell. The concept of the ``noozone'' will be central in this book, and the analysis of noozones its main content. In the noozones of the radiation spectrum, of the hierarchy of discrete microcosmic elements, of the macrocosmos, of ontogeny and phylogeny, in the noozones of the world, the correlation of Man's goals and objective processes finds its realisation and serves as a basis and measure of optimism. But, not to anticipate, we shall give here a general outline of those features of the objective processes through which Man's purposive influence on them is realised. These are the initial conditions of Nature's processes, which themselves are subject to differential laws. In the mechanics of the macroworld, under essential conditions and restrictions, one can unambiguously determine the motion of a particle on the basis of the differential laws of motion. The equations of analytical mechanics, however, are not sufficient per se for an unequivocal picture of motion even with specified force fields. The orbits of the planets are determined not only by a combination of inertia and gravitation; a planet's motion is also determined by the initial conditions, the initial positions and impulses, "the initial cast" which Newton attributed to God, and Kant to a previous cosmological evolution. These cosmic conditions do not come under expedient influence, but on the Earth initial conditions are purposefully modified to an ever greater extent; this is one of the most important definitions of civilisation. Nobody can abrogate the law forcing water molecules to move, driven by gravity, from a higher to a lower point. But the difference of potentials---the negentropy forcing the water molecules to move uniformly---is, as a general rule, modified in the construction of a dam.
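The distinction between a differential law and its initial conditions can be illustrated with a minimal numerical sketch (ours, not the author's; the numbers are arbitrary). One and the same equation of motion, integrated step by step, yields entirely different trajectories depending on the initial position and velocity supplied to it:

```python
def trajectory(x0, v0, accel=-9.8, dt=0.01, steps=100):
    """Integrate the fixed differential law dv/dt = accel, dx/dt = v
    (Euler's method). Only the initial conditions x0, v0 vary."""
    x, v = x0, v0
    path = [x]
    for _ in range(steps):
        v += accel * dt  # the law itself: uniform acceleration
        x += v * dt
        path.append(x)
    return path

# The same law, two different "initial casts":
falling = trajectory(x0=100.0, v0=0.0)  # released from rest at a height
thrown = trajectory(x0=0.0, v0=25.0)    # launched upward from the ground

# After one second the two motions are in quite different states,
# though every step obeyed the same equation.
print(falling[-1], thrown[-1])
```

The law admits no exceptions, yet the outcome is governed by the data fed into it at the start; a dam or a boiler, in the author's terms, is an apparatus for choosing the x0 and v0 of a natural process.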

Let us take another example involving entropy and negentropy in their initial, strictly thermodynamic meaning. The motion of molecules is chaotic; the chaos of these motions increases; heat passes from a hot body to a cold one; the structural organisation of heat distribution diminishes. None of this can be altered. But the initial negentropy, the initial conditions, the initial temperature gradients can be modified. By burning coal under a boiler and forcing the steam to pass into the cylinder and then into the condenser, Man brings the poles of the natural gradient closer together in time and space, just as in the case of a dam the high-water and low-water marks are brought closer.

To cite yet another example: the ontogeny of living beings is encoded in each embryo and is determined by initial negentropy, the initial structure of the embryo. Yet the destinies of organisms also depend on chaotic, generally speaking, external influences that are somewhat ordered on the whole and result in the regular progress of phylogeny. Man's purposeful activity is aimed at all the initial conditions: the hereditary code (artificial mutations are at an initial stage; mutations are often spontaneous and contradict Man's aims), the environment (for instance, agronomy) and the mechanism of environmental influences (artificial selection).

This somewhat simplified scheme illustrates the connection between the initial conditions of negentropy in Nature and Man's purposive intervention in it. It is these initial conditions that Man's purposive activity is aimed at, presenting, as they do, the most plastic component of world harmony, of the macroscopic structure of the world. The transformation of this component, the transformation and increase of world negentropy, is the physical determination of all the concrete goals of labour and all the concrete indices of progress. Naturally, Man establishes a close relationship between the physical content of his activity and the objective processes of structuring in the world and of the increase in negentropy, processes which are the immediate objects of this activity, and includes these processes in his optimistic evaluation.

But the structure of the world, its ratio, is a component of existence. It is not worth repeating the arguments of modern science in favour of the dependence of the existence of particles on their interaction, or the philosophical propositions concerning the embodiment of the whole in individual existence. Neither is it worth recalling once again the illusoriness of a whole in which individual existence is ignored, the purely geometric character of unfilled world lines. Let us rather recall other concepts, removed from physics (but not too far removed).

19th century literature features an immortal image of illusory existence that has lost one of its components. It is St. Petersburg, a phantom city flashing through Dostoyevsky's novels, where "everybody is apart", where no idea, activity or organisation unites people. This illusory, granulated existence is complemented by an illusory ``uneventful'' Universe, of which the devil speaks to Ivan Karamazov, and by an illusory universal harmony ignoring individual existence, individual destinies. Ivan Karamazov speaks to Alyosha about such a ``Euclidean'' and ``non-Euclidean'' harmony. Dostoyevsky's pessimism is directed here at a world in which the link is broken between the whole and the individual, between the macroscopic ratio and its microscopic content. This, however, is not a moral but rather an ontological evaluation: the world without such a relationship possesses only illusory being; it is a phantom. Pessimism is inseparable from the assertion of non-being. But the relation to the whole is manifested in the inclusion in individual existence of ``prognostic'' predicates, in the inclusion in the world point of the eventual world line of the particle, of its velocity, acceleration, energy, mass and charge, in its relation to world negentropy, in that which serves as a basis for optimism. Dostoyevsky's pessimistic phantoms are negative statements of the connection between existence and optimism. Everything that is, is optimistic.

THE "IS" AND THE "OUGHT TO BE"

The philosophy of optimism transcends the purely passive perception of the world. Passive knowledge does not guarantee the authenticity of its results, the reality of its advance towards truth; only when blended with action does knowledge acquire confidence in the existence and unlimited knowability of the world---that which has the right to be termed epistemological optimism.

The transition from knowledge to action was always a stumbling block for classical philosophy and for its predecessors. Ancient philosophy, at least those of its representatives who fully retained the antique harmony of perception, thinking and will, did not concern itself with this kind of transition; but the transition became a fundamental problem in the Middle Ages and remained one in the philosophy of the Renaissance and of modern times. Since the appearance of Marx's theses on Feuerbach, when philosophy set itself the task not only of interpreting the world but of transforming it, the relation of knowledge to action, of science to morality, has undergone a radical change. In non-classical science the new relationship between these polarised realms became apparent both in terms of the ``is'' as an object of knowledge and in terms of the "ought to be" as the content of norms, goals and ideals. Accordingly, optimism---a correlation of the ``is'' and the "ought to be"---has assumed a new meaning and a new importance.

In his article "La Morale et la Science", Henri Poincaré says that morality and science, the "ought to be" and the ``is'', cannot be united through the logical deduction of one from the other, since science is concerned with the Indicative Mood and morality with the Imperative Mood.* Indeed, declarative statements of the type "such an object exists", "such a process occurs", "such an event happened", as well as others of the more complicated type "the cause of the event was..." (all these being in the Indicative Mood), cannot be obtained from statements in the Imperative Mood of the type "one should act in such a way...", and vice versa. This logical independence of scientific and moral statements seems absolute, but is it actually so? In 1951 Albert Einstein wrote to Maurice Solovine:

"That which we call science has exclusively one aim, to establish that which is. The determination of that which ought to be is something quite independent of it and cannot be attained in a methodical way. Science can only formulate propositions about morality in a logical connection and furnish means for the realisation of moral aims, but the determination of the aims themselves is beyond its domain."**

In essence, even here the independence of the ``is'' and the "ought to be", of the Indicative and the Imperative moods, of science and morality, does not appear to be absolute. The "ought to be" is determined independently only to a certain degree. Only the aim itself, in the Imperative Mood, cannot be derived from the Indicative Mood, from the statement of the ``is''. The ways of realising the "ought to be" and the logical structure of its definitions do depend on science. In his talk with the Irish writer Arthur Murphy, Einstein said that science possesses moral foundations that are connected not with the content of scientific propositions, but with their dynamics, their change, their evolution. Moral self-consciousness stimulates scientific progress. "It is here that the moral side of our nature comes in---that mysterious inner consecration which Spinoza so often emphasised under the name of amor intellectualis." "You see, then," Einstein goes on, "that I think you are right in speaking of the moral foundations of science. But you cannot turn it around and speak of the scientific foundations of morality."*

* H. Poincaré, Dernières Pensées, Paris, 1913, p. 225.
** A. Einstein, Lettres à Maurice Solovine, Paris, 1956, p. 105.

It is to be noted, however, that in spite of his sceptical attitude towards the possibility of a scientific explanation of morality, Einstein nevertheless came to the conclusion that science, in its turn, affects the morals of mankind. "This popular interest in scientific theory brings into play," he said in the same talk, "higher spiritual faculties, and anything that does so must be of high importance in the moral betterment of humanity."**

If science is understood as the content of certain statements, as something stable and divorced from the process of its evolution, transformation and change, and if morality is understood as the content of certain norms abstracted from their genesis and realisation, then science and morality are indeed independent of each other. However, as soon as we destroy the immobility of the statements, on the one hand, and of the norms, on the other, as soon as science and morality emerge in their concrete varying essence, their independence of one another becomes arbitrary and relative.

In the concluding chapter of her book Histoire du principe de relativité, M.-A. Tonnelat says that morality, like philosophy, like art, cannot add anything to the inner harmony of a scientific theory. They cannot make it more perfect, just as the most sophisticated analysis cannot make a Mozart symphony more perfect.*** This holds for the content of a scientific theory. As far as changes in content are concerned, science draws inspiration from art, morality and philosophy.**** Their isolation gives way to a dynamic relationship. The closer the link between the ``perfection'' of the positive content of a theory and its openness, the more relative is the isolation of science from the other genres of Man's spiritual life.

Hence the changes in the relationship between science and morality during the transition from classical to non-classical science. In classical science the positive and allegedly terminative content of scientific propositions could to a considerable extent be divorced from its negative accompaniment, from contradictions, from the ``inquisitive'' line of scientific progress. At present positive content is practically inseparable from dynamics; the understanding of science cannot be separated from its reason. Likewise the character of morality changes as the emphasis shifts from norms to ways of realisation; not only the norms of the good but their development, their implementation, the transformation of the "ought to be" into the ``is'', become essential in the self-consciousness of mankind. The optimism that grows out of contemporary science is inseparable from moral self-consciousness. We wish to recall the criticism of rigid canons of morality in dialectical philosophy, in art, and in culture. We shall restrict ourselves to a few fragmentary reminiscences.

A rather accomplished form of stable moral canons is the classical categorical imperative: your acts must be examples of universal norms; every act may become a universal norm. The inclusion of the individual act in the general norm does not change the latter. Such stable morality is historically linked with a stable culture, with stable or slowly changing conditions and norms of social life, with a stationary or quasi-stationary economy. In the Middle Ages morality was embodied in traditional norms; the good was that which was sanctified by tradition; moral norms regulated the economy and guaranteed, to a certain degree, its traditional character: typical of medieval concepts were the "fair price", "fair profit" and "fair interest". Optimistic prediction consisted in forecasting customary and therefore ``fair'' norms and conditions, which are compatible only with a conservative optimism: "so it was, and so it will be". Sometimes a quasi-dynamic conception was put forward: very high standards, unrealisable in the absolute sense, indicated an endless path to moral perfection. But this does not imply any real moral ideal. At times


* A. Einstein, Forum, No. 83, 1930, p. 373.
** Ibid.

*** M.-A. Tonnelat, Histoire du principe de relativité, Paris, p. 487.
**** Ibid., pp. 488-89.


the traditional conceptions of the good painted the moral world in a single colour, without hues, in the image of a uniform or homogeneous physical world without non-existence, as it appeared in Cartesian physics. The good seemed to be a uniformity of existence imbued with "continuous hosannah". We have already mentioned this term; it appears in Dostoyevsky's The Brothers Karamazov, spoken by the devil who brings the thoughts of his interlocutor to their logical conclusion, thoughts which seem unbearable to Ivan Karamazov and to Dostoyevsky himself, whose interpreter, in the final analysis, is the "certain kind of Russian gentleman, with not much grey in his hair", Ivan's infernal guest. The devil says to Ivan: "Without criticism, it would be nothing but one 'hosannah'. But nothing but 'hosannah' is not enough for life. The 'hosannah' must be tried in the crucible of doubt. . . ."*

The earthbound and emphatically common devil of Dostoyevsky says something extremely fundamental, something very similar to the remark of his much more imposing and philosophically educated colleague from Faust. Mephistopheles identifies himself to Faust as "part of that Power which would the Evil ever do, and ever does the Good". "Would the Evil do" means: he destroys the ``hosannah''. "Does the Good" means: he transforms the good from a stationary, rigid canon into something historically realisable and developing.

Like Karamazov's devil, Mephistopheles expresses certain thoughts and personifies a certain aspect of the mentality of his permanent fellow-traveller and interlocutor and, in the final analysis, of his creator as well. Faust departs from science because the elusiveness of thought that identifies existence and makes it uniform does not satisfy him. In the lines of Faust there can be detected Goethe's anti-Newtonian, sensualistic and emotional tendency. Science as the sum of ultimate and eternal results, as the kingdom of pure thought unencumbered by contradictions, impressions and emotions, is the ``hosannah'' to knowledge. In the same manner, the philosophy of the identified, uniform good is the ``hosannah'' to morality. Well, Faust departs from reason, from science, from the good, to conclude a bargain with the spirit of evil. But the reason, science and good that he rejects are the homogeneous and immobile Wagnerian ideals. They seem to Faust lifeless and elusive. Faust wishes for sin and evil, and he pursues them not so much under the guidance of the spirit of evil as with the latter's technical support. This dialogue, however, is the unending argument between good and evil. It will stop when Faust demands of the moment: "Ah, linger on, thou art so fair!" That would be absolute victory, the identity of each successive moment with the foregoing one, the cessation of existence, death. That would be the absolute victory of the static good. But Faust overcomes death in work, in creation, i.e., in a process that cannot be terminated. The finale of Faust is the apotheosis of the good that does not exclude evil but battles against it, the apotheosis of the dynamic moral ideal.

In his analysis of Feuerbach's philosophy, Engels contrasts the conception of the historical evolution of good and evil and their struggle, the conception of the reality of evil, with the conception of Man's natural morality. Man is not only good, says Engels, he is evil as well. "But it does not occur to Feuerbach to investigate the historical role of moral evil," writes Engels.* In this respect Feuerbach falls back behind Hegel: "He appears just as shallow, in comparison with Hegel, in his treatment of the antithesis of good and evil."**

Indeed, the problem of good and evil, as posed by Hegel, is a way out of the static moral ideal, a transition to a dynamic moral ideal, to the struggle between good and evil, and thereby to a human existence which distinguishes Man from Nature, contrasts him to Nature, and leads him to the purposive arrangement of Nature's elements. Hegel opposed Rousseau's ideas, the conception that Man is good by nature and must therefore remain true to Nature. "Man's coming out of his natural existence is the differentiation of Man as a self-conscious creature from the outer world."*

* F. M. Dostoyevsky, Collected Works, in ten volumes, Vol. 10, Moscow, 1958, pp. 169-70 (in Russian).

* Karl Marx and Frederick Engels, Selected Works, in three volumes, Vol. 3, Moscow, 1972, p. 357.
** Ibid.

But this is not sufficient. When Man is contrasted to Nature, and only to Nature, he is isolated. Man, in Hegel's opinion, thus becomes "the only man" and in this sense does not rise above Nature with its struggle of everybody against everybody. Man's egotistical activity is curbed by the framework of the law. "Man remains a slave of the law until he gives up his natural position."** Man ceases to be a slave to the law within the framework of social solidarity, transforming the very laws, transforming his social existence. Herein, as Marx showed, lies the dynamic moral ideal.

A man who has rebelled against static existence and static morality faces two ways: one leads to a dynamic moral ideal, the other is Stirner's way to "the Ego and His Own". The second is an illusory way of liberation, one leading to slavery. The Nietzschean rebellion against moral canons paves the way to the vindication of tyranny.

Nietzsche spoke against static canons, saying that the good and the just are a danger for the future. "They say and feel in their heart: 'We already know what is kindness and justice, we already know them; woe to them who are searching here.' Whatever harm the evil ones might cause, the harm of the good ones is the most harmful harm. O my brethren! Somebody once looked into their hearts and said: 'They are Pharisees.' "***

Nietzsche's rejection of the good is an absolutised rejection of static moral canons, directed against any moral canons, including dynamic ones; it is a vindication of the disintegration of the social fabric. This vindication leads to that which is called ``anomy'' or "deviant behaviour", to that which embraces all forms of aggressive individualism and of the isolation of Man from the social structure, beginning with drug addiction (including intoxication with hysteria in speech-making) and ending with crimes (among them state-organised crimes). This vindication covers a very wide range, from theoreticians who never intend to put their ideas into practice to Smerdyakov's words: "Everything is permitted." It must be said that Smerdyakov's grinning at Ivan Karamazov does not threaten the theoretician who has merely turned away from traditional moral canons; it threatens the theoretician who has turned away from any canons, including dynamic ones.

What practical activity conforms to dynamic moral principles?

It is necessary here to return to Nature as a basis of individual isolation and evil, i.e., to Hegel's conception. Man remains a slave of Nature until he purposively arranges its processes. But where does Nature provide a possibility for such purposive arrangement?

Nature, as Hegel understood it, did not provide such possibilities. Nature is a stable other-being of the developing spirit. It is governed by laws which predetermine individual processes in an absolute manner and are independent of application. But Nature, as presented by 19th century classical science (and to a greater extent by 20th century non-classical science), opens up a possibility of purposive interference in its processes and, which is very important, interference in increasingly fundamental processes.

The Universe as a totality of purely mechanical objects and processes is subject to Laplacean determinism, the equations of motion predetermining the position of each particle at each given moment. But, as has been stated in the previous essay, the equations leave to Man the initial conditions, which he arranges in his interests. Man builds dams and constructs water wheels to create initial conditions for the motion of water. Manipulating the initial conditions, he arrives at an expedient combination of determinate processes. In the age of steam engines his expedient activity determined not only mechanical processes, but also transitions of heat into mechanical work. Modern technology deals with a purposive rearrangement of nuclear processes; micro-processes become the beginning of macroscopic chain reactions, a model of the effect of individual events on the large-scale systems embracing them. This model corresponds to Man's position in modern production, when radical transformations of the technological process within the shop, enterprise, industry and economy as a whole are increasingly becoming the content of labour.

* Georg Wilhelm Friedrich Hegel, Werke, Vol. 6, Berlin, 1840, p. 58.
** Ibid., p. 59.
*** F. Nietzsche, Werke, Bd. VI, Stuttgart, 1921, pp. 309-10.
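The division of roles between the law of motion and the initial conditions can be written out in a line of elementary mechanics (an illustrative formula of ours, not the author's):

```latex
\ddot{x} = -g
\quad\Longrightarrow\quad
x(t) = x_0 + v_0 t - \tfrac{1}{2} g t^2 .
```

The term $-\tfrac{1}{2}gt^2$ is dictated by the law and is the same for every falling body; the constants $x_0$ and $v_0$ are left open by the equation, and it is precisely these that Man chooses when he builds a dam or aims a projectile.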

The behaviour of an individual in feudal society was determined by tradition founded on religion, on the immutable civitas Dei, "God's city". Then scholasticism sought to provide traditions and dogmas, and moral canons in particular, with the logical foundation essential for the theocratic authority of the church. The Renaissance liberated Man from traditional scholastic moral canons, but Man became a victim of the secular tyranny of absolute monarchies and oligarchic republics. Later the authoritative regulation of Man's behaviour was replaced by the elemental power of statistical laws ignoring individual interests and destinies. And finally, in our times Man's destiny is freeing itself to an ever greater degree from the elemental laws that ignore it, and dynamically developing moral principles are becoming the canons of individual behaviour.

Such ``moralisation'' of Man's behaviour does not necessarily mean that it is traditional, immobile, invariable. Traditionalism was the result of old moralistic requirements that claimed an a priori character. Now ``moralisation'' means something quite different, namely, the freedom of Man's behaviour. Freedom in Spinoza's sense: behaviour follows not from external influence, but from Man's nature, his inner essence, i.e., from something inherent in Man that distinguishes him from Nature and singles him out in Nature. This is a sensation of the relationship between the individual, the society and the world. But it is this sensation, nurtured by the study of the world, that stimulates the development of such study. At present it is driving the study of the world along the non-classical path. Non-classical science, as has repeatedly been stated in this book, considers each element of the Universe as a reflection of the whole, including that which Geoffrey Chew termed the crisis of elementariness, and sees in an elementary particle, in any ``elementary'' object, an infinitely complex focus of an infinitely complex space.

But the ``moralising'' function of modern science is inseparable from the goals Man sets himself in his activity of transforming the world and from the conception of the realisation of such goals.

Thus, alongside the epistemological component the concept of optimism embraces that which has always been related to the realm of will, to the domain of goals and their realisation. This realm comprises moral principles that have already been discussed in this essay, and all Man's activity in realising his aims---industry, work, civilisation as a whole, which will be treated later on.

There is, however, another side to Man's spiritual life that cannot be reduced to intellect or will. That is the world of emotions. Optimism, as interpreted in this book, is not reduced to emotions. This concept has here an ontological, epistemological, moral and, as we shall see further, an economic meaning. But it is natural that any definition of optimism should retain emotional content. The evolution of optimism includes, to a considerable extent, the changing attitude of the intellect to the will and to the world of feelings, to Man's emotional life.

How does the above-mentioned attitude change under the influence of modern science, what new aspects does it add to the problem of Logos and Eros?

The answer to this question illustrates the function of non-classical science that has been repeatedly pointed out: it renders the historical evolution of classical science and classical rationalism more obvious. In this case non-classical retrospection affords a clearer view of the connection between optimism and rationalism.

Rationalism is the philosophy of optimism to the degree it includes the sensualist component, to the degree it combines the macroscopic order with the autonomy of micro-objects, logic with its emotional accompaniment, to the degree it is a philosophy of being. As was pointed out in the first essay of this book, being is characterised by objective ordering, negentropy, objective ratio. This thesis is totally unrelated to Aristotelian entelechy, the idea of an intelligent demiurge of the world or of the "world soul". There simply exist in the world real macroscopic systems which make the world comprehensible to reason, but they existed before and independently of it. It is this comprehensibility of the world that Einstein considered its main riddle (the most incomprehensible thing about the world is the fact that it is comprehensible), and indeed it reflects the infinitely complex hierarchy of structures of the Universe.

Logic, reason---all that which is united by the conception of Logos---finds an objective basis in the knowledge and transformation of Nature. Man's moral ideals likewise find a basis. And what about his emotions, his feelings, all that which Herbert Marcuse refers to as Eros? What is the relation of this Eros to Spinoza's amor intellectualis, to the emotional uplift that accompanies the knowledge of truth, the comprehension of the world?

Amor intellectualis has assumed a very full-blooded and complex character in non-classical science. Now it is not reduced to a permanently elated, bright, but essentially one-coloured state of the intellect in search of a world substance; it includes a multi-coloured spectrum of bright and gloomy moods: satisfaction, disappointment, new inspiration, aesthetic impressions, sorrow, joy, sympathy, doubt, newly acquired confidence, hopes, their ruin, the appearance of new hopes, all this vivid emotional life coupled with an uninterrupted connection with society and Nature. Science has hardly ever provided so few grounds as now for confronting Eros with Logos, and never has this confrontation been so superficial, so far from reality and nevertheless (and maybe for this very reason) so frequent.

The evolution of science has always been connected with emotions, with an emotional uplift; this made it a human pursuit, one that could not develop without such a connection. Lenin said that "there has never been, nor can there be, any human search for truth without human emotions".*

Lenin emphasised: without emotion there can be no search for truth. Unlike truth itself, its content, the results of the search, emotions cannot be separated from the search for truth, from the development and modification of truth, from the transition to a more exact, concrete and fundamental truth, from the transformation of the picture of the world.

If absolute truth is an infinite series of increasingly precise reflections of existence, an infinite series of relative truths, a stream of ever new increments to a credible knowledge of the world, then every result of scientific research, every accomplished, positive stage in the approach of the picture of the world to its inexhaustible original, possesses, in addition to its positive content, a dynamic value: unresolved questions, new stimuli for further development, for further search, for further specification and concretisation. In non-classical science the dynamic value of its results is becoming obvious and immediately tangible. Consequently, scientific results likewise possess, if not emotional criteria of truth, then obvious emotional stimuli, an obvious emotional accompaniment.

The genuine and very intensive emotional content of modern knowledge is quite essential for resolving, not only logically but also psychologically, one of the eternal problems of philosophy, that of the death and immortality of Man. We shall presently deal with this problem. Immortality will be viewed as a local characteristic of the existence and consciousness of Man, just as infinite space in cosmology serves as a characterisation of local processes, of the real physical situation "here and now", as a characterisation of the "here and now" content, of the intensity of physical processes occurring here and now.

OPTIMISM AND IMMORTALITY

Optimism is confronted with the pessimistic shadows of death. There is the death of the world: Nature, bereft of one of the components of its existence, its ratio, its macroscopic structure, or of the other pole, the individual existence of autonomous elements, becomes a phantom. Then there is the death of knowledge, the exhaustibility of knowledge, and, finally, the death of Man.

Does Epicurus' formula drive away this last pessimistic shadow? Let us recall it. In his letter to Menoeceus Epicurus says that Man never meets death: "So death, the most terrifying of ills, is nothing to us, since so long as we exist, death is not with us; but when death comes, then we do not exist."*

* V. I. Lenin, Collected Works, Vol. 20, p. 260.

Why has this logically irreproachable formula not saved mankind from the fear of death?

We wish to call attention to the negative and static character of the formula. Everything good and evil, according to Epicurus, is contained in sensation, and death is the absence of sensations. Far from being optimistic, this formula is essentially only un-pessimistic. The pessimistic perception of life, the perception of its perishability and the fear of non-existence are not confronted here by an active and positive optimistic perception that could not only logically discredit the fear of death but also remove it from consciousness. Epicurus' philosophy as a whole is negative and static. Happiness lies in the absence of futile aspirations. Such harmony of life corresponds to the static ideal of cosmic harmony. An optimistic perception that could free Man from the fear of non-existence is the perception of the fullness of existence. Thus we go back to the initial definition of optimism.

In his philosophy of nature Epicurus seeks to affirm existence by filling Nature with spontaneous deviations of atoms. But these deviations remain purely local events, never changing the macroscopic world. Spontaneous atomic deviations retain freedom in Nature and are contrasted to fatalism, to "the power of the physicists", the autocracy of macroscopic laws. But this is a local and negative freedom: an individual deviation does not become the starting point of a chain reaction; it does not confront the power of macroscopic laws and, far from changing them, only restricts this power.

In Epicurus' philosophy Man is liberated from fear of future non-existence. In his local existence, he need not think of that which seems to threaten this local existence. Death does not in fact threaten Man; he lives now, within the restricted time limits of his existence. Non-existence does not frighten him because it is beyond the limits of local individual existence: where there is death, we are not there, we do not exist there. The given ``we'' and ``exist'' do not extend to the infinite future. The solitude in infinite space and time that filled the soul of Pascal with such chilling horror seemed a refuge to the ancient philosopher, who was loath to think about the infinite time that had flowed before him or the infinite time to flow after him. For him they are equivalent. Epicurus wonders: why does Man fear the future infinite existence and remain indifferent to the past infinite existence? Epicurus rejects both as alien to Man. Man is confined to the ``here'' of the limits of the Earth, and to the ``now'' that encompasses his short life. But this is a logical refutation of the fear of death. Apparently even in ancient times it was not psychologically active, it was not realised in Man's psychology. The contemporaries of Epicurus must have felt not so much liberation from the fear of death as the transformation of this fear into the quiet, reconciled sadness that permeates the Odyssey.

For Pascal the conception of infinite space and infinite time, the imagined crossing of the boundaries of local existence, transformed life into an instant; infinity transforms finite existence into zero, into nothing. But in the final analysis it is the exteriorisation of Man's life that destroys the fear of death in modern philosophy. In the optimistic conceptions of the Renaissance and the Baroque, infinity does not lie beyond the limits of individual, local and finite existence. 17th century science considers a point as the beginning of an eventual, fundamentally infinite line, and an instant as the beginning of an infinite process. For modern Man future and past are not equivalent, time is not symmetrical; the future is an arena where personality is exteriorised, an arena filled with the results of Man's activity. This is an active perception; it is local existence filled with eventual existence. Epicurus does not recognise a future without some sensual content: death is the absence of perceptions, and therefore it does not exist; death is alien to Man, for he does not meet it. For modern optimistic philosophy the future complements the present and becomes a component of Man's existence to such a degree that Man cannot exclude himself from the future. If the content of human existence is knowledge and activity directed at the future, at infinity, then that content is not discontinued by death. A new conception of immortality arises: Man perceives infinite activity and cognition as the immortality of his personality. The static optimism of Epicurus is based on the negation of immortality, life being restricted to the limits of individual life, which includes neither prognostication nor retrospection: individual life is closed and isolated from the world that is infinite in space and in time. Modern dynamic optimism (it is not only logically but psychologically opposed to the fear of death) does not isolate individual life but fills it with retrospection and prognostication, extends the creative component of individual life and, which is most important, unites prognosis, the unrestricted, infinite future, with the aim of individual existence.

* Cyril Bailey, Epicurus, The Extant Remains, Oxford, 1926, p. 85.

This tendency, as has already been said, is apparent in the philosophy of the Renaissance. For Giordano Bruno the individual was a reflection of the infinite whole. In the 17th century Spinoza added a very important new element to this optimistic conception. He says: "And therefore he [the free man---Ed.] thinks of nothing less than of death, but his wisdom is a meditation of life."* Freedom becomes a necessary component of existence, and it is freedom that liberates Man from the fear of death. For Spinoza the infinite world is not a threat to the mortal: the content of mortal life reflects an infinite process.

The behaviour of Man, like that of every particle, is not the compulsory result of external influences, but the revealing of an internal, immanent essence, and that is what Spinoza understands by freedom. The immanent essence itself reflects the harmony of the whole, the cosmic harmony of the infinite world. Spinoza does not recognise the reverse process---the action of the finite, the individual, the limited, the mortal on the surrounding infinite world. The idea not only of infinite knowledge of the world but also of infinite transformation of the world goes beyond the framework of 17th century philosophy and, further, beyond the framework of the classical philosophy of the 18th and 19th centuries.

Now we will discuss the role of non-classical philosophy in developing a new, active revolutionary and positive conception of freedom.

Spinoza's "free man" is not opposed to Nature. Spinoza's world---a causal, determined world---includes Man, and Man is free only because his fully determined behaviour follows from an immanent essence reflecting the causal world as a whole. Non-classical science also pictures a causal world, but this world is governed by the specific laws of the microcosm, which result in the violation of macroscopic laws. Unlike Epicurus' atomistics, however, the microscopic process brings about macroscopic consequences. The image of a macroscopic chain reaction triggered off by a microscopic process is just as legitimate an analogy for the modern conception of the individual's freedom as the spontaneous atomic deviations that Epicurus and Lucretius viewed as an analogy and physical guarantee of Man's freedom from fatalism, from "the power of the physicists". Here too, as in Epicurus, it is not just an analogy: modern non-classical science, with its practically continuous radical transformation of the foundations of Man's activity, creates the possibility of individual existence affecting macroscopic existence as a whole. That is the transition to the most important basis of Man's optimistic philosophy, to his expedient activity, his purposive transformation of the world.

The transformation of the world transforms Man's consciousness, psychologically eliminating, not just logically discrediting, the fear of death alongside the fear of infinity, of the infinite spatial and temporal vacuum surrounding the "here and now". Let us return to that which was said at the end of the previous essay---to the local conception of immortality and the local removal of the fear of death. Physics, astrophysics and cosmology today make it possible not only to concretise Riemann's idea of infinity as a local metrical definition, but to impart to it a greater ability to serve as an analogy, i.e., to be applied in other fields, to explain correlations that are different in their nature. As an analogy applied to the problem of immortality, Riemann's conception can explain something essential.

* B. Spinoza, Ethics, London, 1922, p. 187.

First of all it should be pointed out that Riemann contrasts infinity as a local metric property to unrestrictedness as a property of extension. Unrestrictedness is postulated in all cases, whereas infinity is inherent only in space that has a constant zero or negative curvature. If we ascribe to space the same positive curvature at each point, then space is finite: by unlimitedly prolonging the shortest lines in such a space we obtain a sphere.
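Riemann's contrast can be stated in modern notation (a sketch of ours, added for illustration; the formulas are standard differential geometry, not the author's):

```latex
\text{constant curvature } K > 0:\quad
R = \frac{1}{\sqrt{K}},\qquad
V = 2\pi^{2}R^{3} < \infty ;
\qquad
K \le 0:\quad V = \infty .
```

For $K>0$ every geodesic can still be prolonged without limit (it closes on itself with length $2\pi R$), so the space is unbounded yet of finite total extent; unrestrictedness and infinity thus come apart, which is the point of Riemann's distinction.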

An essential premise of Riemann's conception is this: infinity and finitude become local definitions when we unlimitedly prolong the local geometric correlations, ascribing unrestrictedness to them. Unrestrictedness itself assumes the nature of a local property. We ascribe the ability of unlimited expansion to that which is centred here and now.

If this scheme is to be used as an analogy explaining the problem of death and immortality, then immortality corresponds to unlimitedness. The definition of life, the definition of its given ``here-now'', embraces a fundamental negation of absolute limits, of the limits of knowledge and the limits of the transformation of the world. Thus epistemological optimism (it is not only epistemological, including as it does the prospect of unlimited transformation of the world) becomes the basis of the actual elimination of the darkest and seemingly most fundamental and inevitable spectres of non-existence.

When each local element, each "here and now" of human existence, is complemented by merging with something broader and fundamentally unlimited, then Feuerbach's words are realised: "Each second you drink the cup of immortality which replenishes itself like the goblet of Oberon." This somewhat romantic image, the unexpected approximation of Oberon's goblet to the geometric conception in the lines from Riemann's famous speech, is characteristic of modern non-classical thought. Its generalisations, including the most fundamental ones, persist, as it were, throughout the evolution of science and of civilisation as a whole, which is inseparable from the former. Each creative act involves, as its necessary content, the overstepping of local limits, the transition to a fundamentally unlimited whole, to "Oberon's goblet". Part Two of this book will deal with some specific links of modern scientific and technological progress characterised by similar exteriorisation, extension, an appeal to the most fundamental principles and a changing of them. Such an appeal and such a transition correspond to the inner perfection of scientific conceptions. Modern scientific and technological progress is distinguished by the fact that this criterion comes to be used in applied research, too. Not only episodic, paradoxical results of experiments separated from each other by big intervals, but also the practically continuous series of applied results are connected with the fundamental tendencies of the changing picture of the world. This reinforces the dynamics of modern life and fills the consciousness of people with aspirations broadening the "here and now". Modern optimism is contained in such aspirations, which bear an active character and correspond to an extension and generalisation of that which was termed noozones.

Fear of death is not a perception of forthcoming non-existence. There can be no such perception, and in this respect Epicurus' formula is irreproachable. Fear of death is a perception of mortality, a perception of the transient, insignificant character of the "here and now" as compared with the surrounding vacuum. Fear of death assumes this character when it develops into a pessimistic evaluation of existence, a pessimistic philosophy, as it did, for instance, in Pascal. It loses this form when it is eliminated by Man's active intervention in universal processes, in Nature and in history.

Classical science, as was already pointed out at the beginning of this essay, coped with the annihilation of the "here and now" by examining it in motion. The present is a zero-duration limit between the no-longer existent past and the not-yet existent future. But the differential calculus and the differential conception of motion ascribed speed and acceleration to the ``now'', the object of zero duration. In the 17th century differential predicates replaced "the world soul" (in part they specified and transformed it and deprived it of its mystical form), which in the previous century Bruno had embodied in finite objects, saving them from annihilation in the face of infinity. Hence the connection of 17th century dynamics with optimistic tendencies in Galileo's world-outlook.
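The differential predicates in question can be written out explicitly (a standard textbook formula, added here for illustration; it is not in the original):

```latex
v(t) = \lim_{\Delta t \to 0} \frac{x(t+\Delta t) - x(t)}{\Delta t},
\qquad
a(t) = \frac{dv}{dt} .
```

The instant $t$ has zero duration, yet the predicates $v(t)$ and $a(t)$ are attributed to it and encode the immediately adjacent course of the motion; in this sense the "now" of zero duration is saved from annihilation.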

Now all this is being repeated on a different level, in terms of the active transformation of the world, in terms of motion encoded as a goal in the "here and now". The realisation of the goal, the exteriorisation of the "here and now", endows it with existence, liberating consciousness from the pessimistic, properly hopeless horror of inevitable non-being. But, unlike Epicurus' formula, which encloses Man's interests within his individual existence (rather, unlike the effect Epicurus ascribed to his formula), exteriorisation does not free consciousness from sadness. Consciousness transforming the world is filled with the general, with that which is immortal, with that which is projected into the future. But individual life is not reduced to the realisation of the general; it is unique. The sadness caused by its cessation contains the realisation of the uniqueness and sovereignty of individual existence.

ject. Naturally, it is easy to deduce therefrom Hegel's freedom as cognised necessity.

Experimental knowledge of the world unites criteria of freedom and necessity. The experiment proceeds from the necessity of the predicted result, from prognosis, from visualising of the inevitable effect of changes that are consciously introduced into the natural processes. But these changes are free in the sense that they are to liberate the essence from the inessential, to reveal the progress of phenomena expressing the essence, to penetrate the latter, i.e., to demonstrate the freedom reigning in the Universe, freedom in the Spinozian sense, the freedom of revealing the essence.

That which in experiment features as forecast becomes a goal in production labour. If a given experiment repeats a similar one that was undertaken earlier and brought about a credible result, then it does not mean that the experiment resolves a knowable, or a purely knowable task; it means that the processes occurring in the same given conditions do not possess a knowable value, but an immediate one, and they are to be evaluated not as an example proving certain common regularities, but by their content, irrespective of their commonness. The value of the result will not increase, if a multitude of similar experiments are added to the experiment. If, on the other hand, a multitude of identical production acts are added to a production act, the value of the result will increase proportionally to the number of such acts. In the first case the result was information about a common regularity, whose value does not increase with repetition. In the second case repetition increases the sum of expediently arranged elements of Nature. In actual fact an absolutely exact repetition of an experiment is practically unattainable, and production acts do not lose a certain cognitive value.

Anyhow, production labour is distinct from experiment by a higher credibility of the result that is known beforehand. If optimism implies a correlation of prognosis and aim, and goal of labour is realised in the process of labour with high credibility, then it may be said that optimism is a highly credible realisation of labour; a characteristic definition of production labour.

LABOUR AND FREEDOM

Labour became an epistemological and general philosophical conception only in the works of Karl Marx, in the second half of the 19th century. The conception of freedom, on the other hand, already discussed in medieval philosophical literature, became a fundamental philosophic conception in the works of Spinoza. But the meaning of this conception changed after the concept of labour entered into the range of basic categories of the teaching about being and knowledge.

In Spinoza the problem of freedom is closely connected with that of essence. If the behaviour of a subject is determined by its essence (just as the nature of the geometric image determines its properties), and not by external impulses, such behaviour embodies the freedom of the sub-

62

PHILOSOPHY OF OPTIMISM

PART ONE. EPISTEMOLOGICAL OPTIMISM

(>•}

of a stable ideal structure in the spirit of the Aristotelian system of natural places also proved meaningless. Nobody expressed this sensation with such force as Pascal. We have repeatedly mentioned this pessimistic note and now we wish to quote it in the form it assumed in Pascal's Pensees.

``I know not," writes Pascal, "who put me into this world, nor what the world is, nor what I myself am; I am in terrible ignorance of everything; I do not know what my body is, nor my sense, nor my soul, not even that part of me which thinks what I say, which reflects on all and on itself, and knows itself no more than the rest."*

Pascal mourns the finiteness of human existence in time, inevitable death and the finiteness of human existence in infinite space and infinite time: "I see," he continues, "those frightful expanses of the Universe which surround me, and I find myself tied to one corner of this vast expanse, without knowing why I am put in this place rather than another, nor why the short time which is given to me to live, is assigned to me at this point rather than at another point of the whole eternity which was before me or which shall come after me. I see nothing but infinities on all sides which surround me, as an atom and as a shadow which endures only for an instant and returns no more. All I know is that I must soon die, but what I know least is this very death which I cannot escape."**

It was already mentioned in the previous essay that Pascal's is not so much fear of death as fear of the infinity of space and time, fear of the infinite Universe unconcerned with Man and his infinitely short life and infinitely small sensual experience. His is a feeling of being lost in infinity and of the insignificance of life in the face of infinity. The feeling derives not only from the infinity of time that runs on after Man's death, but also from the infinity of past time. 17th century pessimism feared both. This was, and we reiterate again, not even fear, but a gnawing feeling of the impossibility of comprehending in-

The concept of freedom changes accordingly. The initial meaning of this concept was ontological. Freedom is contrasted to necessity, it characterises essential necessity, the dependence of the subject's behaviour on his nature, on his immanent definitions. Then freedom becomes an epistemological concept as well, a cognised necessity, and finally it becomes active freedom, freedom of acting on the world, of purposive influence exerted on the progress of processes in Nature; this influence is real, it brings about results, previously presented as goals, with a high probability, and for all intents and purposes---with a high credibility.

Now we can outline the evolution of optimism in its dependence on the activity of reason that not only reveals the order of ratio in the Universe, negentropy, but also introduces them into nature.

The 16th-17th centuries saw the first transformation of the very essence of optimism as a conclusion drawn from the scientific conception of the world. In the Middle Ages, as has already been pointed out, optimism drew from science, as the main source, the idea of accomplished perfection of the world, of its static, accomplished, immobile ordering. The unshakeable harmony of the Universe, the unshakeable stability of social institutions and norms induced a sensation of reasonable individual existence. Unofficial ``carnival'' culture drew its optimism from the sensual cognisability of the world, from the variety of its multicoloured and unexpected details. Then there appeared a conception of the world without the Aristotelian static scheme of natural places. This world was infinite in Bruno, and Galileo shifted the emphasis to the infinite complexity of the problem, to the existence of infinitely small elements of the Universe. The transition of the Rennaissance to the Baroque culture was connected with the idea of infinity being instilled in Man's consciousness. It induced a pessimistic sensation of Man being lost in the infinite spaces of the Universe and of the insignificance of his life as compared to the infinite existence of Nature. The meaning of the two poles disappeared: human life in relation to the Universe proved to be an instantaneous, and therefore meaningless flash of consciousness. Infinite being deprived

* Blaise Pascal, Pensées, Paris, 1962, pp. 159-60. ** Ibid.


finity, even approaching it. The fundamental tendency, peculiar to the 16th and 17th centuries, of striving to extend rational thought to infinite nature underlies this pessimistic outlook. This is the tragedy of 17th-century rationalism, classical rationalism. The quattrocento saw in art, and in art alone, a means to overcome the solitude, the nothingness and the mortality of Man. Beauty unites Man with the infinite world, personifying infinite existence in the finite and limited. The cinquecento, in the person of Giordano Bruno, felt the heroico furore---an emotional and intellectual striving toward the infinite world, toward its rational and comprehensible essence. In the 17th century yet another pessimistic component was added to the feeling of solitude and death. The above-cited lines of Pascal express, above all, the tragic perception of the incomprehensibility of the infinite world to human reason. But this perception expresses not only a pessimistic judgement; it also expresses a striving toward the comprehension of the infinite world.

Already Galileo was thinking about the understanding of the infinite world, about the reflection of the infinite world in Man's finite reason. Galileo's theory of knowledge includes the concept of absolute cognition of the world, cognition of mathematical formulae reflecting in the infinitely small the laws, structure and ordering of the infinite world. Extensively Man cognises an infinitely small portion of the world, but intensively, as Galileo put it, his understanding equals the divine; in other words, Man's reason apprehends the infinite spaces of the Universe. To cite Galileo's well-known epistemological credo: "Extensively, that is, with regard to the multitude of intelligibles, which are infinite, the human understanding is as nothing even if it understands a thousand propositions; for a thousand in relation to infinity is zero. But taking Man's understanding intensively, in so far as this term denotes understanding some proposition perfectly, I say that the human intellect does understand some of them perfectly, and thus in these it has as much absolute certainty as Nature itself has. Of such are the mathematical sciences alone; that is geometry and arithmetic, in which the Divine intellect indeed knows infinitely more propositions, since it knows all. But with regard to those few which the human intellect does understand, I believe that its knowledge equals the Divine in objective certainty, for here it succeeds in understanding necessity, beyond which there can be no greater sureness."*

How do mathematical sciences overcome the limitations of Man's knowledge, achieving as they do supreme authenticity in the knowledge of Nature?

Galileo's conception brings scientific thinking to a new conception of the connection between the finite and the infinite. Differential calculus and the differential conception of motion consider the finite, limited, individual, particular as something potentially possessing infinite being. The ratio of the infinitesimal increment of the path to the infinitesimal increment of time is the velocity of the particle, i.e., its further existence, contained as eventual at the given point. At the given moment the particle is subject to the differential law. The law characterising infinite existence is embodied in the particle, in its behaviour. In his limited life Man cognises infinity; personality oversteps its limits, being objectified. This process becomes the foundation of the new optimism. Man is inspired with an optimistic evaluation of himself and the Universe as a whole no longer by approximating the static ideal, but by dynamic influence on the world. For the time being we are not concerned with the transformation of the infinite world, but only with its cognition. The optimism of the 17th-18th centuries is an optimism of knowledge. Philosophers only interpret the world. In the finite, Man cognises the reflection of the infinite world.
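In modern notation (which, it should be said, postdates Galileo and belongs to the later calculus), the differential conception of velocity described here reads:

```latex
% Velocity as the limiting ratio of the infinitesimal increment
% of the path s to the infinitesimal increment of the time t:
v = \frac{ds}{dt} = \lim_{\Delta t \to 0} \frac{\Delta s}{\Delta t}
```

The differential law thus embodies, at each point and at each moment, the particle's further existence "as eventual": from the state at the moment t, the state at t + dt follows.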

Marx, in all his teaching---ontology, epistemology and sociology, as well as his economic conceptions---shows that cognition of the world is inseparable from its transformation. Therefore Man's purposive influence on Nature, i.e., labour, becomes the basis of the new, creative and dynamic optimism. Escape from the world into the realm of pure thinking no longer gives back to the world its ratio

* Galileo Galilei, Dialogue Concerning the Two Chief World Systems---Ptolemaic and Copernican, Berkeley and Los Angeles, 1962, p. 103.



and Man his optimism. Man rationalises the world, increasing its negentropy, creating noozones in the world, and herein lies his freedom.

Herein also lies the starting point of comprehending the objective world. In Pascal's above-cited pessimistic declaration, fear of non-being is intertwined with fear of infinite being, infinite uncognised being. "I am in terrible ignorance about everything...," says Pascal. It is a very eloquent fusion of conceptions: "terrible ignorance", ignorance as a source of fear of non-being and infinite being. Galileo did not know such fear because he saw intensive and absolute credibility of knowledge. The infinitely small credibly reflects the infinitely great. And this epistemological optimism disperses the pessimistic spectres that surrounded Pascal.

THE PROBLEM OF OLD AGE

The creation of noozones, the rise in negentropy and the cognition of the world's objective ratio resulting from this expedient activity increasingly become the content of labour. Labour is engaged in a fundamental transformation of natural processes and their expedient arrangement: from changes in the positions of physical objects to changes in velocities, forms of energy, frequencies of such changes, vibration frequencies of field variables, and on to changes of mass and even of rest mass. Accordingly, ever more fundamental and general principles are changing in close connection with this evolution of the picture of the world. We have already seen that labour and consciousness filled with such dynamic tasks discredit pessimistic shadows and drive them away from Man. We spoke of death and the fear of death. Now we propose to touch upon the fatal spectre of the lengthy degradation preceding death and leading to it, a degradation of the physical and spiritual forces of Man. What changes have been wrought here by modern non-classical science and the modern scientific and technological revolution, whose importance for an optimistic world outlook is the subject of this book?

Does contemporary science provide a foundation for gerontological optimism? It fills the goblet of Oberon with the nectar of immortality, but does this nectar dry up, or is the active function of Man preserved in old age? Does the traditional concept of old age change under the conditions created by modern science?

Somewhat anticipating matters, we shall deal with questions considered in the second and third parts of the book: first of all with problems of molecular biology, with the transformation of the character of labour due to cybernetics and the application of non-classical science as a whole and, finally, with ecology. It is to be assumed that all this will radically change the very content of old age as a physiological, economic and demographic category.

The concept of old age as a period of degradation and final cessation of Man's activity received an exceedingly acute expression, deeply personal and impersonal at the same time, in the well-known decision taken in 1911 by Paul and Laura Lafargue to depart from life when threatened with a decrease in their active participation in it. Even at that time it could not become a general principle, and it did not claim to: old age by itself never ceased actively to influence the world, because such influence is always based on a certain tradition, invariance, continuing tendency, and requires experience, a great store of accumulated impressions and knowledge, which are the prerogative of old age. But non-classical science promises to introduce radical changes into this problem.

These changes contradict, to a considerable extent, the conception on which I. I. Mechnikov based his Essays in Optimism. In this conception the fear of death is contrasted to the "instinct of death", a natural wish for peace after a long and active life. In Mechnikov's view the fear of death results from the fact that people in most cases do not live to develop such a wish; normal life, orthobiosis, must secure longevity and the "instinct of death".


But the "instinct of death" obviously involves a gradual decline of interest in life, of the temperament of intervention in life and of the potential for its transformation. The tendencies of modern civilisation permit us to predict not the asymptotic approach of such interest, temperament and potential to the zero line, but their increase, and the transformation of death not into a wished-for peace (the "instinct of death") but into something hostile to Man, into an adversary to be fought down by a society that considers the maximum prolongation of life an essential object of its labour and intellectual efforts.

How are these tendencies related to the non-classical character of modern science?

In modern gerontology the idea is sometimes expressed that old-age degradation is encoded in the molecular structure of living matter. This being the case, science apparently advances to a real possibility of affecting the hereditary code. It should be emphasised that such a possibility involves essentially non-classical processes. For instance, radiation genetics includes the use of radiation stimuli whose nature is revealed in the light of quantum physics. The demarcation of the classical and quantum components of molecular biology will be discussed in the essay on Molecular Biology. But it should be pointed out even now that there exists a characteristic relationship between dynamic, transformative, active optimism and non-classical conceptions.

This relationship is seen most distinctly in the elimination of a number of diseases that shorten Man's life and reduce his capacity for work. Even more clearly is it to be seen in analysing the general economic effect of science, in defining the scientific basis for the rise of the consumption level observed at present and projected for the end of this century. Less distinct is the connection of modern science with the rationalisation and amelioration of the ecological conditions. First to be solved now is the negative side of the problem, the need for protecting forests, water reservoirs and the air from pollution. But this is only a part, only the beginning of the radical rationalisation of Man's ecological environment as a condition

of the radical increase in the duration and fulness of life.

The two terms---duration and fulness, the extensive and intensive increase in human life---characterise the change in the character and content of labour. As has already been said (and as will be discussed in greater detail in the second and third parts of this book), the application of non-classical science signifies the transition of labour to new, increasingly dynamic, general and fundamental functions reconstructing production. Such an evolution of labour is inseparable from the evolution of science, in which increasingly fundamental principles become plastic, variable and dependent on experimental and industrial experience. This evolution is somewhat analogous to the turns in the development of science already dealt with: the changed conception of the ratio of the world, the perception as world harmony not of permanent positions (Aristotle), but of permanent velocities (Galileo's Dialogue), accelerations (Galileo's Discourses), mass (Newton's Principia), rest mass, etc. In the content of labour, an analogous transition to a new invariant, to a new ordering identity, is inseparable from the statement of the violation of the old invariant, the old identity. In modern non-classical science and in modern production embodying science such transition is becoming practically continuous, and this continuity is the source of their specific effect on the character and role of "old age" in modern civilisation.

The words "old age" have been put in quotes not because old age is disappearing---it is not---but because the concept of old age, its character and role, are radically changing. It was natural for functions to be distributed between coexisting and collaborating generations, the ``fathers'' preserving the existing order and the ``sons'' emerging as the bearers of the new, of that which violates the tradition. The conflicts between ``fathers'' and ``sons'' usually expressed the break between the two components of labour and knowledge, i.e., between maintenance of the tradition and its transformation. Such a break was the basis both of the traditionalism of old age and of the nihilism of youth. Real scientific, technical and economic progress was based on both these components: practice and


experience prepared the transition to new general conceptions, but at the same time their results could be neither found nor formulated nor applied without recourse to certain established general categories. In classical science and in production embodying these general categories such recourse could abide by the old conceptions during lengthy periods---hence the illusion of their a priori character, the a priori adherence to the already established, and the nihilistic refutation of the already established. The epistemological basis of these conflicts was the quasi-static character of the scientific conceptions. Within the framework of the dialectical world-outlook, the understanding and generalising of the fundamental shifts in cognition and practical life did not involve either the illusion of an a priori immobile picture of the world, or a break between the new and the old in science and economy resulting from this illusion.

The role of the older generation in the life of society greatly depended on the relationship of these components of knowledge and of the transformation of the world, which had merged and become complementary. Initially, practical experience and the empirical registration of events and regularities did not constitute stable meaningful series. In those times the preservation of traditions had not yet become a peculiar distinct function, and old men who had not become chiefs were left without food; they were killed and sometimes even eaten. Then certain stable empirical knowledge and rules were found and consolidated by tradition and custom. They seemed sacred, and their custodians, possessing the greatest life experience, became chiefs. Even later, power, influence and active effect on life and labour were, to a certain extent, related to age. The transformation of industry into applied natural science, the replacement of tradition by science, the comparatively high dynamism and tempo of technical progress essentially changed the social weight of the age groups. But we are interested here in the corresponding effect of non-classical science and the modern scientific and technological revolution.

In non-classical science empirical experience, the external confirmation, "the advancement of reason", is inseparable from logical constructions, from inner perfection, from "the penetration of reason into itself". The gradual accumulation of empirical data and their subsequent logical generalisation are no longer characteristic of science; the transformation of general constructions more often accompanies experiment and even merges with it. But this phylogenetic peculiarity of modern science is also characteristic of ontogenesis, of the creative way of a scientist. Also characteristic of it is another peculiarity of modern science: developing a new principle no longer means finding new "external confirmations" of an invariable scheme; such confirmations are accompanied by the transformation of this scheme. Consequently, non-classical science is not characterised by a burst of theoretical thought at the beginning of the creative path, later to be replaced by a peaceful development of the established principle.

The break between rather invariable general principles, on the one hand, and changing empirical data and particular generalisations, on the other---characteristic of classical science---signifies a certain break and a certain illusion of independence between the two components of knowledge: identity and non-identity. The presumption of identity permits the application to new phenomena of the relatively immobile concepts and norms established in the past. Such extrapolation seems to be the prerogative of old age. Non-identity, irreducibility, the specificity of the new revolt against the identifying experience crystallised in these norms. It seems to be a prerogative of youth to state the specificity of the new. But already in classical times, taken in historical perspective, such a distribution of functions proves an illusion---a regular illusion, but an illusion nevertheless. Non-classical science and the experience involved in its application leave no grounds for a similar illusion. New experience makes it imperative immediately to change, modify, generalise and concretise the general principles. The classical, and rather illusory, division of labour between the generations loses its meaning.

In my book about Einstein I made an attempt to consider from this point of view the modern ontogenesis of scientific theory, recalling in this connection the contrasting confrontation of old age and youth in the treatise written by Longinus at the beginning of our era, which analysed from this standpoint the difference between the Iliad and the Odyssey. Longinus ascribed the Iliad, with its heat of passion, to the young Homer, and the Odyssey, permeated with quiet thought, to the poet in his old age (the Odyssey, in Longinus' words, suggests the Sun about to set: it preserves its colossal dimensions, but it no longer blazes...). If the explosion of constructive thought is associated with the Sun at its zenith, with youthful passion and temperament, and the peaceful development of a new principle with the Odyssey, with the setting Sun, then such an analogy does not hold for contemporary scientific creativity.

Accordingly, production combines the development of technical principles (once it was possible to say "peaceful development"...) with the revolutionary transformation of these principles.

On the whole, non-classical science and its application bring together those characteristic features of creativity that were formerly associated with different stages of aging. The concept of ``acme'' (the term the Greeks used to designate the highest efflorescence of Man's creative powers) changes: it is no longer a peak on the graph, but a curve extended along the time axis. It reaches a maximum comparatively early and retains a maximal value until death or nearly until death. For this reason the struggle for longevity---as a struggle for improving living conditions (in particular, for ameliorating the ecological environment) and for increasing the efficiency of medicine---conforms to the requirements and possibilities of modern science and production. The increase in average longevity means a radical decrease in the number of disabled, a radical lengthening of maximum creative capacity for work.

Thus, gerontological optimism is closely linked with epistemological, scientific, technical and economic optimism.

However, one should not think that gerontological problems are derived from economic ones. Man, the subject of labour, and his interests are the goal, the starting point that determines the plans for the remoulding of the character, the tools and the objects of labour. Man's interests lie in the extensive and intensive increase in life expectancy, which will lengthen life, filling it maximally with the active transformation of the world. Part Two of the book will treat the objective tendencies of scientific progress, and Part Three the specific problem of optimism, the relation between the goal of labour, production, science and the objective possibilities created by non-classical science.

PART TWO SCIENCE IN THE YEAR 2000

WHY THE YEAR 2000?

Can this date---the year 2000---be deduced from certain definitions of modern science or from the character of its trends?

Before answering this question, it must be emphasised that there exists a reverse correlation: the very definition of the modern trends demands that a forecast be made, that a picture be drawn of the development of science in the coming decades.

Here the following analogy will be appropriate. Let us imagine a physical experiment in which new elementary particles are generated. The reaction leading to the emergence of the particles takes but little time, say of the order of 10⁻²² sec. However, to define what particles are generated, what their mass, charge and lifetime are, it is necessary to know how each particle will eventually behave: how it will move, how its path will curve in a given magnetic or electric field, how long its track will be before the disintegration which puts an end to its existence. Only such knowledge about the further fate of the particle gives physical meaning to the problem of the particle's belonging to this or that type, of its charge, mass and lifetime.

The description of modern scientific progress is similar to the determination of the eventual destiny of the particle and the definition of its type. It is very hard now to determine the nature of the tendencies arising in science. It is harder still to define the technical effect of these tendencies---the results they will yield upon their implementation. The hardest task, though, is to determine the economic and social effect of modern scientific trends once they are implemented. Yet, in the absence of such prognoses, it is well-nigh impossible to say what these modern tendencies consist in. We can name a particle, determine its type and---if we visualise its eventual destiny---its track. Likewise, it is only by dint of scientific hypotheses, scientific and technical forecasts and economic projections that we can ascertain the tendencies of scientific and technical progress, name them and comprehend their meaning.

The primary and basic goal of modern economic, scientific and technological forecasting is to determine, in making the right decision, the economic value of all the possible variants now available. Thus, it should be most emphatically stressed that we are dealing, in fact, not with the year 2000, but with the current year. The following example will illustrate the vital character of such forecasting. In planning a new plant, mine, power station, railway line, port, etc., a depreciation period will necessarily be fixed for a tool, an assembly unit or the whole enterprise. Under the scientific and technological revolution the prospect of moral depreciation may gain priority over the prospect of the physical wear of a tool, or over the exhaustion of a deposit when planning a mine. Difficult though it may be to predict the appearance of a machine or production process with greater competitive power than the ones currently planned, these estimates, however conjectural, are absolutely indispensable under the scientific and technological revolution. Similarly, these estimates are connected with scientific prognoses, which are even more hypothetical than technological ones, for they forecast radical changes, i.e., changes not only in design and technology, but likewise in the ideal physical cycles which are embodied in one way or another in the designs and technological methods currently used.

But that is not all. At present, the value of a scientific principle, design or technological process is measured not so much by its foreseeable or already identifiable economic effect, its technical level, as by its effect on the rate of scientific, technological and economic progress. How does a discovery, an invention, a new scheme, a new design or a new technology affect the speed and acceleration of progress? This problem is now of no lesser importance---and at times of greater importance---than that of their influence on the level of science or economics. Ours is an epoch of differential indices, differential criteria. This point will be dealt with in greater detail in Part Three of this book. Here we merely wish to emphasise the need for forecasting in determining the differential indices.

The rate of a process, its speed and acceleration---in other words, the derivative of a changing value x with respect to time---is determined, as is known, by the ratio of the increment Δx to the time increment Δt when the latter is contracted to a moment, approximating zero. This is the way speed is defined, and the operation is repeated to define acceleration. Prognosis constitutes the increment which we must discover in order to give a dynamic description of the given moment in science, technology and economics. It is a tangent, as it were, drawn at the given point to the curve, indicating the direction of the curve.
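In standard notation, the definitions paraphrased above are:

```latex
% Speed as the limiting ratio of the increments; acceleration is
% obtained by repeating the same operation on the speed itself:
v = \frac{dx}{dt} = \lim_{\Delta t \to 0} \frac{\Delta x}{\Delta t}, \qquad
a = \frac{dv}{dt} = \lim_{\Delta t \to 0} \frac{\Delta v}{\Delta t}
% The tangent drawn at the present moment t_0 gives the local
% direction of the curve x(t) -- as a prognosis does for a science:
x(t) \approx x(t_0) + v(t_0)\,(t - t_0)
```

The last line is the sense in which prognosis is "a tangent": the curve does not coincide with the tangent, but without it the local direction of development cannot be defined.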

The curve, generally speaking, does not coincide with the tangent, and remains a curve. But in the absence of a tangent it is impossible to define the local direction of the curve.

Prognosis is a tangent to a most considerable extent and in its most important functions, determining as it does, the direction of development, state of motion, dynamics of the present moment, the dynamic value of decision variants to be chosen now, variants of the initial conditions that have a bearing on the subsequent development of science, technology and economics.

But why are we taking several decades as the Δt, the increment of time? Why have we singled out for forecasting the remaining quarter of our century; why do we wish to discover the course of science, technology and economics within the coming twenty years or so? How is this date---the year 2000---deduced? Cannot the lines characterising modern tendencies be extended, say, over a span of a hundred years, two hundred years or perhaps longer? On the other hand, will not short-term forecasts covering periods of three, five or ten years be more indicative in other cases?

It will be understood that the year 2000 is an arbitrary date. However, it is not entirely arbitrary, since it indicates the order of magnitude of the term within which the modern tendencies of scientific and technological progress will be realised. Perhaps such realisation will take not thirty but twenty or forty years; yet a definite order of magnitude of the term is involved. But that is not all. The date 2000 conceals in itself the idea of a single complex of interrelated changes, of their common integral realisation timed to a certain date identical for all branches and all ways of progress.

What does this complex consist of?

Part Two of the book is to give an answer to this question and the introductory essay of this part confines itself to a most general preliminary answer: The promises of non-classical physics will be realised within a period of time which is measured by several decades and which we provisionally identify with the end of our century.

What are these promises?

Non-classical science promises prognoses for the further development of atomic power, quantum electronics and molecular biology. First, emphasis should be placed on the most characteristic common epistemological feature of the modern stage of science, the feature that has called into being the above-mentioned trends of scientific and technological progress. This feature, which determines the character and content of present prognoses, is the coupling of concrete scientific and technological discoveries with the re-evaluation of the most fundamental principles of science and with the implementation of the new physical ideas formulated in the first half of the century. The beginning of this century was marked by a most radical re-evaluation of the classical foundations of science and, what is probably more important, by a rejection of the very presumption of an immobile basis of the developing concepts of the world. It may be inferred from a concrete analysis of modern tendencies of science that this century will, in all probability, end in a full industrial and technological implementation of those new physical ideas whose emergence crowned its beginning. It is to be assumed that within several decades, i.e., a period


which, as has already been said, we identify somewhat provisionally, but for good reasons, with the last quarter of the century, a new scientific basis of production and new applied natural science will be brought into existence.

Some explanations are due here. The 17th century saw the emergence of classical science, called so because the basic laws of Nature discovered by Galileo, Descartes and Newton, followed by a succession of great thinkers of the 18th and 19th centuries, claimed to be ultimate truths which will forever remain invariable canons of scientific thought just as the architectural and sculptural masterpieces of classical antiquity became canons for artistic creativity.

Classical physics, and primarily the laws of mechanics set forth by Newton in Philosophiae naturalis principia mathematica, had certain grounds to claim the status of eternal truths. Beginning with Newton, science has developed by accepting everything experimentally established and verified, generalising old laws to define them more precisely, finding new areas for their application and demonstrating how these laws are modified in the new areas. Classical science, however, laid claim to something greater. Most thinkers of the 18th-19th centuries believed that Newton's laws of mechanics constituted a stable basis of science. Classical science is not just a set of certain axioms (such as the independence of a body's mass of its momentum, or the continuity of energy, the possibility of infinitely small increments of energy), but also a conviction that these are really axioms. It is not even a question of subjective conviction. The concepts of classical science essentially do not require any other or contradictory assumptions for their comprehension.

What is non-classical physics? It is sometimes defined in a purely negative way: it is non-classical physics, generally repudiating the fundamental postulates that classical physics proceeds from. In 1900 Max Planck suggested that energy is emitted in minimal portions called quanta. Several years later Einstein demonstrated that the relativity of space, time and motion (these concepts were opposed to Newton's absolute space, time and motion) leads to a correlation between a body's mass and its velocity, and consequently the energy of its motion: when its speed approaches the limit---300,000 km per second---the mass of the body tends to infinity. Einstein postulated, further, that a body's rest mass m is proportional to its internal energy E; if mass and energy are measured in conventional units, the energy is equal to the rest mass multiplied by the square of the velocity of light c. Thus, E = mc².
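The two results just cited can be written out explicitly. The "relativistic mass" formulation used here reflects the usage of the text's period; in modern treatments it is often restated in terms of energy and momentum, but the formulae are the ones the text describes:

```latex
% The mass of a moving body grows without limit as its speed v
% approaches the speed of light:
m = \frac{m_0}{\sqrt{1 - v^2/c^2}}, \qquad c \approx 300\,000\ \text{km/s}
% The internal (rest) energy is proportional to the rest mass m_0:
E = m_0 c^2
```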

In the 1920s an even more paradoxical non-classical theory appeared---quantum mechanics. Niels Bohr and Werner Heisenberg demonstrated that a particle in motion, generally speaking, has no definite position or velocity in space at a given moment. These new correlations, inherent in processes far removed from everyday experience, had an unexpected impact on the general public. It might have seemed that a body moving at a speed comparable to that of light (considered in the theory of relativity) would evoke no emotions in people not engaged in theoretical physics. In like manner they should not have been concerned about the fate of an electron passing through an opening comparable to the size of an electron. Nor should the general public, obviously, have been impressed by the purely mental, practically unrealisable experiment demonstrating that passage through an opening changes the electron's velocity, rendering it indeterminate. Nevertheless, the impact was unprecedented. Quantum mechanics, as well as the theory of relativity, aroused not only widespread interest, but brought about a great change in the mode of thinking about Nature. A similar change in men's minds was probably caused by the disappearance of the absolute ``up'' and ``down'' in ancient times, when the idea of the Earth being a sphere was accepted. In much the same way minds were confused by the astronomy of the 16th-17th centuries that put an end to the idea of an immobile centre of the Universe. Not only did the concept of the fundamental laws of Nature change; the very concept of science changed too. The theory of relativity and, later, quantum mechanics did not just replace old fundamental laws by new ones. The new laws no longer laid claim to an ultimate solution of the primary problems of existence. In the 19th century Hermann Helmholtz saw the supreme and ultimate goal of science


PHILOSOPHY OF OPTIMISM

PART TWO. SCIENCE IN THE YEAR 2000


in reducing the picture of the world to central forces fully subjected to Newton's mechanics. A modern physicist does not intend to replace this goal with another, or with any ultimate aim. These Victorian illusions have been given up for good. Non-classical physics is like a building which not only grows in height, but reaches down in search of an ever deeper but never final foundation. In this respect, Man's reason not only saw a new Universe, but saw itself in a new light.

The effect of non-classical physics was not only negative. Mankind felt by intuition that it was entering an epoch of greater dynamism, that science was indubitably bringing about deep, though still vague, changes in the life of people, that not only life itself but scientific concepts and potentialities of science would undergo a continuous change, and the impact of science on life would continually influence the material and spiritual forces of mankind.

Those who remember the impression first made by the theory of relativity and quantum mechanics on public mentality can testify to the optimistic character of this effect. The 1920s saw a radical reassessment of values, with stability, recurrence, constancy losing their Victorian optimistic nimbus. Optimism was ever more associated with transformations. It is but natural that the discrediting of immobility and the apotheosis of motion should be only approximate features, requiring elaboration and the addition of statements that contradict them. Naturally, the roots of the reassessment of values mentioned earlier were far deeper than the effect of non-classical science. Perhaps the latter was not even one of the roots; the psychological effect of science merely coincided with the dominant changes in public mentality. That was one of the reasons for the intense interest in the new science, so peculiar to the twenties.

In the middle of the century intuitive insight turned into distinct prediction. Now we can, to some extent, determine the effect of non-classical physics, of its basic feature---the incomplete and open character of new concepts of the world and the inevitable revision of the basic principles of science. Now let us consider the effect of non-classical physics at the present time.

Classical physics also caused both the scientific concepts and the impact of science on the material and spiritual forces of humanity to wax dynamic, mobile and changeable. But that was a dynamism of a different, lower calibre, for only specific scientific notions underwent certain changes, while the fundamental principles remained intact. The change of selected scientific concepts brought in its wake first a sporadic, and at the end of the classical period (the beginning of the 20th century), an uninterrupted change in the technological level of production. Beginning with the industrial revolution of the 18th century, industry has turned into an applied natural science. Technological progress sporadically or continuously makes use of the schemes of classical science, treating them as ideal cycles to be attained by industrial technology. The entire history of classical thermodynamics is one of gradual approach to the ideal Carnot cycle, to the ideal physical scheme of heat flowing from warmer to cooler bodies, with heat transformed into mechanical power in the course of such transition. The ideal physical schemes themselves never remained stable, always being supplemented by new ones. Science discovered new laws governing conservation, entropy, molecular structure, evolution of organic and inorganic nature, with the number of schemes that served as goals for industrial practice ever increasing. The main objective of 18th century power engineering was to conserve mechanical energy in the transformation of the available potential energy (e.g., water flowing into the buckets of an overshot wheel), or available kinetic energy (water moving the vanes of an undershot wheel), into mechanical rotation of machines whose forerunners were the spinning looms that heralded the industrial revolution.
In the 19th century (or rather in the period from the end of the 18th and nearly through the whole of the 19th century) the main objective of power engineering came to be the conservation of energy in the transformation of heat into mechanical power. Increased efficiency of thermal equipment signified advance towards this objective. At the end of the 19th century, when the transformation of mechanical power into electricity, and the transformation of the latter into mechanical power (which was deduced from the basic equations of electrodynamics, Maxwell's equations), became known to science, it set a new objective for technical progress, with power engineering seeking to carry into effect the following scheme: the movement of a conductor in a magnetic field generates an electric current, and the latter causes a conductor at a considerable distance, also in a magnetic field, to revolve. This scheme, implemented as a centralised system of electric supply, is the main target of electrification.

Below we consider in greater detail electrification as the implementation of classical electrodynamics. But before that we wish to remark on the implementation of classical physics in the whole evolution of power engineering up to the middle of our century.

Classical science deals with discrete particles of matter, viz., macroscopic bodies, molecules and atoms. The energy of all these moving bodies owes its existence on the Earth to solar radiation. The Sun creates all classical sources of energy used in industry. Sunrays make water molecules move upwards, causing winds and differences in air pressure, transforming molecules of organic matter in the absorption of light by chlorophyll, i.e., stocking energy as fuel. Thus classical power engineering remains within the limits of processes occurring in the immutable solar system. In anticipation we should like to point out that new power engineering, or the embodiment of non-classical science, is based on processes involving the emergence of atomic nuclei, as well as processes of creation and destruction of galaxies.

Classical power engineering, too, for that matter, was based on processes which have now been given a non-classical explanation, including processes causing solar radiation, accumulation of its energy in chlorophyll and even generation and propagation of current in conductors. The word ``based'', however, has a different meaning here: classical power engineering could develop although the non-classical nature of these processes had not yet been detected. New power engineering, on the contrary, essentially depends on such discovery.

But let us return to electrification. It comprised the use of classical sources of energy in unified systems of generating points and consumers of electric power connected by high-voltage transmission. Yet, this was only the first stage in electrification, and it caused reverberations in technology, in the raw material basis of industry, in the character of labour, in culture and science.

In technology, the unification of energy generation resulted in wide industrial application of electrolysis. Production processes and technological methods requiring considerable electricity consumption became more economical as electrification made greater use of water power and cheap local fuel. Artificial nitrogen fertilisers could now be produced on a scale unprecedented in the past, which immediately raised the productivity of agriculture. Further, electrification opened the way to methods of production of light metals and special steels, requiring high energy consumption. The metal basis of industry changed, bringing about a corresponding change in the raw material basis. Now there arose a need for rare metals and elements in general, which were known to chemistry but utterly unknown to technology. Dozens of elements of the Mendeleyev Periodic Table became new industrial raw materials.

Electrification changed the character of labour. The flexible electric drive made it possible for machines to replace workers in more complex operations. Electric motors, heavy-duty or small, powered numerous mechanisms which processed machine parts, moving and transferring them from one automated machine tool to another. There appeared servomotors that do not process the parts, but control other motors, resetting operating conditions, varying cutting tool angles, changing the travel of automatic transfer lines, etc. Electropowered production with its automated lines is monitored from a control console with gauges to indicate speed, voltage, temperature, raw materials input, product output, as well as push buttons and control levers to operate a complex unit or system of aggregate units.

The general economic effect of electrification amounted to the following:

The application of electric power in production processes, utilisation of new kinds of raw materials and industrial automation actually became continuous processes. Not a single week passed without a new part, a new arrangement, a new formula, new operations, new parameters appearing in some design bureau, laboratory, or workshop. Accordingly, technical progress advanced continuously, and the growth of social labour productivity likewise became practically uninterrupted.

As is known, continuous changes in magnitudes can be presented, certain mathematical subtleties aside, in terms of time derivatives. The first time derivative of a moving point's position is its velocity, the second derivative is its acceleration. The economic effect of electrification may be stated as follows: with electrification the first time derivative of labour productivity becomes positive, it is greater than zero, and labour productivity, having acquired a certain velocity, rises continuously.
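The point can be put in elementary numerical terms. In the sketch below, with an invented productivity index, finite differences stand in for the time derivatives: electrification gives the index a positive first difference, while the second difference remains zero.

```python
# A hypothetical productivity index over successive years.
index = [100, 103, 106, 109, 112]

# Finite differences approximate the time derivatives discussed above.
growth = [b - a for a, b in zip(index, index[1:])]   # first derivative
accel = [b - a for a, b in zip(growth, growth[1:])]  # second derivative

print(growth)  # [3, 3, 3, 3]: positive, productivity rises continuously
print(accel)   # [0, 0, 0]: zero, the rate of growth itself is constant
```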

In 1920, a plan for the electrification of Soviet Russia was elaborated. It provided for an immediate programme of constructing electric power plants as well as for a far-reaching programme of uniting all power plants in the European part of the country in a single power grid, with an increase in capacity. The plan laid down guidelines for the course and scale of industrial electrification, the use of electric power to mechanise industrial production and in new production processes; it envisaged, on a long-term basis, the electrification of transport and agriculture and mapped out the future development of the main industries to be reconstructed on the basis of electricity. On the whole, it constituted an integral complex of industrial transformations timed to approximately the same date, including the creation of a high-tension network linking large power plants, mechanisation of industry, shifts in the character of labour, the development of industrial branches requiring great energy consumption, and changes in the sources of raw materials.

Now it will be easier to understand wherein lies the effect of non-classical physics.

First and foremost, it lies in the building of a new power basis for production. In this case, the word new stands for something radical, a rather general physical principle. The general physical principle behind the technological revolution caused by the machine-tools of the 18th century was the Newtonian law of forces: the acceleration of a body subjected to a force is proportional to that force, the constant of proportionality equalling the mass of the body. The fundamentals of thermodynamics were the general principle of the revolution brought about by thermal machines in the 18th and 19th centuries. The laws of electrodynamics, Maxwell's equations describing the relationship between the magnetic field and the electric field and implemented in the transformer, generator and electric motor, were the general principles at the root of the revolution caused by electricity. The general principle governing atomic power engineering, which defines the ideals and ways of research and the subsequent application of its results, is the relativist relationship between the mass of the nucleus and the energy of the coupling of nuclear particles. None of these formulas, of course, is in any way opposed to the others: if a modern atomic reactor (using a small but already substantial share of the energy calculated with the help of Einstein's formula) generates heat, the subsequent calculation of the use of this heat relies on classical thermodynamics and electrodynamics, whereas calculations of mechanical processes in the atomic reactor depend on classical dynamics. Now, however, we do not judge the evolution of the power base by measuring the successive dynamic use of the calorific power of fuel burning in the direct classical meaning of the word (i.e., combining with oxygen), nor do we regard energy as stored by the Sun in a molecule of organic matter. We now measure the use of the inner energy of the nucleus, stored there when the nucleus was created as a result of processes occurring in very small spatial and temporal domains that are, however, associated with the cosmic evolution of stars and possibly galaxies.

The stage in scientific and technological progress associated with atomic energy will by no means end in complete utilisation of the relativist energy E = mc², in the same way as the consummation of the revolution caused by steam did not mean full utilisation of the calorific power of coal. The revolution produced by steam power was completed when coal became the main component of the energy balance, when industry migrated en masse from rivers and water wheels built on their banks to coal-fields, when there appeared steam-powered means of transport and the classical industrial centres. In a similar fashion, the revolution accomplished by electricity did not end in full utilisation of the classical energy sources. Its completion (certainly very relative, retaining the prospect of further development of power plants, networks, industrial electrical equipment and methods of technological application of electricity) meant the creation of big interregional power grids, wide automation, and electric-power-based technology and, as a result, continuous technical progress, in other words, a non-zero derivative of the level of technology and productivity of labour with respect to time.

With regard to the revolution brought about by atomic energy (to continue the analogy with electrification as the implementation of classical physics), one may consider a certain complex of interrelated shifts in technology, culture and science, in the character of labour and the raw materials base, as the content of this particular period; and the transformation of atomic power plants into the predominant source of electric energy production, automation based on electronic computers and controlling machines, and industry free from the threat of depletion of resources, as the completion of the process described above.

All these results of atomic power engineering (they might also be called resonances, for atomic power engineering only intensifies the inner tendencies of electronics and cybernetics) lead to a continuous acceleration of technological progress. The development of atomic power engineering no longer presents a series of constructions ever more closely approaching the ideal physical scheme; it often constitutes a change of the scheme itself. We shall later return to this peculiar feature of atomic power engineering. In analogous fashion, the ``resonance'' processes of application of cybernetics and electronics in technology often change the overall set-up in an entire industry and not only the engineering part of one and the same scheme. Without citing here any examples or giving proof, we shall provisionally formulate the main feature of atomic age economics: the level of technology and the level of labour productivity not only grow, but their growth is continuously accelerated, accompanied by an increase in the speed of technological progress and labour productivity. It is not only the first time derivative of labour productivity that becomes positive, but the second derivative is also greater than zero.

This is the main economic effect of atomic energy being converted into the main component of the power balance, of electronics being transformed into the basic means of technology, of work aided by cybernetic mechanisms being changed into the main content of labour.

And what happens next? Can we now map out the outlines of the post-atomic age?

We cannot do that, but what we can do is to indicate with great precision the process which is already now providing for the post-atomic civilisation. A special essay is devoted to this question. We only wish to point out here that we definitely know what the provision for the post-atomic civilisation involves, but we are not at all aware of what this provision will result in, what will be the new scientific concepts which will give rise to new post-atomic power engineering and technology, and a new character of labour.

The way that leads to such new scientific concepts is the study of elementary particles: not atoms and atomic nuclei, but those particles which it has so far been impossible, and will hardly ever be possible, to divide into sub-particles. These include electrons, i.e., particles with a negative electric charge, and nucleons, the particles contained in atomic nuclei: protons with a positive electric charge and neutrons with no electric charge, as well as many other particles. The problem is that we can hardly distinguish elementary particles from non-elementary ones, nor can we so much as point out the factors on which the mass and charge depend, distinguishing one type of elementary particle from another.

There is every reason to believe that these questions can only be solved by an outright rejection of conventional concepts, a rejection, possibly, more radical than that of the classical axioms of physics by the relativity theory and quantum mechanics when they were being created.


It might well be that in a decade or two (or within some such period) the most fundamental principles of science will rapidly start to change. Not only concrete scientific schemes will then undergo a change (this is already happening in our days), but the very ideals of science sought by scientists in working out new scientific schemes will become different. Then, not only the acceleration of technological progress may become continuous, but the acceleration itself will continuously increase and become a real positive third time derivative of the level of technology and the level of Man's power over Nature.

Yet, for the time being this third derivative is not a measurable value but only a symbol of the possible economic effect of the fundamental research that is expanding our knowledge of elementary particles. This research probes into very small spatial regions (say, of the radius of an atomic nucleus, i.e., 10⁻¹³ cm) and temporal intervals of the order of 10⁻²³ sec. This can be achieved with the help of very powerful accelerators of elementary particles. Another, additional way is astrophysical research, particularly studies of cosmic rays, that is, the flow of high-energy particles coming to the Earth from space.
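The two scales quoted are mutually consistent: an interval of the order of 10⁻²³ sec is roughly the time light takes to cross a region the size of an atomic nucleus. A quick check, in round numbers:

```python
# Time for light to cross a nuclear radius of about 1e-13 cm.
radius = 1e-13  # nuclear radius in centimetres
c = 3e10        # speed of light in centimetres per second

t = radius / c
print(f"t = {t:.1e} s")  # a few times 1e-24 s, i.e. of the order of 1e-23 s
```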

This is ``disinterested'' research. The quotes, though, do not question the really disinterested nature of Man's aspiration for solving purely cognitive problems which lead him into cosmic space and the microcosm. Whatever the possible practical results of astrophysical research or of the construction of very powerful particle accelerators, it is not these results, in principle undefinable in advance, that serve as immediate incentives for research. This research is conducted as a macro-economic undertaking primarily pursuing cognitive goals. Man is already aware of the fact that the abstract character of cognitive tasks, and the completely undefinable practical results of their fulfilment, conform to the radical character of these results, indefinable in advance, and, ultimately, to the radical acceleration of economic progress. It is clear that it was due to the exceedingly general, abstract and purely cognitive character of the tasks set at the beginning of the century concerning space, time, motion, ether, mass and energy, that the theory of relativity became the source of such a fundamental practical result as atomic power engineering. Now science is confronted with even more general and basic problems. Attempts will be made to solve them independently of their definable practical results. None the less the quotes in ``disinterested'' are not without meaning: the ``interest'', though not to be known or quantitatively defined in advance, is absolutely indubitable and extremely great.

Is it an economic concept? Is it possible to speak about the economic effect of fundamental studies in the theory of elementary particles?

Apparently, it is time now to generalise the "economic effect" concept, including in it not only the productivity of social labour, but also the rate of growth of this index, its acceleration and, possibly, even the rate of acceleration. As has already been pointed out, time derivatives of labour productivity are implied here: the first derivative (rate of growth), the second derivative (acceleration) and the third derivative (rate of acceleration).
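The generalised index can be sketched in the same finite-difference terms (the series below is invented for illustration): in the "atomic age" pattern, the first, second and third differences of the productivity index are all positive.

```python
# Successive finite differences of a series.
def diffs(series):
    return [b - a for a, b in zip(series, series[1:])]

# A hypothetical productivity index whose growth itself keeps speeding up.
index = [100, 102, 106, 114, 130, 158]

rate = diffs(index)    # first derivative: rate of growth
accel = diffs(rate)    # second derivative: acceleration
jerk = diffs(accel)    # third derivative: rate of acceleration

print(rate, accel, jerk)
print(all(x > 0 for x in rate + accel + jerk))  # True: all three positive
```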

Taking into account the time derivatives, including the third derivative, one may consider the fundamental, ``disinterested'' studies (concerning space and time, their finity and infinity, their continuity and discontinuity, the ``elementariness'' of elementary particles, the nature of their mass, charge, etc.) as links in the chain of Man's economic activity, as something enhancing Man's power over Nature, and increasing the sum of material, intellectual, and esthetic values consumed by him.

Only very simple studies, like checking the quality of raw materials and production, machine-tool speed, steam pressure, voltage and so on, maintain the given level of labour productivity. Design and development increase the productivity of labour, lending it a non-zero growth rate. In point of fact, it is scientific research that guarantees acceleration, the most fundamental research holding promise of an increased growth rate of social labour productivity. Nothing can give a greater impetus to accelerated productivity of social labour, and consequently to civilisation as a whole, than ``disinterested'' research, which is really disinterested if the level of labour productivity is implied, yet involves a very great objective ``interest'' in terms of the accelerated growth of this universal index of civilisation.

There is rather a distinct relationship between the degree of generality, depth and ``disinterestedness'' of scientific research and the uncertainty of its economic effect. Check measuring, design and development, scientific research proper conducted along fundamentally established guidelines and, finally, fundamental studies bring about a successively more intensive and at the same time more uncertain and unexpected effect.

We can take as an adequately accurate and universal rule the following correlation: the higher the order of the derivative whose value is affected by the result of research, the more indefinite the economic effect of this result, and the more profound this effect.

Correspondingly, economic theory is to include indeterminacy as a fundamental concept. Having become an exact science, it must share the common destiny of exact sciences operating with the fundamental concept of indeterminacy.

The indeterminate effect of fundamental studies is an indeterminacy of a different kind from the indeterminate effect of scientific research with predictable (though not unequivocal) results, or the indeterminate effect of design and development. It limits any prognosis claiming even the least determinacy. Such prognosis should not go beyond the complex of transformations in power engineering, technology, character of labour and sources of raw materials, which guarantee an accelerated growth rate of labour productivity conforming to the concept of "atomic age". Fundamental research is a peculiar memento mori, an indication of the time limits of such a complex.

Such limited prognosis springs from the radical rejection by non-classical science of any absolutes given once and for all. A fundamental feature of non-classical science is that it sees its own limitedness and, moreover, contains certain indications of a possible modification of its own basis. But these indications are not sufficient to be implemented in new schemes and ideal cycles, which might become landmarks of scientific and technological progress.

Their prognostic value lies in the possibility and necessity to limit the prognosis in time. We refer to the possible, absolutely new post-atomic conditions of technological progress to follow the complex of interrelated power and technological transformations, which will be realised within several decades, approximately by the year 2000.

The atomic age comprises the forthcoming three to four decades, a period for which we can outline relatively definite scientific and technological prospects and a relatively definite integral economic effect to be achieved by the development of science. In the first half of the century, new integral principles of science appeared, and scientific thought went beyond that which we call general boundaries, no longer separating one branch from another, but one epoch from another. This impulse, generated in theoretical physics, moved from one branch to another at ever greater speed due to the new mathematical apparatus and new experimental methods. The impulse itself increased in an avalanche-like fashion, retaining to some extent the possibility of predicting the direction of scientific and technological progress. There appeared atomic power, quantum electronics, cybernetics, molecular biology, which will be considered in greater detail in this part of the book. They involve the fundamentals of non-classical physics either directly (atomic power, quantum electronics) or indirectly (molecular biology), their development now having no serious obstacles on the way to a transition to new integral foundations of a scientific world-outlook as a whole.

Hence certain permanent regularities in the evolution of economic indices as functions of scientific and technological progress. If productivity of social labour acquires a non-zero first time derivative, i.e., unfading speed, as a result of technical discoveries proper, new technological designs and constructions; a non-zero second derivative, acceleration, as a result of scientific discoveries proper, new physical schemes and ideal cycles; and a non-zero third derivative as a result of changed basic principles of science as a whole, then in the coming decades---tentatively up to the end of the century---we can proceed

from the non-zero second time derivative of productivity of labour, from the acceleration of this index as the main inequality characteristic of the period for which the forecast is made.

All this answers the question "Why the year 2000?" But now we are faced with another question: Why is it that precisely now, in the early 1970s, it has become possible to make a comparatively well-founded prediction for the year 2000?

First of all, already in the sixties atomic power plants became capable of competing with thermal coal-burning plants. A later essay dealing with atomic energy will cite the comparable costs of a kilowatt-hour in atomic and coal-burning plants. The fact that the difference between these costs has lessened creates the possibility of a decades-long transition to a predominantly atomic production of electrical energy. Naturally the rate of the transition depends essentially on the degree to which the near equivalence of these costs is replaced by a difference in favour of the atomic plants. Now, however, we are passing the point of intersection of the curves of kilowatt-hour costs. There are some reasons to believe that the cost of a kilowatt-hour in atomic plants will decrease more rapidly than the cost of a kilowatt-hour in thermal power stations, permitting us to predict a consistent increase in the difference in favour of atomic plants. In any case, forecasts for the future role of atomic energy are based on the proven possibility of a profitable switch-over to the new power balance structure. One may even count on an accelerated transition, since in the seventies the problem of nuclear reactors that produce more nuclear fuel than they consume will be solved physically and technically.

A peculiar feature of the early 1970s is that the new electronically based technology came closer to the basic industrial processes. Within the same period cybernetics, on achieving successes of great importance for the future of communications, processing, storage of information and control, has approached purely industrial problems. These three basic developments---atomic energetics, electronics and cybernetics---have now reached their economic maturity, as it were. The latter form the scientific and technological basis for modern prognostication embodying non-classical science. The realisation of such prediction requires an increase in that which may be termed the intellectual potential of science. The latter depends on the depth and generality of the fundamental problems whose solution requires the establishment of new relationships between differentiated branches of knowledge, the transference of experimental and mathematical methods from one branch of science to another and an increase in the number of these methods.

These basic tendencies---atomic power, electronics, cybernetics---do not entail only expanded production, and accelerated expansion at that; they bring about a very great and rapid expansion of that which might be termed the spatial and temporal limits of forecasting. Modern science and modern technology penetrate into the microcosm, creating microscopic noozones---zones of rational and purposive ordering of microprocesses. However, processes beginning on the micron level produce consequences affecting the entire lithosphere, hydrosphere and atmosphere of the Earth, and events taking up a millionth part of a second change the course of age-long processes on the Earth. Thus, our age could be termed an epoch of chain reactions.

An essential, or rather fundamental, result, vital for Man, of the expanded spatial and temporal scale of the effect brought to bear upon the present scientific and technological developments is their impact on Man's ecology, his environment, flora and fauna, the composition of the atmosphere and water, radiation level, the balance of natural resources indispensable for life and industry. A new criterion for the evaluation of scientific and technological projects has emerged, which supersedes the criteria of cost of an established capacity unit, or production cost, etc., but, from a certain point of view, might be considered an economic criterion, if economics, production and labour are taken to cover the totality of the interactions of Man and Nature. The ecological criterion can be the basis of a positive evaluation: modern science and modern technology can bring about changes, necessary to mankind, in habitation, geophysical conditions, the Earth's vegetation, balance of natural resources, and can eliminate destructive cataclysms, making large territories habitable. This criterion can lead to a negative evaluation of the content and scale of scientific and technological undertakings. Anyhow, prediction on a very great spatial and temporal scale is becoming a necessary condition for the correct evaluation of projects and optimistic prognosis, a necessary prerequisite for the realisation of scientific and technological projects.
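The cost-crossover argument made earlier for atomic and thermal plants can be put schematically. All the figures below are invented for illustration; only the shape of the argument comes from the text: a dearer cost that declines faster eventually undercuts a cheaper one that declines more slowly.

```python
# Year in which a faster-falling cost curve overtakes a slower-falling one.
def crossover_year(cost_a, cost_b, decline_a, decline_b, horizon=50):
    """Return the first year in which cost_a <= cost_b, or None."""
    for year in range(horizon):
        if cost_a <= cost_b:
            return year
        cost_a *= 1 - decline_a
        cost_b *= 1 - decline_b
    return None

# Atomic power starts dearer (0.9 vs 0.7 cost units per kWh) but its cost
# declines 4% a year against 1% a year for the thermal plant.
print(crossover_year(0.9, 0.7, 0.04, 0.01))  # 9: crossover within a decade
```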

ideal models, a change in our conception oi physical processes, of the distribution of fields, of the motion and transmutation of particles, and of the transformation oi energy?

The immediate force behind the scientific component of the new revolution was an advance to a novel physical ideal: instead of the static ideal of classical science, research is now confronted by a new, essentially non-- classical, ideal. It can no longer represent the ultimate explanation of reality that leaves the future with nothing more than the task of particularising a conclusively established universal scheme. The ideal physical theory of today is one of maximum approximation to the understanding of the actual harmony of the Universe, one that is maximally in consonance with the sum total of available experimental evidence. This ideal is a changing criterion. It is consistent with the new epistemological credo of science. The idea that "truth is the daughter of its time", first conceived in ancient times, the idea of infinite approximation to the truth, that has been developing through the centuries since its first inception, has been translated into tangible criteria for the choice of scientific theory.

New criteria of scientific theory, new dynamic ideals of creative work in science represent one of the most significant results of the 20th century science. In any reference to the science of the year 2000, we need, first of all, to understand these achievements, the contribution that the science of this century has made to the motive forces and dynamism of progress. "The science of the year 2000" is a symbolic designation of the essence of scientific progress in the 20th century, a symbolic answer to the question: "What epithet will the 20th century claim in the history of science and culture?''

The 18th century was the Age of Reason, the 19th---the Age of Science. Let us take a closer look at the meaning of these two epithets, for they will help us to find an answer to the question concerning an epithet for the 20th century.

During the Renaissance, Reason proclaimed its sovereignty, and in the 17th century it began to claim hegemony. The 17th century, however, was just the dawn of ra-

THE AGE OF EINSTEIN

That period of scientific and technological progress which is the object of prognostication at present is called the atomic age. This age, as was pointed out earlier, is not exclusively characterised by atomic power plants becoming the prime source of power; the concept also covers the repercussive effects, both direct and indirect, of atomic energy. Yet, even this extended conception of the atomic age leaves some of the essential features of late 20th century science out of account. The term "atomic age" conveys the quantitative aspects of atomic power, the level of automation, the degree to which Man relies on electronics---all this will be discussed in detail in other essays--- but it does not reflect the revolutionary dynamism of production and culture that is peculiar to the 20th and the 21st centuries. And it is precisely these areas that hold the key to the new revolution in science and technology. The immediately preceding period, too, was dynamic: since the 18th century, production has been undergoing changes in structure, geographic location, power sources, the level of mechanisation, and manufacturing processes. However---and this was pointed out in the preceding chapter---the 20th century revolution in science and technology is characterised by a novel and greater dynamism: today, change affects not merely industrial structures and processes, but also ideal cycles, ideal models of technological progress.

What is the underlying force of these changes? Why is this century marked by an accelerated rate of change in

96

PHILOSOPHY OF OPTIMISM

tionalism, a dawn of muted, changing colours. In the following, 18th century, a rationalistic model of the Universe emerged that was in keeping with experimental evidence---Newtonian mechanics. It had a most profound effect on every aspect of European society: Engels spoke of the ties connecting the 18th century science with French Enlightenment and the French Revolution on the one hand, and with the English industrial revolution on the other.* Eighteenth century culture was imbued with the austerity and clarity of the rationalistic spirit. The scientific ideal of the time was to have the entire varicoloured picture of the Universe reduced to a monochrome blueprint of bodies moving in a pattern subject to the laws of Newtonian mechanics. This was a static ideal of scientific explanation, the boundary line of scientific cognition.

The social ideals of the 18th century, accordingly, were also static. Proceeding from the system of Newton, whom he elevated to the rank of Demiurge of the Universe, Charles Fourier constructed an ideal society in which abstract thought determined not merely a rational organisation of phalansteries, but also a well-ordered Nature, complete with well-behaved "anti-lions" and "anti-sharks", and a precisely calculated human life span of 144 years. In spite of their fantastic quality, Fourier's constructions were in line with the scientific style of the 18th century, the great Utopian thinker fully meriting the title of a "social Newton".

The same static quality characterised the criteria of creative work in technology. The industrial revolution---at least its initial phase---consisted in the development of machine-tools that showed the minimal degree of deviation from ideal mechanical patterns: technological creative work, as was pointed out earlier, was patterned after an ideal physical scheme regarded as the limit of technological improvement.

Such was the Age of Reason, which, naturally, can be placed within the chronological limits of the 18th century for the sake of convenience only. The arbitrary quality of the chronological limits of centuries, including the year 2000, becomes apparent as soon as the centuries acquire integrative characteristics. With this reservation, the 19th century can well be described as an age of experimental science: scientific progress was no longer restricted to filling immutable a priori forms with new empirical evidence. Each time the mind was confronted with a fresh experiment, it was forced to accept new logical and mathematical models which were far from being a priori concepts. The reader will recall Laplace's words quoted earlier, that Reason finds it much more difficult to penetrate itself than to advance. Early in the 19th century, intellectual self-penetration, in other words, the development of new logical and mathematical schemes, was a harder task than the mere advancement of knowledge, i.e., supporting existing schemes with fresh empirical evidence. This, of course, was inevitable: in the 19th century, science was continuously discovering new laws of Nature, which recall Shakespeare's "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy...". The irreversible advance to states of increased entropy, undreamt of by the sages of old and discovered by Carnot, the equally undreamt-of new physical phenomenon of the electromagnetic field, and other facts, above and beyond any a priori schemes, undermined step by step the idea of some sort of ultimate goal for science, i.e., the construction of a uniform system to embrace the totality of particular laws. Doubt, however, gnawed only at the idea of reducing every reality to mechanics: hardly anybody doubted that mechanics itself could be conceived only as Newtonian mechanics. There was even less questioning of the absolute truth of Euclidean geometry, although Nature provided no perfect equivalents of the latter. A smooth physical surface could not represent a plane: the surface, it was found out, was made up of molecules. A beam of light could not be representative of a line, for it, too, was but a wave motion. Once free of such straight-line physical equivalents, geometry was able to call to life the most amazingly unexpected constructions---representations of "intellectual self-penetration" divorced from mere "advancement". The result was the emergence of multi-dimensional geometries, multi-dimensional abstract spaces, in which the position of a point was defined not by three, but by four or more coordinates. The result was the Lobachevsky geometry, in which the sum of the angles of a triangle is less than 180°, and the Riemann geometry, in which it is greater than 180°. Those were breakthroughs of Reason freed from the straitjacket of physical equivalents, Reason that, as it constructed ever new paradoxical logical and mathematical models, felt amazed at their consistency, their perfect logic, but never paused to think, systematically, of these paradoxical models as forms of a paradoxical reality.

* See K. Marx and F. Engels, Collected Works, Vol. 3, Moscow, 1976, p. 478.

The theory of relativity changed the relation between "intellectual self-penetration" and "intellectual advancement". The special theory of relativity provided physical equivalents for four-dimensional geometry, while the general theory of relativity did the same for non-Euclidean geometry. Thus the concept emerged not just of a paradoxical opinion, view or theory, but of paradoxical reality, a concept that has proved eminently revolutionary, lending a greater dynamism to 20th century science and technology.

Paradoxical reality is seen as the materialisation of the fundamental concepts of science and of the ideals of scientific explanation in ways that are non-traditional, that run counter to tradition: the objectives of scientific creative work change and become fluid as scientific progress gains momentum.

Let us now take a look at that characteristically 20th century synthesis of a changing logical and mathematical apparatus of science and experimentation, which contributes to the greater scientific dynamism.

In creating his relativity theory, Einstein proceeded from two criteria for the choice of a physical theory. Since these criteria, mentioned above, now need to be explained in somewhat greater detail, I shall discuss---briefly and in popular terms---some of the concepts of that theory. Up to this point, the relativity theory could be mentioned without any such explanation; the time has come, however, to present the criteria for the choice of a physical theory in a more detailed manner.

The criteria under consideration led science toward a fusion of Laplace's "advancement of Reason" with "Reason's self-penetration". Einstein described the first criterion as "external justification": the consistency of a theory with empirical observation. If a theory is consistent with observable facts, including new, unexpected, paradoxical ones, it follows that by putting forth that theory, Reason advances, providing explanations for fresh facts.

The second criterion is the "inner perfection" of a theory: a theory should, preferably, leave no room for ad hoc assumptions, i.e., assumptions adduced to explain a particular fact, but should proceed from the most general initial assumptions. The fundamental importance of Einstein's theory of relativity for science, culture and style of thinking was that it explained certain paradoxical facts from general concepts which spelt transformation of Reason itself and the arrival of new tools of knowledge, of new scientific ideals.

What are these facts? What is their explanation?

The starting point of the theory of relativity is the experiment which shows that the velocity of light propagation is identical in systems moving relative to one another. This constancy contradicts classical mechanics and, on the face of it, some obvious facts, too. It violates the classical rule of velocity addition, which says that if a person walks at a speed of 5 km/hr in a train moving at 70 km/hr, in the direction of travel, his speed relative to the rails is 70 + 5 = 75 km/hr. Light, however, moves at a constant speed of 300,000 km/sec relative to the train, the rails, and even an approaching train. It was on this paradoxical fact that Einstein based his theory. He discovered here a very general principle: motion consists in a change of distance between a moving body and some other bodies of reference. These other bodies may just as well be said to be in motion, while the body seen as moving may, by the same token, be said to be motionless. Bodies move relative to each other: motion unrelated to other bodies---absolute motion---is, in physical terms, meaningless.
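The contrast between the classical and the Einsteinian rules can be put as a short sketch. The composition law (u + v)/(1 + uv/c²) is the standard relativistic result; the function names and sample speeds are illustrative:

```python
# The classical and the relativistic velocity-addition rules side by
# side; c is taken as 300,000 km/sec, as in the text, and the sample
# speeds are illustrative.

C = 300_000.0  # velocity of light, km/sec

def classical_sum(u, v):
    """Galilean rule: speeds simply add (70 + 5 = 75 km/hr)."""
    return u + v

def relativistic_sum(u, v):
    """Einstein's composition law: (u + v) / (1 + u*v/c**2)."""
    return (u + v) / (1 + u * v / C**2)

# Walking in a train: at everyday speeds the correction is negligible.
print(classical_sum(70 / 3600, 5 / 3600))  # km/sec, i.e. 75 km/hr
# Light emitted on a moving train still travels at c relative to the rails:
print(relativistic_sum(C, 100.0))          # still 300,000 km/sec
```

At everyday speeds the two rules agree to many decimal places; only when one of the speeds approaches c does the denominator matter, and c composed with any speed remains c.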


A similar concept had been known to mechanics since the 17th century: where a system moves at zero acceleration, nothing occurs inside it to show its motion. Thus, Galileo cited the example of a ship's cabin, with butterflies flying around in it, water dropping through a narrow opening, smoke rising vertically---all regardless of whether the ship is sailing or anchored. If inner mechanical processes offer no clue to motion, it follows that the latter represents no more than a changing distance or distances between bodies and is, therefore, relative.

Optical processes, however, were seen in classical mechanics in a different light. All space was believed to be filled with ether, with waves of an electromagnetic nature---or light---travelling through it. Motion relative to the ether was regarded as absolute and capable of being detected from processes occurring inside a moving system. Accordingly, in Galileo's cabin, light from a lamp placed at the wall closer to the prow would reach a screen at the wall closer to the stern faster if the ship were moving: in this case the screen and the light are moving towards each other. In the late 19th century, however, optical experiments conclusively demonstrated that light travels at a constant speed in all systems---whether motionless relative to the ether or moving. The classical concept of the addition of speeds had to be discarded, and it had to be admitted that optics, too, had failed to save the concept of absolute motion, that motion could not, under any circumstances, be detected from processes unfolding inside a system in motion.

Lorentz, in an attempt to save the classical conception of speed, suggested that all bodies moving in ether undergo a dimensional change to a degree which exactly compensates for the change in the velocity of light in such bodies. Thus, it was claimed that absolute motion exists, but that this compensation makes it undetectable. Lorentz's hypothesis was "externally justified", correlating with observable facts, but it lacked "inner perfection": it was an artificial, ad hoc stopgap devised especially to account for the results of optical experiments.

Einstein, starting from highly generalised assumptions, pointed out the physical meaninglessness of the concepts of absolute time and absolute simultaneity.

Classical physics assumed a uniform, identical instant occurring simultaneously throughout the Universe as something self-obvious. The flow of Universal absolute time was seen as consisting of such simultaneous instants. But what is the physical meaning of the identity of two instants, or of the simultaneity of two events occurring at those instants? Einstein refuses to attach any physical meaning to the simultaneity of events occurring at removed spatial points unless clocks placed at those points can be shown to be synchronised. Newton was able to postulate such synchronisation: he assumed the possibility of forces propagating instantly, at an infinite speed. If the Sun attracts the Earth and the interaction of the two celestial bodies propagates instantaneously, simultaneity can be assumed to exist: the impulse is generated by the Sun and affects the Earth at the same moment. A light signal travelling instantaneously would provide a similar justification for identifying instants and synchronising distant events. Two clocks placed at different spatial points could be synchronised by being connected by an ideally rigid pin. In the final analysis, however, all these propositions are illusory: in the real world, fields of forces, light signals, and tensions generated in pins are transmitted at finite velocities. It remains to equate the moment of (1) the departure of the signal from one point with (2) the moment of its arrival at another point, less the time taken by the signal to travel the distance. This would present no problem if the points in question were motionless relative to the ether or if their motion relative to the ether were known. Immobility and motion relative to the ether have, however, been shown to be meaningless. Measurements of the time taken by a travelling signal, if obtained in different systems which are in relative motion, produce different readings.
By placing a lamp amidships to light a screen placed in the prow of the ship, it would be easy to synchronise clocks positioned in the prow and amidships: all that is required would be to make a suitable correction for the velocity of light relative to the vessel, and to subtract the time taken by the light to travel from its source to the screen from the time the latter becomes illuminated. The result will be the time of the flash. For a shore-based observer, however, watching the lamp and the illuminated screen---if the vessel is sailing parallel to the shore---the distance travelled by the light, and the time taken thereby, will be greater: the screen will move away, as it were, from the lamp. For this reason, a synchronisation of the kind described above would be possible only where the bodies carrying the distant points at which the synchronised events occur are either relatively motionless or observed within the same frame of reference. In actual fact---and this was demonstrated by Einstein---it is just as correct to observe them within a different system, in which case what seemed motionless would move and vice versa, with the results obtained by synchronisation differing accordingly. In the theory of relativity, what appears as simultaneous within one frame of reference will be non-simultaneous in another. Absolute simultaneity, independent of any frame of reference, does not exist.
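The relativity of simultaneity can be illustrated numerically with the standard Lorentz time transformation t' = γ(t − vx/c²); the frame speed and event coordinates below are chosen for illustration only:

```python
import math

# The Lorentz time transformation t' = gamma * (t - v*x/c**2), showing
# that two events simultaneous in one frame (the shore) are not
# simultaneous in another (the moving ship). Numbers are illustrative.

C = 300_000.0  # velocity of light, km/sec

def t_prime(t, x, v):
    """Time of an event (t, x) as read in a frame moving at speed v."""
    gamma = 1.0 / math.sqrt(1 - (v / C) ** 2)
    return gamma * (t - v * x / C**2)

v = 0.5 * C  # speed of the ship relative to the shore
# Two flashes at t = 0 in the shore frame, 300,000 km apart:
flash_at_origin = t_prime(0.0, 0.0, v)
flash_far_away = t_prime(0.0, 300_000.0, v)
print(flash_far_away - flash_at_origin)  # non-zero: not simultaneous aboard
```

The two flashes, simultaneous for the shore observer, differ by over half a second in the ship's frame; the gap shrinks to nothing only as v or the spatial separation goes to zero.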

As soon as we have discarded the concept of instantaneous processes, we must do the same for the classical concepts of time and space relations. If an instantaneous process, i.e., one that occurs in space in zero time, is assumed, the concept of three-dimensional space becomes physically meaningful. In relativistic physics, however, i.e., in physics deriving from the theory of relativity, that concept loses such physical meaning. All that happens in the world is motion at a finite speed, in space and time. Physical equivalence attaches not to space and time taken separately, but to a four-dimensional space and time. Each elemental event---the presence of a particle in a particular point at a particular moment---is characterised by four coordinates, three spatial and one time coordinate, i.e., the time of the occurrence of the event.

These four numbers---three spatial coordinates and one time coordinate---form a world point, defining the position of the particle in time and space. A change in these positions, i.e., motion of the particle, is represented by a four-dimensional world line, the totality of its world points. These concepts were discussed in Part One of the book.

Thus "Reason's self-penetration"---the construction of new logical-mathematical models, in the present case of a multidimensional geometry---acquired physical meaning and became identical with an explanation of fresh facts, with the "advancement of Reason".

Einstein demonstrated that the velocity of light is the maximum speed a moving physical object---``signal'' in Einstein's terminology---can approach or be equal to. Causality is realised in Nature through such signals. The cause-and-effect sequences are processes occurring in space and time and characterised by a finite speed equal to the velocity of light. That, in brief, is relativistic causality.

It has just been noted that the velocity of a moving particle cannot be greater than the velocity of light. In the case of a particle receiving successive impulses of the same intensity, the effect of the latter becomes progressively less pronounced as the velocity of the particle approaches that of light. This proposition can be restated in this form: as the velocity of a particle approaches that of light, with a consequent increase in its energy, its mass grows without limit. Body mass is proportional to energy. Einstein extended this proportional relationship to a particle at rest. The above-quoted equation correlating mass and energy contains, in embryonic form, a relativistic civilisation---atomic power and its implications for power generation, technology, culture, economics and science.
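The proportionality of mass and energy can be sketched numerically. The standard relations E = γm₀c² and, at rest, E = m₀c² are used; the sample mass and speeds are illustrative:

```python
import math

# The total energy E = gamma * m0 * c**2 of a moving body: it grows
# without bound as v approaches c, and at v = 0 it reduces to E = m0*c**2.
# The sample mass and speeds are illustrative.

C = 299_792_458.0  # velocity of light, m/sec

def gamma(v):
    return 1.0 / math.sqrt(1 - (v / C) ** 2)

def energy(m0, v):
    """Total energy of a body of rest mass m0 moving at speed v."""
    return gamma(v) * m0 * C**2

m0 = 1.0  # kg
print(energy(m0, 0.0))  # rest energy m0*c**2, about 9e16 joules
for fraction in (0.9, 0.99, 0.999):
    # the ratio to the rest energy is gamma, growing without bound
    print(energy(m0, fraction * C) / energy(m0, 0.0))
```

A single kilogram at rest already corresponds to roughly 9 × 10¹⁶ joules, which is why the equation "contains, in embryonic form" the energetics of a relativistic civilisation.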

At this point, it is time to apologise to the reader: instead of prognosticating, we have engaged, and for too long, in a simplified account of the relativity theory, a theory which emerged at the beginning of the 20th century, whose end is the object of our prognostications. However, no definition of the 20th century as a whole would be possible without reference to the Einsteinian concepts, the very foundation of scientific prognostication for the late 20th and the 21st centuries. Since the discussion concerns the Einsteinian concepts, at least a cursory introduction to the relativity theory is in order: it is a copybook maxim that one who goes to see Hamlet should not be surprised to see the Prince of Denmark on the stage.

The foregoing discussion of the relativity theory should be supplemented with one more explanation.

No physical theory can provide an exhaustive account of Nature and be, in that sense, complete. The relativity theory, however, was one of the first theories to start out by pointing a finger at its own imperfection and at those points in it for which a more comprehensive---but again not final---account should be sought. That is the style of non-classical physics. In his autobiographical sketch of 1949, which is now seen as his scientific testament, Einstein said that the relativity theory clearly misses certain points. In that theory, a frame of reference is defined as a body capable of being infinitely extended in any direction and consisting of a plurality of criss-crossing lines, so that any other body whose motion is under consideration may touch these lines. This contact identifies the spatial position of the body. A four-dimensional space and time frame of reference also comprises a clock---a regularly recurring process required to count time. The clock may be placed next to the point of intersection of three lines to permit identifying not merely spatial position, but also position in time---a four-dimensional space and time localisation of the body under study.

A space and time frame of reference provides readings of differing space and time intervals, stretches of world lines, depending on the manner in which the bodies making up the world move in space. The world structure breaks down into the world lines of its component particles. The body of reference itself, however, appears in the Universe as a state within a state. It does not, as it were, consist of particles; there are no world lines inside the body of reference, the lines or the clock---at least the relativity theory says nothing about them; it overlooks the discrete structure of the lines and the clock. Einstein says in his autobiographical sketch that the relativity theory introduces two kinds of physical objects: (1) measuring rods and clocks, and (2) the rest of the world. Says Einstein: "This, in a certain sense, is inconsistent; strictly speaking measuring rods and clocks would have to be represented

as solutions of the basic equations (objects consisting of moving atomic configurations), not, as it were, as theoretically self-sufficient entities."* The same problem was later dealt with by W. Heisenberg. In his view, measuring rods and clocks consist of many elementary particles and are acted upon, in a complex way, by vector fields, for which reason it is not clear why their behaviour should be subject to description by very basic laws.**

To link the behaviour of measuring rods and clocks with their discrete structure would be to deduce the macroscopic laws regulating body motion, as formulated by the relativity theory, from the existence and behaviour of the smallest particles.

That is the manifestly broad task which the physics of the first half of the 20th century sets for the next period---a task to be discussed later on. The important point to be made now is that the relativity theory identified its own boundaries, regarding them as bridgeheads from which a new, more general theory should be reached, rather than as the absolute limits of knowledge. The critical comments on the relativity theory made in Einstein's autobiographical sketch were, in effect, a prevision of the future development of the theory. By pointing out the open boundary of the theory, they traced the outlines of the new, future theory, with a greater measure of "inner perfection" and of "external justification". It is for this reason that the foregoing explanation is of such overriding importance for present-day scientific prognostication.

The importance of this explanation becomes all the more clear if we bring to mind another open frontier of the special theory of relativity, another approach to a more general ("inner perfection"!) and more precise ("external justification"!) theory---an approach used by Einstein in 1916.

This approach is the general theory of relativity. Einstein's theory of 1905 is called the special theory of relativity because it only holds true for one special kind of motion---unaccelerated, constant-speed motion by inertia, i.e., straight-line, uniform motion. Uniform straight-line motion is the only kind that is undetectable from processes occurring inside a moving system. Were the system, say the vessel of Galileo's proposition, accelerated, the bodies inside it would receive an impulse related to inertia. By rotating, the system would develop internal centrifugal forces of inertia. In this case, apparently, there is no equality between the coordinate frames of reference. In the experiment cited by Newton in support of the absolute nature of accelerated motion, a bucket full of water was rotated on a twisted rope. The centrifugal force caused the water to rise toward the edges of the bucket. If the bucket had been suspended motionless, with all other objects rotating about it, the water would not have risen. It presumably followed from the above that the cause was not the relative motion of the bucket and other objects; hence, it was argued, the centrifugal force is caused by rotation relative to space per se, i.e., absolute rotation, rather than by rotation relative to other objects.

* Albert Einstein: Philosopher-Scientist, Ed. by Paul Schilpp, Evanston, Illinois, 1949, p. 59.

** W. Heisenberg, "Notes on Einstein's Sketch of the Unified Field Theory", Einstein and the Development of Physical-Mathematical Thought, Moscow, 1962, p. 65 (in Russian).

By extending the concept of relativity to accelerated motion, Einstein showed the fallacy of this argument, noting that, under certain circumstances, the forces of inertia and gravity are indistinguishable. Einstein cites an elevator at rest in the field of the Earth's gravity, and an elevator going up, outside any gravity field, at the same rate of acceleration that gravity would have caused. Every manifestation of the forces of gravity in the former case and of the forces of inertia in the latter is identical. In accelerated upward movement, the force of inertia will hold the soles of the passengers' shoes to the floor and pull at the strings by which objects are suspended from the elevator ceiling in the same manner as gravity would pull at the strings in a motionless elevator.

Just because the forces of gravity and inertia defy distinction, there is no reason to regard the force of inertia as proof of absolute motion: precisely the same phenomena will occur in a motionless or uniformly moving system which is acted upon by gravity forces. With the

disappearance of the criterion of absolute motion, the relativity theory becomes a general theory: whatever the changes in the motion, whatever the frame of reference from which it is observed, whatever the system of bodies in that frame of reference, whether motionless, uniformly moving along a straight line, or accelerated, the internal processes will not permit detection of these changes.

This proposition presupposes additional assumptions, however. Let us consider a thin shaft of light that traverses the elevator interior. With the elevator going up, the light spot on the opposite wall will shift down. On the other hand, it would seem that no such shift should occur in a motionless elevator subjected to a gravitational force. In that case, we would have absolute proof of motion. This proof, however, will become invalid if light has weight, i.e., if it is subject to gravitational force. The general theory of relativity will hold true if light has weight. Eventually light was, in fact, found to have weight: this was demonstrated in 1919 by establishing the deviation of light rays in the vicinity of the Sun.

There is still another problem involving the general theory of relativity. In the elevator, it is practically impossible to establish that the forces of gravity and inertia are differently oriented. Two objects suspended on threads from the ceiling will pull at the threads in parallel directions if the elevator moves up with some amount of acceleration, i.e., where inertia comes into play. But where the elevator is motionless and subject to the Earth's gravity, the threads, instead of being parallel, will be oriented toward the Earth's centre.

Let us now make a slight digression. In a discussion of the relativity theory as one of the fundamental sources of the new scientific and technological, economic and cultural trends, as one of the key developments in the evolution of Man's spirit and material conditions, a question, mentioned earlier, naturally comes up: Is it really possible that a particular orientation of a shaft of light in a closed space, a particular angle at which objects are suspended in that space, or any of a dozen similar imaginary or actual experiments could affect our way of thinking and Man's power over Nature?


That a countless number of models involving mirrors, screens, lamps, rulers, etc., should have that effect is, indeed, amazing. Yet it is no more amazing than the effect of Galileo's cabin, with flying butterflies and water dropping into a vessel placed directly under it---all regardless of whether the ship is sailing or anchored. (This phenomenon, described in Galileo's Dialogue, is linked with his trial in 1633, with a tremendous response in the Catholic world, and with many other historical developments.) Nor is it any more amazing than the effect of Newton's experiments described in Philosophiae naturalis principia mathematica, which have historical links with the French Revolution and the English industrial revolution. Finally, it is no more amazing than the effect of those abstract, vague Hegelian periods that Herzen saw as the "algebra of revolution"---a notion that was fully confirmed by the developments of the late 19th and of the 20th centuries.

I shall have occasion to take up these amazing connections and effects later in this discussion. For the moment, however, let us once again return to our brief account of the general theory of relativity. Einstein noticed a difference between the forces of inertia and of gravity, viz., that the latter are, generally, heterogeneous. This heterogeneity, however, can be eliminated. Without entering upon a discussion of the ways in which Einstein succeeded in this, we shall describe just the outcome of his effort. Einstein sees gravitation as a change in the geometrical properties of space. In the absence of gravitational fields, these properties are in keeping with Euclidean geometry: two parallel lines are spaced at a constant distance, the sum of the angles of a triangle equals the sum of two right angles, two lines normal to a third line are parallel---they do not diverge or converge no matter how far they may be extended. That physical processes are subject to this geometry is manifested in the fact that bodies unaffected by any forces follow paths in keeping with Euclidean laws: the world lines of bodies are straight world lines; these lines form Euclidean triangles, the sum of whose angles equals the sum of two right angles; in fact, these lines do not essentially differ from those described by Euclidean geometry.

The law of inertia can be stated thus: the world lines of bodies unaffected by any forces or, in other words, the behaviour of bodies which depends on the properties of space rather than on the interaction of given bodies, are governed by Euclidean geometry; the geometry of the world is Euclidean (as special units are used to measure time, this geometry is called pseudo-Euclidean). Such is the geometry of the world as seen by classical physics: body motion in space is straight-line and uniform; the world lines---unless subject to outside forces---are not curved either in space, i.e., they remain straight, or relative to the time axis, i.e., the absolute value of the speed is maintained. Any curvature of world lines is attributed to interaction. In the general theory of relativity, world lines are divested of their Euclidean properties, and so are space and time. This may be viewed as distortion: on a curved surface, the lines representing the shortest distances and corresponding to straight lines on a plane are governed by a different geometry. It will suffice to recall that meridians, which are normal to the equator, converge at the poles, and that the sum of the angles of a triangle formed by sections of two meridians and the equator is greater than the sum of two right angles. The transition from the Euclidean properties of two-dimensional space---a plane---to non-Euclidean properties may be regarded as a distortion of the two-dimensional space. Distortion of three-dimensional, to say nothing of four-dimensional, space is not as easy to imagine---yet Einstein did precisely that. He achieved this by discarding the Newtonian distinction between the flat, Euclidean space and the interaction of bodies which distorts their paths. Gravitation---since it distorts the world lines of all physical objects---may be viewed as a distortion of the whole totality of world lines, of the entire four-dimensional space.
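The meridian-and-equator triangle gives a concrete check on the non-Euclidean angle sum. A brief sketch in Python, using Girard's theorem for a sphere (the formulas are standard spherical geometry, supplied by this illustration rather than by the text):

```python
import math

# A geodesic triangle on a sphere formed by the equator and two meridians:
# both base angles are right angles (every meridian meets the equator at 90°),
# and the apex angle at the pole equals the difference in longitude.
def angle_sum_deg(delta_longitude_deg: float) -> float:
    return 90.0 + 90.0 + delta_longitude_deg

# Girard's theorem: the spherical excess E (angle sum minus 180°, in radians)
# equals the triangle's area divided by the square of the radius.
def excess_area(delta_longitude_deg: float, radius: float) -> float:
    excess = math.radians(angle_sum_deg(delta_longitude_deg) - 180.0)
    return excess * radius ** 2

# Two meridians 90° of longitude apart: the angle sum is 270°,
# well above the Euclidean 180°.
print(angle_sum_deg(90.0))
```

The excess vanishes only as the triangle shrinks, which is why flat, Euclidean geometry remains a good local approximation on a curved surface.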
Einstein's law of gravitation is represented by an equation with quantities measuring the distortion of space and time on the one side and, on the other, quantities reflecting the distribution of masses, of all concentrations of energy and momentum, of every factor that contributes to the distortion of space and time, rendering it non-Euclidean---in other words, every source of gravitational fields.

The special and general theories of relativity have been discussed above very briefly, only as far as is required to bring out the main feature of the style of 20th century thinking in physics---a fusion of "intellectual self-penetration" and "intellectual advancement". It is this fusion that is the fount of the gigantic intellectual potential of modern science. After Einstein and Bohr, modern science has never hesitated to overhaul the most general and fundamental concepts. What would have seemed to go beyond all limits of paradox early in this century is now received with the sceptical observation: "It's not crazy enough to be plausible." But that is not all there is to it: the boldest and most paradoxical suggestions to transform fundamental concepts are subject to potential experimental verification, potential "external justification", potential accumulation of experimentally verified, unequivocal propositions; in short, they must tally with "intellectual advancement".

And that is what the potential of modern science is all about. But for a prognostication to be viable, knowledge of the potential must be supplemented with that of the probable directions of future progress. To know the direction of a future flow of water, knowledge is required of the water level in the reservoir and of the thalweg which will channel it. Unsolved problems provide just such thalwegs for science. The reader has been shown one of them, viz., the derivation of relativistic behavioural laws for measuring rods and clocks from their corpuscular structure. This is the task formulated by Einstein in his summary of the special theory of relativity. The task outlined in the summary of the general theory of relativity was different. The latter theory is a theory of gravitation. The question that comes up next is: What about other kinds of fields? At the time the general theory of relativity was being created, two fields were known---gravitational and electromagnetic. Einstein spent thirty years of his life in an effort to evolve a unified theory that would embrace the laws of both gravitation and electromagnetism. The search never solved the problem: no unified theory of field was created. Was that tremendous effort by the greatest genius known to physics of all time fruitless? Was the search for a unified theory of field a pointless waste of intellectual endeavour? In the 1930s and 1950s many doubted that the search would produce any results. Today, this question can only be answered with a certain redefinition of the term "result" as applied to scientific endeavour in physics. The search for a unified theory has yielded no results in the sense that no equations were found to describe both the gravitational and electromagnetic fields. Moreover, science has discovered a host of new fields---in addition to the gravitational and electromagnetic ones---characterised by a variety of particles. It follows that the present-day task goes beyond unification of the gravitational and electromagnetic theories to the evolving of a theory that would determine the values of mass, charge, and other properties of each kind of particle from a common set of equations.

Sometimes, a search for a new solution eventually produces the recognition that such a solution is impossible. This was the case with the perpetuum mobile and, later, with phenomena capable of showing motion to be relative to the ether. The search brought about the discovery of the law of conservation of energy in the former case, and the relativity theory in the latter. A different situation is also possible: futile research has occasionally represented, in the history of science, questions as yet unanswered---queries addressed to the future. Unlike problems in the former class, these questions do not wither away: they are repeatedly raised anew, as part of a legacy passed on to each succeeding epoch. That is a very important result of the scientific endeavour of each period. In terms of dynamism and transition to new levels of knowledge, rather than of levels of knowledge as such, the "querying" aspect of science---in a way that is fundamental to prognostication---proves to be at least as important as its "answering" aspect.

The reader will see later that the central prognostication for the end of this century is to find a solution to Einstein's problem of a unified theory of field or, rather, of a unified theory of elementary particles, which is now growing more and more urgent. It will be remembered, of course, that this prognostication is no more than a coded name for the statement of a current trend.

To give the term "prognostication" a clearer meaning would take more than a mere statement of the intellectual potential of science, a statement of the general nature and breadth of its initial assumptions, of the fusion of "self-penetration" and "advancement", and of the thalwegs, i.e., outstanding problems awaiting their solution. The task stated above requires consideration of those forces of scientific development that contribute to its effect, and of the sum total of intellectual and physical effort allotted by society for solving scientific problems.

So far the discussion has concerned the concepts of relativity, a unified theory of field, the collision of concepts, the fusion of the process of acquiring greater logical depth in conceptual terms with knowledge of the world through experimentation---all of which have been stated to be the building blocks of scientific prognostication. Does it follow, then, that concepts and ideas rule the world, that they are capable of accounting for scientific and social progress?

The answer is "No, concepts and ideas do not rule the world". In the final analysis, the motive force behind social progress is the evolution of productive forces. This role of productive forces has been demonstrated by the entire history of mankind, and most dramatically by the history of the past several decades. It is common knowledge today that the release of atomic energy determined fundamental present-day techno-economic, social and cultural processes. It is just as much common knowledge that this release of atomic energy was brought about not by a logical, spontaneous development of the idea in somebody's mind, but by industrial development and experimentation. Nowadays, there is no distinction between science and industry in the sense that the two form a whole, each part of which depends for its existence and progress on the other. Insofar as experimentation is concerned, modern science is imbued with it to a greater degree than ever. A remark will be in order here about the astonishing effect of experiments involving mirrors, shafts of light and elevator cabins. It is a fundamental characteristic feature of the present that the most far-reaching revolutions in thinking, in the style and logic of scientific thinking---the most outstanding cases of "intellectual self-penetration"---are inseparable from the positive acquisition of knowledge relative to an ever increasing array of facts, i.e., from "intellectual advancement". This fusion of the two manifestations of intellectual endeavour, presented by Laplace in opposition to each other, is the key to the understanding of 20th century science. It found its best expression in the merger of the criteria of "inner perfection" and "external justification"---a development which had the relativity theory as its major consequence. The key message of this book is to show that the prospect of further and closer fusion between "intellectual self-penetration" and "intellectual advancement" provides the point of departure for forecasting scientific progress. It should be emphasised again that the discussion does not concern the logical self-development of concepts as a fundamental motive force of progress. Such self-development was at all times based on experiment, industrial activity, empirical verification, i.e., on "external justification". Today, the development of concepts and ideas takes the form of actual or gedanken experiment. The relativity theory is characterised by a relentless weeding out from the concept of the world of all notions which, in principle, do not lead to experiment, and of all concepts---such as motion in the ether, and the ether itself---which are experimentally unverifiable. Precisely for this reason, an account of the relativity theory generally calls for reference to experiments involving mirrors and shafts of light---experiments that, in the final analysis, prove so vital to intellectual evolution and the development of Man's productive forces.

The experimental style of modern scientific thinking, the fusion of "inner perfection" and "external justification", the fusion of "intellectual self-penetration" and "intellectual advancement" give rise to the more advanced form of dynamism that is peculiar to modern science. There was a time when scientific concepts were derived from logical schemes which appeared unshakeable: experimental data were absorbed by science without disturbing its basic principles. Today, experiments provide a tool for radically revising basic principles. Hence the new dynamism of scientific and technological progress. Earlier we cited examples of the use of check tests in factory laboratories to assure a required technical standard, of the search for new structures and techniques to assure non-zero-speed technological progress, of the quest for new ideal physical schemes to assure acceleration for that progress, and of basic research which results in increased acceleration. Twentieth century science is characterised by precisely this dynamism of fundamental principles, which have themselves become subject to experimental verification---and that is what basic research is all about. This dynamism links up with the fusion of logic and experiment, of "intellectual self-penetration" and "intellectual advancement".

Related to this fusion is the obvious dependence of scientific advancement on progress in production---a factor discovered back in the 19th century. I shall now try to derive from this dependence the deeper fusion of "intellectual self-penetration" and "intellectual advancement" that follows from a wide application of relativistic and quantum physics in production. For it is my belief that such fusion is the main prognostication for late 20th century science.

THE ATOM

In Part One of this book, in the essay "Initial Conditions", mention was made of noozones---those areas in which the laws of a particular phenomenal series are transformed into different laws peculiar to a different series and irreducible to the former. We have now come to a point where such transformations can be documented by specific illustrations showing that it is just such areas that provide the greatest flexibility for Man's purposive creative effort to set up the initial conditions, bring order into the world, and establish an initial negentropy which, in a smaller or larger measure, would predetermine the course of objective processes. In modern science and technology, noozones are increasingly represented by those links of the hierarchy of discrete quantities of matter which have not previously been subjected to Man's ordering activity.

We shall now discuss the atomic nucleus and nuclear reactions---an area of transition from the laws and relationships controlling the life of stable nuclei to the laws of nuclear fission and fusion, of the transformation of nuclei into new and different nuclei. Essential to nuclear fission is a certain critical mass at which nuclear fission becomes a chain reaction. Fissionable material assembled in units of critical mass is an example of initial conditions that predetermine the course of a desired process. In the case of nuclear fusion, the initial conditions include a very high temperature. In both instances, it is a case of arranging the initial conditions in such a manner as to start a predictable process, i.e., a process capable of providing the end result of human activity. Units of fissionable uranium or plutonium are essentially comparable to a concentrated drop in water level between the head race and tail water of a dam, or to a temperature gradient between the boiler and condenser of a steam engine. Of course, in nuclear power generation we deal with immeasurably greater power gradients, with a plastic, purposively controllable structure handled within very narrow limits of time and space. At this point it might be a good idea to digress for the benefit of those readers who expect an explanation of what is under discussion, i.e., of what nuclei are and what the fission and fusion processes consist in. Such readers are probably a small minority: that nuclear processes can release over 20,000 kilowatt-hours from every gramme of fissionable material is virtually common knowledge today. The interests of a minority, however, should not be overlooked, and a short explanation is due.

The relativity theory related the energy of a body to its mass by the equation E = mc². Later, this relation made it possible to account for an important feature of nuclear physics. The mass of a nucleus is somewhat smaller than the total mass of the nuclear particles---protons and neutrons---which compose it. This difference, called the "mass defect", varies from element to element. In terms of the relativity theory, it can be accounted for by the energy required to bind the nuclear particles, i.e., by the difference between the total energy of the particles taken separately---the energy of a decayed nucleus, or of particles which have not yet combined to form a nucleus---and the entire energy of the nucleus. The energy of a nucleus is less than the total energy of its component particles; hence the mass of the former is less than the sum of the masses of the individual particles. When the particles combine to form a nucleus, a part of the energy is released, with a resultant reduction in the mass. In those elements where the particles are packed more closely, the difference between the energy of the nucleus and the total energy of its component particles is greater, and so is the mass defect. In other elements, the nuclear particles are not as close-packed and the mass defect is less. It will be understood that what is meant here is not the difference deriving from the relative size of the nuclei---these may comprise several particles, dozens of them, up to more than two hundred as in the nuclei of the heaviest elements---but rather the mass defect per particle, i.e., the specific mass defect.
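The mass defect can be made concrete with a worked example. The sketch below uses standard present-day reference values for the helium-4 nucleus; the figures are assumptions of this illustration, not taken from the text:

```python
# Mass defect of the helium-4 nucleus, in unified atomic mass units (u).
# Reference values; 1 u of mass corresponds to about 931.5 MeV of energy.
M_PROTON  = 1.007276   # u
M_NEUTRON = 1.008665   # u
M_HE4     = 4.001506   # u, mass of the bare helium-4 nucleus
U_TO_MEV  = 931.494    # energy equivalent of 1 u, in MeV

mass_defect = 2 * M_PROTON + 2 * M_NEUTRON - M_HE4   # about 0.0304 u
binding_energy = mass_defect * U_TO_MEV              # about 28.3 MeV
per_nucleon = binding_energy / 4                     # specific mass defect, in energy terms

print(f"mass defect: {mass_defect:.5f} u")
print(f"binding energy: {binding_energy:.1f} MeV, {per_nucleon:.2f} MeV per particle")
```

The per-particle figure is what the text calls the specific mass defect: it is this quantity, not the total, that decides which rearrangements of nuclear particles release energy.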

Let us suppose that we have re-grouped the particles by packing them in the nucleus in such a manner as to increase the mass defect. In that case, part of the energy will be released due to the more compact and economical nuclear structure. What elementary transitions will correspond to such energy releases?

Mendeleyev's Periodic Table opens with hydrogen, whose nucleus consists of a single proton and, consequently, has no mass defect. The nucleus of the next element, helium, consists of two protons and two neutrons, which points to a considerable mass defect: fusion of helium nuclei from hydrogen nuclei, i.e., protons, and neutrons would release a relatively large amount of energy. The middle part of the Periodic Table includes elements with a greater specific mass defect than either the light elements at the beginning of the Table or the heavy elements at the end. Therefore, by dividing a uranium nucleus of 238 particles into two nuclei of 115 to 120 particles each, a more economical particle arrangement and a correspondingly greater specific mass defect could be achieved, thereby releasing some energy. The amount of energy released would be small compared with the entire rest energy: the operation comes nowhere near releasing the whole of the energy equal to the mass of the particles multiplied by the square of the velocity of light. Even so, the amount of energy released by fission as described above is millions of times greater than the energy obtained from the same amount of matter by the re-grouping of atoms in molecules, of the kind that occurs in combustion. Energy in atomic physics is generally measured in electronvolts (eV). An electronvolt is the energy gained by an electron in passing through a potential difference of one volt. Fission of a single uranium nucleus releases 200 million eV of energy, several million times more than the amount released by an atom in a chemical reaction, e.g., in combustion. A single gramme of uranium produces more heat than the burning of three tons of coal.
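The coal comparison follows directly from the 200-million-eV figure. A sketch of the arithmetic in Python (the Avogadro constant and the assumed heating value of coal, roughly 29 MJ/kg, are supplied by this illustration):

```python
AVOGADRO = 6.022e23        # nuclei per mole
EV_TO_J  = 1.602e-19       # joules per electronvolt
E_FISSION_EV = 200e6       # energy per uranium fission, the figure from the text
COAL_J_PER_KG = 29.3e6     # assumed heating value of coal, about 29 MJ/kg

nuclei_per_gram = AVOGADRO / 235                      # uranium-235 nuclei in one gramme
energy_j = nuclei_per_gram * E_FISSION_EV * EV_TO_J   # roughly 8e10 J
energy_kwh = energy_j / 3.6e6                         # more than 20,000 kWh
coal_tonnes = energy_j / COAL_J_PER_KG / 1000         # a little under 3 tonnes

print(f"{energy_kwh:.0f} kWh per gramme, equal to about {coal_tonnes:.1f} t of coal")
```

The result, a few tonnes of coal per gramme of fissionable material, is consistent with both figures quoted in the text.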

The practical possibility of re-grouping nuclear particles into nuclei having a greater mass defect, and of utilising the difference in mass defects, was first perceived in the 1930s. It was in the early thirties that the particles mentioned above having zero electric charge---neutrons---were discovered. Electrically neutral, they are not subject to the Coulomb repulsion exerted by nuclei, which they can easily penetrate, triggering off nuclear reactions. The only kind of reaction known until the late thirties was radioactive decay, in which one or several nuclear particles are ejected from the nucleus, with the element shifting into the next or a nearby square of the Periodic Table. In 1939, it was discovered that neutron bombardment of uranium results in the nucleus splitting into two nearly equal halves---the atomic nuclei of elements placed in the middle of the Periodic Table. With a difference in mass defect of 200 million eV per nucleus, each nuclear particle was found to account for about a million electronvolts. The release of this energy, which corresponds to the reduction of the nuclear mass of uranium upon splitting and takes the form of the kinetic energy of the uranium fragments and of radiation, is accompanied by fresh neutrons being ejected from the splitting nuclei and hitting other nuclei, thus triggering off, under certain conditions, a chain reaction. In other words, the first neutron, whose presence in uranium may be spontaneous or caused by cosmic rays, will trigger off the fission of the entire mass of uranium.

A chain reaction will not stop if the number of neutrons released by fission is, on the average, more than one, i.e., where more than one fresh neutron is released for each trapped neutron. The development of a chain reaction is impeded by neutrons being trapped in nuclei which fail to split. If, in each set of fresh neutrons released, too many are trapped by nuclei which fail to split, no chain reaction will occur. Natural uranium consists largely of two isotopes: uranium-238, having 238 nuclear particles, and uranium-235, with 235 nuclear particles. (A third isotope, uranium-233, with 233 nuclear particles, is found in natural uranium in very small quantities.) There is 140 times as much uranium-238 as uranium-235. The nuclei of these two isotopes react differently to being hit by a slow neutron (with an energy of 2 million eV or less). A uranium-238 nucleus trapping such a neutron becomes a nucleus of a new isotope, uranium-239, and thus fails to split. Each fresh neutron has a much greater probability of being trapped by a uranium-238 nucleus than of triggering off fission.

Accordingly, chain reactions do not occur in natural uranium. On the other hand, the picture is quite different in the case of pure uranium-235 whose nuclei are split by a neutron hitting them, thus triggering off a chain reaction. Its production, however, requires a special condition: if a lump of uranium-235 is small, most of its neutrons will escape without triggering off a chain reaction. To get a chain reaction, a lump of uranium-235 of not less than a certain critical mass is required.
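The criterion just stated (more than one fresh neutron, on the average, for each trapped neutron) can be sketched as a generation-by-generation model. The multiplication factor k and the numbers below are illustrative assumptions, not data from the text:

```python
def neutron_population(k: float, generations: int, start: float = 1.0) -> float:
    """Average neutron count after a number of fission generations,
    when each trapped neutron yields, on average, k fresh neutrons."""
    n = start
    for _ in range(generations):
        n *= k
    return n

# Even a small excess over 1 compounds rapidly; a small deficit dies out.
print(neutron_population(1.05, 100))   # supercritical: the reaction grows
print(neutron_population(0.95, 100))   # subcritical: the reaction fades away
```

This is why the critical mass is so sharp a threshold: k passes through 1, and the same arrangement of matter switches from a fading flicker of fissions to an avalanche.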

Let us now take a look at what happens when a uranium-238 nucleus traps a neutron. The result is a nucleus of uranium-239, an unstable isotope which soon decays into an isotope of neptunium, neptunium-239---a new, artificial element in the Periodic Table and one of the first elements heavier than uranium, which are termed transuranium elements. Neptunium-239, with a half-life of 2.3 days, becomes in its turn an isotope of plutonium, whose nuclei are split by neutrons like those of uranium-235.
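The 2.3-day half-life quoted above fixes how quickly the bred material appears. A small sketch of the decay arithmetic (exponential decay is assumed; the time points are illustrative):

```python
# Fraction of neptunium-239 remaining after t days, given the 2.3-day
# half-life cited in the text; the rest has become plutonium-239.
HALF_LIFE_DAYS = 2.3

def np239_remaining(t_days: float) -> float:
    return 0.5 ** (t_days / HALF_LIFE_DAYS)

for t in (2.3, 7.0, 23.0):
    print(f"after {t:4.1f} days: {np239_remaining(t):.4f} of the neptunium is left")
```

Within a few weeks virtually all of the neptunium has become plutonium, so irradiated uranium yields its plutonium on a timescale of days, not years.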

Neutrons having an energy of less than 2 million eV split uranium-235 and plutonium. These neutrons could maintain a chain reaction in natural uranium if only the probability of their being trapped by uranium-238 could be reduced. Very slow neutrons stand a better chance of escaping such trapping. So the question is: How do we cause the comparatively fast neutrons which result from the fission of uranium-235, and which have an average energy of 2 million eV, to lose velocity until their energy is down to a few electronvolts or less, before they run into uranium-238 nuclei? Given such a low energy, neutrons would escape trapping by uranium-238 nuclei and trigger off the fission of uranium-235, thus starting, under suitable conditions, a chain reaction. The problem can be solved by interspersing natural uranium with materials that slow down neutrons without trapping too many of them. Such an effect could be produced by hydrogen---neutrons are slowed down by elastic collisions with its nuclei. Unfortunately, hydrogen nuclei too often trap neutrons, forming the nucleus of heavy hydrogen---deuterium. So our purpose of triggering off a chain reaction in natural uranium cannot be achieved by the use of ordinary water with its vast reserves of hydrogen: as a neutron moderator, water is usable only with enriched uranium, which has a greater uranium-235 content than is found in nature. Deuterium, or heavy hydrogen, whose nucleus has a proton and a neutron, traps neutrons far less readily, so that natural uranium can serve our purpose with heavy water, which contains deuterium in place of ordinary hydrogen. Another neutron moderator is graphite: uranium rods in graphite blocks were used in the earliest nuclear reactor.
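How many collisions the slowing-down takes in each moderator can be estimated with the mean logarithmic energy decrement, a standard reactor-physics quantity not used in the text. A sketch:

```python
import math

def log_decrement(A: int) -> float:
    """Mean logarithmic energy loss per elastic collision with a nucleus
    of mass number A (equals 1 for hydrogen); the textbook formula."""
    if A == 1:
        return 1.0
    return 1.0 + (A - 1) ** 2 / (2.0 * A) * math.log((A - 1) / (A + 1))

def collisions(e_start_ev: float, e_end_ev: float, A: int) -> float:
    """Average number of elastic collisions needed to slow a neutron
    from e_start_ev down to e_end_ev."""
    return math.log(e_start_ev / e_end_ev) / log_decrement(A)

# Slowing a 2-million-eV fission neutron down to about 1 eV:
for name, A in (("hydrogen", 1), ("deuterium", 2), ("graphite", 12)):
    print(f"{name:9s}: about {collisions(2e6, 1.0, A):.0f} collisions")
```

Light nuclei take the fewest collisions, which is why hydrogen, deuterium and carbon are the moderators of choice; among them, the deciding factor is then how readily each traps neutrons, as the text explains.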

And now a few words about reactors using uranium nuclear fission to produce heat and electric power. Nuclear fragments have a high kinetic energy which is released into the ambient medium, raising its temperature. To prevent damage to the reactor by excess temperature, cadmium rods capable of absorbing large numbers of neutrons are inserted into the reactor core where uranium fission occurs. The rods are used to control the reaction and the release of heat. The heat thus obtained is transmitted by a heat carrier---water, liquid metal, or a gas which is inert or nearly so. Nuclear power generation and the nuclear age did not, by any means, start with atomic bombs. After all, the age of thermal engines did not begin with firearms, which could be seen as a sort of cylinder from which a piston, represented by a shell or a bullet, is forced out by the expanding gas. A single-stroke engine of this type, I repeat, was not the beginning of thermal power generation, although it did lead Leibniz, Huygens and Papin to consider the possibility of a commercial engine operating by the conversion of the pressure of gas or steam into mechanical work. The reactors which produced the first plutonium for atomic bombs were the realisation of a physical scheme which, upon transformation, has provided the basis for the utilisation of nuclear reactors in the interests of power engineering per se. This transformation was a far-reaching one, although not quite as revolutionary and, of course, not nearly as prolonged as the one that bridged the gap between firearms and the thermal engine. Two processes, essentially, occurred in the reactors producing plutonium for atomic bombs. The first was the fission of uranium-235 nuclei. To maintain that process and enable it to become a chain reaction---to prevent a reduction in the number of neutrons released by fission and triggering off fission in other uranium-235 nuclei---it was necessary, as has been pointed out, to slow down the neutrons. This moderation, while preventing the trapping of too many neutrons by uranium-238 nuclei, did not eliminate the phenomenon completely. Neutron moderation was the second (and, in terms of the technical problem involved, the first) essential process occurring in the reactor. In the long run, the neutrons trapped by uranium-238 nuclei transformed that element into plutonium.

Let us suppose that the plutonium obtained in the reactor is also used in it: it replaces used-up nuclear fuel, undergoes fission, and releases fresh neutrons which partly maintain the chain reaction by bombarding plutonium nuclei, and partly hit the nuclei of uranium-238, thereby eventually transforming them into plutonium nuclei.

At this point, we approach a scheme which, when applied on a mass scale, will revolutionise power engineering. The crux of the matter is the neutrons in excess of those required to maintain the chain reaction: it is these that create new nuclear fuel. Plutonium had been obtained before---as the basic product of the reactors which produced the charges of atomic bombs. But it did not return to the reactor to replenish its stock of nuclear fuel; it was not a fuel involved in a controlled reaction, nor was it a source of energy for a nuclear power plant.

The fission of plutonium was not a controlled chain reaction going on at a constant rate: it was an explosion. Here the parallel is complete with a firearm, in which the "piston" is forced out in a single act, as against a thermal piston engine, in which the piston reciprocates to produce a repetitive expansion of steam or gas.

A few more words to continue the parallel. An exploding plutonium bomb is a power generating fast neutron reactor, i.e., one producing no atomic fuel but only power. Obviously, this definition---a bomb is a one-time only reactor---is just as subject to reservation as the definition of a gun as a thermal engine. Can this ``engine'' be transformed into a controlled reactor producing energy at a constant rate for use in production? Is an atomic power plant practicable which does not utilise moderated neutrons?

The reader will remember that neutron moderation is necessary to maintain a chain reaction. Unmoderated neutrons formed in natural uranium through the fission of uranium-235 nuclei would hit, and become trapped by, the much more numerous nuclei of uranium-238 without fission or the formation of fresh neutrons. However, where a reactor is charged exclusively or predominantly with uranium-235, the situation is different. In this case, fast neutrons have no uranium-238 nuclei to bombard: the latter are either absent from the reactor core or present in very small numbers. The chain reaction goes on. Moreover, the rate of neutron reproduction---the probable number of neutrons formed by fission triggered off by a single neutron---is much greater than with slow thermal neutrons. Yet a controllable reaction does not require fast multiplication of neutrons, or the fission of a number of atomic nuclei growing in geometrical progression. The excess neutrons are enough to make up for the various losses, e.g., absorption of neutrons by the materials of which the reactor equipment is made, by heat carriers, etc., and to permit a part of the neutrons to find their way from the core into the natural uranium surrounding the uranium-235, thereby transforming the predominant uranium-238 into uranium-239, which will in its turn be transformed first into neptunium and then into plutonium. The plutonium thus obtained will replace the uranium-235 in the core. Accordingly, the reactor will be able to operate without an outside supply of fresh nuclear fuel---fissionable materials. Furthermore, the number of fresh plutonium nuclei can be made greater than the number of split nuclei of uranium-235 or plutonium, i.e., the reactor can be made to produce more nuclear fuel than it consumes. For example, each two split plutonium nuclei can be made to produce three new plutonium nuclei from the nuclei of uranium-238. We will soon return to this feature of power breeding reactors.
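The figure of three new nuclei for every two split corresponds to a breeding ratio of 1.5. A sketch of the resulting fuel balance (the inventory and burn-up numbers are illustrative assumptions, not data from the text):

```python
def fuel_after(steps: int, inventory: float, burned_per_step: float,
               breeding_ratio: float) -> float:
    """Fissile inventory after a number of operating periods: each period
    burns a fixed amount of fuel and breeds breeding_ratio times as much
    fresh plutonium from the surrounding uranium-238."""
    for _ in range(steps):
        inventory += burned_per_step * (breeding_ratio - 1.0)
    return inventory

# With a ratio of 1.5 the stock of fuel grows as the reactor runs;
# with a ratio below 1 the reactor must be refuelled from outside.
print(fuel_after(10, 100.0, 5.0, 1.5))
print(fuel_after(10, 100.0, 5.0, 0.8))
```

Any breeding ratio above 1 makes the reactor a net producer of fuel, which is the whole point of the scheme described above.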

The reactor in question is termed a fast breeder, and it probably holds the promise of the future. But it is no more than a promise: at present it cannot effectively compete with slow reactors, which offer certain advantages. A fast reactor has a very small core in which the nuclear fission and heat release occur. Transmission of this heat is made a tricky problem precisely by the small size of the core. A slow reactor is free from this shortcoming; accordingly, heat transmission is less of a problem and costs less. On the other hand, it has fewer neutrons to trigger off fission, fewer new neutrons, and their balance is such as to make it impossible to produce more nuclear fuel from uranium-238 than has been used. Even so, increased reproduction of nuclear fuel is possible here. Thorium, which occurs in the Earth's crust more frequently than uranium, has long been known to trap neutrons, thereby becoming transformed into an isotope of uranium with 233 nuclear particles. A component of natural uranium, uranium-233 is much rarer than uranium-235. This isotope and the possibility of producing it from thorium assume special importance because of the following circumstance. Like plutonium and uranium-235, uranium-233 will split under neutron bombardment and is, therefore, another source of nuclear fuel. The neutrons formed by the fission of uranium-233 are comparatively many; in any case they are sufficient to trigger off an increased reproduction of nuclear fuel, even with slow neutrons. A core of uranium-235 may be surrounded with thorium, in which case the neutrons bombarding it will produce uranium-233.

The above scheme, based on the use of the more common thorium instead of uranium and on the production in the reactor of more nuclear fuel than it consumes, is of paramount importance in solving the energy problem. But before we consider it, let us go back to some ideas on the nature of technological progress in the atomic age advanced in the first essay. They concern the fact that a change in structures and technological diagrams is accompanied by changes in the ideal physical cycles which technological progress seeks to realise to the greatest possible degree.

The evolution of reactors reflects just that ideal. The purely technical progress, i.e., a more complete technological realisation of each physical scheme chosen (choice of a new reactor design, new moderator, new heat carrier), is paralleled by a change in the ideal physical scheme itself. Going from reactors operating with an outside supply of fuel to power breeders is precisely that kind of change. This changeover involves a novel physical scheme; technical progress is transformed into scientific and technological progress, producing not merely fresh technical know-how but knowledge of the laws of nuclear reactions, widening the technological limits that depend on the physical scheme. Technological progress in this case consists in a closer approach to the ideal cycle, which, in turn, changes and becomes replaced by another ideal cycle.

It has already been mentioned that changes of this kind in the ideal cycle have been known to classical physics. At that time, however, the emergence of new trends in technological progress involving novel ideal physical schemes was sporadic. Physical schemes were established to last a century, sometimes longer, seldom less. Those were schemes of classical physics. The atomic age, on the other hand, has seen, within the life-span of a single generation, not just structures but ideal physical schemes themselves grow obsolete. The scheme for atomic energy release at the expense of nuclear fuel obtained by the fission of isotopes had not yet been fully translated into accepted practice when the reaction of neutron trapping and subsequent transformation of uranium-238 nuclei became virtually practicable, only to be supplanted in that role by a more sophisticated set of reactions to achieve a reproduction of nuclear fuel, with the thermonuclear reaction looming in the distance as a source of energy. Each fresh link becomes the point of departure---if not for an economic upheaval, at least for an economic forecast: planning agencies are increasingly staffed not only by applied physicists but also by experimenters and theoreticians in the realm of pure physics, and the ``purer'' and the more abstract it gets, the more fundamental---although more indefinite---the shifts this area promises.

We have already had occasion to discuss this hierarchy of increasingly more general scientific concepts and of related modern prognostications that become increasingly more far-reaching and indefinite. Power breeder reactors occupy a middle rung in this hierarchy. A very accurate estimate is possible of the qualitative effect of these reactors becoming the basic component of nuclear power engineering.

A physico-technological prognostication in terms of the practicability of power breeder reactors permits a correct estimate of the economic forecasts suggested by an analysis of industrial project designs of the late 1960s and early 1970s. This forecast points to nuclear power becoming, by the early 21st century, the predominant source of electric power. The prospect of transition to power breeder reactors means that this development will bring us another step closer to a decisive preponderance of nuclear power. Apparently, power breeder reactors will spearhead the advance of nuclear power engineering until such time as thermonuclear fusion eliminates the problem of the limited supply and exhaustibility of available energy.

We will now consider this next higher stage in nuclear power generation technology. This stage cannot as yet provide the basis for forecasts of the same degree of accuracy that is possible with nuclear power engineering making use of the fission of heavy nuclei. Here we come up against the relationship of "thorough-going v. definite": the more revolutionary the technological and economic transformation forecast, the less definite the prognostication in terms of specific results and deadlines. Thermonuclear power promises a more profound transformation of power generation technology, and more far-reaching repercussions for classical power generation, the nature of human labour and manufacturing technology, than does the fission of heavy nuclei. This is an essentially new physical scheme which differs from all methods of utilising heavy elements more than they do among themselves. Utilising about ten times as much of the inner energy of particles as does the nuclear power generation technology discussed earlier, thermonuclear power generation has its source in the fusion of very light nuclei and not in the fission of heavy uranium and plutonium nuclei. Mention was made earlier that the mass defect related to the degree of particle density in the nucleus grows rapidly at the beginning of the Periodic Table. Obviously, the hydrogen nucleus consisting of a single particle---a proton---has zero mass defect, but the next heavier nuclei---those consisting of two, three and more nucleons---exhibit some mass defect. For this reason, the fusion of light nuclei into heavier ones releases energy. Stellar energy is maintained precisely by this kind of reaction: as stars radiate their energy into space, the loss is made up by the energy released when hydrogen nuclei fuse into heavier light nuclei.

The following fusion reaction offers the greatest promise. Let us say we have a group of deuterium nuclei---an isotope of hydrogen mentioned above, in which each nucleus consists of two particles, a proton and a neutron. There is another isotope of hydrogen with mass number 3---one proton and two neutrons---termed tritium. Tritium has a slightly greater mass defect per particle, i.e., specific mass defect, than deuterium. Where a deuterium nucleus (one proton and one neutron) collides with another deuterium nucleus (another proton and neutron), the result may be a tritium nucleus (one proton and two neutrons) and one nucleus of natural hydrogen (one proton). The fusion of two deuterium nuclei may also produce a nucleus of the isotope of helium with mass number 3---two protons and one neutron---and one free neutron.
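The energy yield of the two deuterium-deuterium branches just described can be checked directly from the mass defect. The following sketch uses standard tabulated atomic masses (the mass table, constant and helper function are modern illustrative additions, not from the source):

```python
# Energy released in the two deuterium-deuterium branches, computed from
# the mass defect, as the text describes.  Atomic masses are standard
# tabulated values in unified mass units; 1 u corresponds to 931.494 MeV.
U_TO_MEV = 931.494
MASS = {
    "H-1":  1.007825,   # ordinary hydrogen (one proton)
    "n":    1.008665,   # free neutron
    "D":    2.014102,   # deuterium (proton + neutron)
    "T":    3.016049,   # tritium (proton + two neutrons)
    "He-3": 3.016029,   # light helium (two protons + one neutron)
}

def q_value(reactants, products):
    """Energy released in MeV: the mass lost in the reaction, times c^2."""
    dm = sum(MASS[r] for r in reactants) - sum(MASS[p] for p in products)
    return dm * U_TO_MEV

q_tritium = q_value(["D", "D"], ["T", "H-1"])    # D + D -> T + p
q_helium3 = q_value(["D", "D"], ["He-3", "n"])   # D + D -> He-3 + n
print(f"D+D -> T+p   releases {q_tritium:.2f} MeV")
print(f"D+D -> He3+n releases {q_helium3:.2f} MeV")
```

The two branches release about 4.0 MeV and 3.3 MeV respectively: a tiny fraction of a mass unit vanishes, and it reappears as the kinetic energy of the products.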

To achieve fusion, however, nuclei must be allowed to approach one another within a distance equal to their linear size. Now, nuclei---those of deuterium in our example---carry equal electric charges and are mutually repulsive. The force of repulsion may be overcome where the nuclei have sufficient kinetic energy, corresponding to a temperature of the order of one hundred million degrees. It is by virtue of this that the fusion of light nuclei is termed thermonuclear fusion. In a hydrogen bomb, the explosion of a plutonium or uranium-235 charge is used to obtain the temperature required to start a thermonuclear reaction. The most thoroughgoing conceivable energy revolution based on known principles of physics involves the use of thermonuclear fusion.
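The link between temperature and the kinetic energy needed to overcome the repulsion is simple to state: in thermal equilibrium the mean kinetic energy of a particle is (3/2) k_B T. A short sketch (a modern illustrative calculation; the function name is an assumption, the Boltzmann constant a standard value):

```python
# Rough link between plasma temperature and particle energy: the mean
# thermal kinetic energy is (3/2) * k_B * T.  This is why "thermonuclear"
# fusion needs temperatures of order a hundred million degrees.
K_B_EV = 8.617e-5  # Boltzmann constant in electron-volts per kelvin

def mean_kinetic_energy_kev(temperature_k):
    """Mean kinetic energy (keV) of a particle in thermal equilibrium."""
    return 1.5 * K_B_EV * temperature_k / 1000.0

# Thousands, millions, and a hundred million degrees:
for t in (3e4, 1e6, 1e8):
    print(f"T = {t:.0e} K  ->  {mean_kinetic_energy_kev(t):.4f} keV")
```

At a hundred million degrees the mean energy is of order ten kiloelectron-volts: enough, together with quantum barrier penetration, for charged light nuclei to fuse at a useful rate.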

Formation of helium nuclei by the fusion of deuterium nuclei becomes intensive at several million degrees, and temperatures in the range of several hundred million degrees are required to give fusion practical utility as a source of significant energy supply. In that temperature range, all matter is transformed into plasma, i.e., an agglomeration of free electrons and of atoms which have lost their electron shells. The outer shells of atoms start losing their electrons at several thousand degrees. The electrons thus freed balance the positive charge of the nucleus in a neutral atom; when electrons are ejected, atoms are transformed into ions, or become ionised. With the further rise of temperature, the proportion of ions and electrons rises too, while that of neutral atoms falls. At 20,000°C to 30,000°C, plasma contains practically no neutral atoms. Further increase in temperature causes atoms to eject progressively their innermost electrons. The atoms of heavy elements, comprising dozens---sometimes up to a hundred---electrons, become fully ionised when the temperature reaches millions or dozens of millions of degrees.

Thermonuclear reactions occur in plasma, for example, in stars, which are plasma formations. However, in laboratory or industrial facilities, plasma will, apparently, have to be encased in a vessel. It is here that the cardinal problem lies. Medieval Europe engaged in a scholastic argument about an imaginary universal solvent. Since it was believed to be capable of dissolving anything, the question was: how do you keep it? A similar question---although it is far from being scholastic---arises in connection with plasma: any vessel containing plasma will evaporate, becoming, moreover, transformed into an agglomeration of ionised atoms and electrons. The solution of the problem might be along the following lines. Plasma completely surrounded by magnetic field lines would be suspended in vacuum and, rather than being in contact with the vessel walls, would be concentrated in a limited space surrounded by vacuum. When electric current is passed through plasma contained in a vacuum tube, the magnetic field will prevent the plasma from contacting the tube walls, resulting in a thin plasma filament inside the tube. Plasma can also be thermally insulated by outside magnetic fields which do not involve an electric current passing through the plasma itself. The problem with a plasma filament is that it is unstable, changing its shape and touching the walls of the tube within a millionth of a second. A plasma clot in a trap formed by outside magnetic fields has proved to have just as little stability.

To maintain plasma concentrated and pinched by magnetic fields for at least a fraction of a second is the principal problem whose solution may open the road to thermonuclear power. So far, man has succeeded in maintaining a highly rarefied plasma of several million degrees centigrade in a magnetic trap for just a hundredth of a second. This is a fundamental achievement: it makes highly probable a controlled thermonuclear reaction within the next few decades. Even if a re-structured power balance built around thermonuclear power generation cannot be planned for the end of the 20th century, it can be expected to become a reality in the first half of the 21st century. Although this forecast does not affect the choice of a technological policy for today, it does affect present-day scientific experimentation. This latter point merits some analysis.

Generally, the outcome of an experiment is unknown to the experimenter: were it known for certain---or, in other words, if its probability were equal to 1---there would be no point in undertaking the experiment. In this sense, A. Baikov, when asked about the expected results of his experiment, was perfectly right to reply: "Only unexpected results are of any value to science." On the other hand, where a certain result is known in advance to be impossible, i.e., where its probability is zero, the experiment is also meaningless: a probability of nil is tantamount to a result known in advance to be negative.

The prospective application of means and endeavour in experimental work is determined by the probability of a certain result and its probable effect. However, that is not the whole answer. For apart from its immediate outcome, an experiment has a "resonance effect", whatever that outcome may be. Depending on the degree of originality of the research methods used, on the Einsteinian "inner perfection", on the nature of the initial concepts sought to be tested, and on the generality of the problem to be solved, an experiment may have an effect---one that will vary with all these conditions---on allied and remotely related fields of research and practice. This effect is exemplified by the evolution of classical power generation technology in the coming decades.

It would be wrong to say that the effect of atomic power engineering on conventional power technology consists solely in the progressive replacement of the latter by the former. Another and more complex process is occurring alongside, and partly in spite of, such replacement. The resonance effect of atomic power generation technology intensifies intrinsic, immanent trends in other areas. Specifically, atomic power contributes to the "intrinsic frequencies" and trends in the practices of classical power generation.

The search for new classical cycles affording greatly increased efficiencies is probably stimulated by the prospect of a cheaper kilowatt-hour obtainable at an atomic power plant. That, however, is a secondary effect atomic power has on the progress of scientific, technological and economic endeavour in classical power research. Essentially, not only does atomic power force conventional power generation technology to raise its efficiency or succumb to competition---it also supplies the latter with new physical and technological approaches. These appear as forecasts: in some instances, these physical and technological approaches have not yet been translated into realities, but their accelerative effect on conventional power generation practices is already felt. Indeed, direct generation of electric power from the thermal energy of gas---a primary trend in conventional power technology---is based on plasma, the state of matter which at different temperatures provides the milieu for thermonuclear reactions.

The direct conversion of the thermal energy of gas into electric power follows this pattern. The starting point is gas heated to a relatively high temperature, ionised and consisting in part---perhaps a large part---of atoms stripped of their outer electrons and of the freed electrons themselves. This is plasma. The plasma in question, however, is not the million-degree, stellar variety, but rather a low-temperature plasma of several thousand degrees. Since in this temperature range gas ionisation and electric conductivity are limited, certain gaseous metals whose atoms readily lose their outer electrons are introduced into the gas. The result is a highly ionised and electrically conductive plasma stream. The stream is channelled through a nozzle into an evacuated space and thence through a magnetic field, where the positive and negative plasma components are deflected in opposite directions, generating an electric current in the plasma.

In this mechanism, the ionised gas plays the part of the rotor of a conventional generator, which induces electricity as it rotates. The current thus produced is applied to electrodes connected to an outside power consumer. The electrodes thus serve the same function as the collectors of a conventional generator, which receive electric current from the rotor windings.
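The magnetohydrodynamic scheme described above can be reduced to a back-of-the-envelope formula: a conducting gas moving at speed u across a magnetic field B develops a motional voltage of roughly u × B × d across a channel of width d. A minimal sketch (the numbers below are hypothetical, chosen only for illustration, and the function name is an assumption):

```python
# The MHD generator in back-of-the-envelope form: a conducting gas moving
# at speed u across a magnetic field B develops a motional EMF of about
# u * B * d across a channel of width d (an idealised, lossless picture).

def mhd_emf(flow_speed_m_s, field_tesla, channel_width_m):
    """Open-circuit voltage across the electrodes of an ideal MHD channel."""
    return flow_speed_m_s * field_tesla * channel_width_m

# A hypothetical hot stream at 1000 m/s in a 2-tesla field, 0.5 m channel:
print(mhd_emf(1000.0, 2.0, 0.5))  # -> 1000.0 volts
```

The same relation explains why the plasma must be both fast and well ionised: with no conductivity there is no current, and with no flow there is no voltage.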

A magnetohydrodynamic generator of the above design can operate both with conventional heat sources and with energy supplied by an atomic reactor. A gaseous mixture---say, of helium with an addition of readily ionisable gaseous cesium---may function as a heat carrier whereby heat energy is withdrawn from the reactor; this energy, converted first into the kinetic energy of a hot stream and then into electric current in a magnetohydrodynamic generator, transforms the reactor into an atomic power plant.

The combination of a reactor and a magnetohydrodynamic generator necessitates reactor operation at high temperatures: a low-temperature gas does not afford high efficiency in a magnetohydrodynamic generator. Thus, atomic power not only affects the choice of lines of development in conventional power generation technology, but has a reverse effect as well. Atomic power generation provides its conventional counterpart with economic stimuli. The price of admission into the atomic age is a lower cost per kilowatt-hour, the reference standard being the unit costs of atomic power plants. Also, atomic power generation transfers some of its essential research findings relative to plasma to the conventional power industry, with the required step-down from high- to low-temperature plasma (this, of course, is unrelated to the main problems: in low-temperature plasma, the problem of the magnetic trap and stable pinched plasma does not arise).

On the other hand, conventional power generation technology offers atomic power plants a more economically profitable "conventional component"---a scheme for the utilisation of nuclear reactor heat by converting the latter into electrical energy.

QUANTUM ELECTRONICS

I shall now consider the scientific and technical trends which call for a more extended and more clearly defined account than our earlier outline of the latter half of the 20th century. Intricately connected with the relativity theory, atomic power generation per se can provide the reason for describing the currently emerging civilisation as relativistic. Quantum physics, another fundamental trend in present-day scientific thinking, is also related to atomic power engineering: the processes terminating in nuclear fission and fusion are meaningful only in quantum terms. Quantum physics and the resonance effects of atomic power generation are also connected by a link that is at least as close as, and possibly even more obvious than, in the above case, for quantum electronics provides the basic guideline in the restructuring of the prognostication for industrial and communications technologies for the year 2000. That date, as was pointed out earlier, is seen here as a symbol for a certain complex of interrelated scientific, economic and cultural forecasts. With respect to power generation, the forecast points to atomic power plants becoming the basic component of the energy balance. In terms of industrial and communications technologies, the complex of the year 2000 proceeds from the assumption that electronics will become the springboard for the above transformations. This analysis is based not only on trends in physical theory, but also on the potential of physical experiment, which will make itself felt in the 1970s and 1980s.

It is from this point of view that I propose to discuss quantum electronics.

Maxwell's discovery of the identity of light and electromagnetic oscillations was followed by the discovery of several types of emissions of varying frequencies. Minimum-frequency emissions are used in radio signal transmission. A much greater frequency---with a consequently shorter wavelength---is found in thermal and infrared rays, and a still greater frequency in visible rays, or light in the narrower sense, which spans the visible spectrum from the maximum frequency of violet rays to the minimum frequency of red rays. Emissions of a greater frequency than violet light---termed ultraviolet---are not visible to the human eye. X-ray emissions have a still shorter wavelength and greater frequency. Finally, at the top of the high-frequency spectrum of electromagnetic emission are gamma rays, emitted, among others, by atomic nuclei in certain nuclear reactions.

In 1900, Planck made the discovery that electromagnetic waves are emitted in discrete bursts of a minimum quantity. Emission energy cannot increase in infinitely small increments: the increments are at all times multiples of a minimum quantity termed a quantum. Planck, however, did not go so far as to view the electromagnetic field as consisting of discrete particles: his assumption was that this field is emitted in minimum indivisible quantities---quanta---and that electromagnetic waves are absorbed in identical quanta as well. It does not follow from this, however, that the electromagnetic field itself consists of indivisible particles. In the words of Philipp Frank, "even though beer is always sold in pint bottles, it does not follow that beer consists of indivisible portions".* The highly paradoxical notion of the discrete quality of the electromagnetic field was advanced in 1905 by Einstein. Indeed, the Einsteinian hypothesis embodied, in embryo form, the most paradoxical concept of non-classical physics: light, which may be described as waves in a continuous medium---and that is proved by light interference, with light disappearing at points in the spectrum where wave crests in one light ray coincide with wave troughs in another ray, and with increased luminous intensity where crests coincide in both rays---is, at the same time, a plurality of discrete particles. Termed by Einstein light quanta, these particles subsequently came to be known as photons. Proceeding from the corpuscular theory of photons to the concept of continuous electromagnetic oscillations, one will observe that the energy of a photon is a function of, and proportional to, the oscillation frequency.
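Planck's relation E = hν (equivalently E = hc/λ) ties the whole spectrum just surveyed to the energy of a single quantum. A brief modern illustration (the function and the sample wavelengths are illustrative additions; hc ≈ 1239.84 eV·nm is a standard value):

```python
# Planck's relation E = h*nu = h*c/lambda: the shorter the wavelength,
# the greater the energy of one quantum.  This spans the spectrum the
# text surveys, from radio waves up to X-rays and gamma rays.
HC_EV_NM = 1239.84  # h*c in electron-volt nanometres (standard value)

def photon_energy_ev(wavelength_nm):
    """Energy in electron-volts of a single photon of the given wavelength."""
    return HC_EV_NM / wavelength_nm

bands = {
    "radio (1 m)":      1e9,   # wavelengths expressed in nanometres
    "infrared (10 um)": 1e4,
    "green light":      550.0,
    "violet light":     400.0,
    "X-ray (0.1 nm)":   0.1,
}
for name, wl in bands.items():
    print(f"{name:18s} {photon_energy_ev(wl):.3e} eV")
```

A radio quantum carries around a millionth of a millionth of the energy of an X-ray quantum, which is why the granular structure of radio emission is so thoroughly hidden from macroscopic apparatus.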

Shortly after, the quantum theory of light evolved to a point where it came in contact with the atomic theory. In 1913, Bohr created an atomic model wherein the nucleus is surrounded by revolving electrons which cause the atom to emit electromagnetic waves of a specific frequency, i.e. photons of a specific energy, as they jump from orbit to orbit.

The minimum energy level corresponds to the innermost orbit, energy levels growing with each increasingly more distant orbit. Where an atom absorbs light (any electromagnetic emission, rather than visible light alone), electrons jump to greater-energy orbits, with a correspondingly greater atomic energy due to the absorbed photons. On the other hand, where an atom emits photons, electrons jump to lower-energy orbits, reducing the atomic energy. Emission energy, in other words the frequency of the emission, provides a test of what is occurring in the atom. Emission frequencies form an emission spectrum.
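For hydrogen, Bohr's discrete levels take the simple form E_n = −13.6/n² eV, and the frequency (here expressed as a wavelength) of an emitted photon is fixed by the difference of two levels. A sketch of that arithmetic (the function names are illustrative; the constants are standard values):

```python
# Bohr's discrete levels for hydrogen: E_n = -13.6 / n^2 eV.  A jump from
# a higher to a lower orbit emits a photon whose energy is the difference
# of the two levels -- exactly the discrete spectral lines discussed above.
RYDBERG_EV = 13.6     # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.84    # h*c in electron-volt nanometres

def level_ev(n):
    """Energy of the n-th permitted orbit of the hydrogen atom."""
    return -RYDBERG_EV / n**2

def emitted_wavelength_nm(n_upper, n_lower):
    """Wavelength of the photon emitted in the n_upper -> n_lower jump."""
    return HC_EV_NM / (level_ev(n_upper) - level_ev(n_lower))

# The 3 -> 2 jump gives the red Balmer line near 656 nm.
print(f"{emitted_wavelength_nm(3, 2):.0f} nm")  # -> 656 nm
```

The fact that only such discrete jumps occur, and hence only discrete lines appear in the spectrum, is the "quantification of orbits" that so struck Einstein.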

A wealth of data was accumulated in the early 20th century on the radiation spectra of atoms of various elements. In 1913 Bohr brought together the many observations into a unified system of discrete atomic radiation. The concept of discrete levels of radiation and, consequently, of a discrete hierarchy of electronic orbits in which only interorbital jumps are possible, could have come from nothing short of the intuition of a genius. That is precisely how Einstein evaluated Bohr's model in his autobiographical notes. Orbital quantification, i.e. the identification of discrete ``permissible'' orbits and discrete energy levels, could not have been deduced from a more general concept. Said Einstein: "That this insecure and contradictory foundation was sufficient to enable a man of Bohr's unique instinct and tact to discover the major laws of the spectral lines and of the electron shells of the atoms together with their significance for chemistry appeared to me like a miracle---and appears to me like a miracle even today. This is the highest form of musicality in the sphere of thought."**

In the twenties the validity of the concept of discrete atomic energy levels and discrete electronic orbits was demonstrated with rigorous objectivity. Beginning with the corpuscular properties of the electromagnetic field discovered by Einstein in 1905, we find ourselves, by 1923-1924, with the de Broglie wavelength of particles, particularly as applied to electrons. Shortly after, in 1926, Schrödinger proposed an equation in which a certain variable called the "wave function" constantly changes as it passes from one point in space to another and from one moment in time to another---similarly to the water in a rolling sea, to the density of air through which a sound is travelling, or to the intensity of an electromagnetic field---all varying from point to point both in space and time. The Schrödinger equation, however, does not describe the propagation of motion or of deformation in a medium. What it does describe is the motion of an electron or of some other particle of matter. The question "What is the meaning of the continuous variable---the wave function---in the corpuscular picture of moving particles?" was answered by Max Born, who interpreted the wave function as a measure of the probability of finding an electron at a given point at a given moment. This function is a variable quantity, with different oscillation amplitudes for each point and each moment obtainable from the Schrödinger equation. The amplitude is a measure of the probability that the electron will be at the particular point and at the particular moment to which the amplitude applies.

* Ph. Frank, Einstein: His Life and Times, New York, 1947, p. 71.
** A. Einstein, op. cit., pp. 45-47.


Clearly, this is a revolution in physical thinking. Classical science visualised nature as being governed by a set of accurate laws which unambiguously describe the location of a particle at any given time. The ideal of scientific research consisted in maximum approximation to absolute accuracy in assigning to a particle a single time and space relationship. Classical science presumed that it was possible to achieve an infinitely close approach to the actual position of a particle and to its momentum at any moment of time. Now the fact is that an actual particle or its momentum cannot be assigned a single definite relationship in time and space terms. The actual ideal is not an accurately defined spatial relationship of a particle and its dynamic variables as a whole, but rather an accurately defined probability of such dynamic variables. The search for the classical ideal produced novel scientific concepts and was translated into practical applications which, in turn, triggered off greater scientific progress. Today, the search for the non-classical ideal is bringing forth new conceptions of space, time, motion, matter, and the evolution of the Universe and life. These conceptions are implemented in new aspects of technological progress which, in their turn, become the motive force behind scientific advance.
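Born's probabilistic reading of the wave function can be made concrete with the simplest textbook case, a particle confined to a one-dimensional box; this model is a standard stand-in, not an example the author gives. The squared wave function is a probability density: integrated over the whole box it gives 1, and over a sub-interval it gives the probability of finding the particle there.

```python
# Born's rule for the simplest textbook system: a particle in a
# one-dimensional box of length L, with psi_n(x) = sqrt(2/L) sin(n pi x / L).
# |psi(x)|^2 is a probability density, numerically integrated here by the
# midpoint rule.
import math

def prob_in_interval(n, a, b, length=1.0, steps=100000):
    """Probability that a particle in state n is found between a and b."""
    dx = (b - a) / steps
    total = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * dx
        psi = math.sqrt(2.0 / length) * math.sin(n * math.pi * x / length)
        total += psi * psi * dx
    return total

print(prob_in_interval(1, 0.0, 1.0))   # whole box: ~1.0 (certainty)
print(prob_in_interval(1, 0.0, 0.5))   # left half, by symmetry: ~0.5
```

Nothing in the model says where the particle *is*; it gives only accurately defined probabilities, which is precisely the shift in ideal described above.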

An illustration will make the above proposition clear. Nuclear physics considers a situation in which a particle, to be able to approach an atomic nucleus, must overcome a strong force of repulsion, a potential barrier greater than its own kinetic energy. This is no more possible than it would be for a ball rolling down a slope to go up and roll over a higher elevation. It is impossible---in classical physics, that is. In quantum physics, which deals with probabilities, the impossibility becomes a low probability of a particle penetrating an atomic nucleus. Yet, given bombardment by a large number of particles, such low-probability instances of penetration will occur, triggering off nuclear reactions which are so important to the new, non-classical technology. A similar development is observed in other areas, too. Technically applicable processes---particularly of the kind that may, in principle but in a way that is not yet clear, prove to have future practical utility---cannot be discovered unless and until the non-classical quantum concepts are applied.

In the 1940s, radio engineering reached a point where it faced problems insoluble other than by the application of quantum concepts. Reception in a conventional radio set is complicated by frequency variations, by station interference, and, in general, by the fact that a radio set has a wide "spectral line", i.e. too great a frequency band.

Many radio applications---including those which hold the greatest and most important promise for the future---require a very narrow frequency band so as to eliminate station interference in simultaneous transmission. Spectroscopy offers the concept of monochromatic light having a very narrow spectral band. Today, spectroscopic concepts have made inroads into radio engineering, demonstrating that new methods of producing monochromatic and stable-frequency oscillations must utilise the concepts of quantum physics. Great frequency intervals derive from the macroscopic nature of radio receivers. This macroscopic quality of radio engineering eclipses the discrete nature of emissions and the existence of the quanta that make up the electromagnetic field. The 1950s marked an approach to very weak signals---stable-frequency waves characterised by low wave dispersion. A way was found to amplify them. The theory of such signals, however, was found to go beyond the limits of classical physics, falling within the new concepts of the generation of coherent quantum emissions characterised both by the same frequency and the same phase at any given moment.

The key to monochromatic and coherent emission was provided by instruments developed in the mid-1950s, which utilised induced transitions of atomic systems from one energy level to another. The underlying theory of these transitions is the quantum theory deriving from Bohr's model, and the transitions themselves were predicted by Einstein. This is so-called induced emission. In 1916, Einstein published a paper entitled "Emission and Absorption of Radiation in Quantum Theory",* an exposition of the quantum system, i.e. a system of particles undergoing changes of structure by emitting and absorbing irradiated quanta. A quantum system is exemplified by an atom consisting of a nucleus and electrons with two energy levels. These levels can be visualised as two electron orbits, of which one is closer to the nucleus---the lower level---and the other is farther removed from it---the upper level. Of course, the model could be that of a molecule, too, which has a higher or lower energy level according to the disposition of its atoms. For our present purposes, however, we will consider an atom, not a molecule, as an example of radiation.

The transition of an electron from one energy level to another may either be spontaneous or triggered by radiation, a photon flux. The interaction of an atom with radiation is twofold. In one instance, a photon is merely absorbed by the atom; in the other, the atom emits a new photon. In 1927, Dirac noticed that the new photon is indistinguishable from the old: it has the same energy and travels in the same direction. Where a large number of electrons are maintained in the upper level, they will jump simultaneously to a lower level, emitting photons in the same energy range and the same direction, but in a greater quantity than in the incoming radiation. This permits radiation to be boosted by induced quantum radiation. The phenomenon has been utilised in lasers (the name stands for "light amplification by stimulated emission of radiation").
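The boosting of a beam by induced emission can be sketched in a toy numerical model. The function name, the per-step stimulation probability and the step count below are illustrative assumptions, not physical constants; real laser kinetics involve rate equations, pumping and cavity losses.

```python
# Toy model of light amplification by stimulated emission (illustrative only).

def amplify(photons_in, excited_atoms, stimulation_prob=0.5, steps=10):
    """Each step, every photon may stimulate one excited atom to emit a
    clone photon (same energy, same direction), depleting the inversion."""
    photons, excited = float(photons_in), float(excited_atoms)
    for _ in range(steps):
        emitted = min(photons * stimulation_prob, excited)
        photons += emitted      # stimulated photons add to the beam
        excited -= emitted      # the upper level is depleted
    return photons

# A weak input beam grows geometrically while the upper level stays populated;
# with no atoms in the upper level there is nothing to amplify.
boosted = amplify(100, 1_000_000)
```

The point of the sketch is simply that amplification exists only so long as a population is maintained in the upper level, which is why pumping is essential to laser operation.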

Lasers are instruments utilising induced radiation in the optical spectrum. In this they differ from masers which utilise the same radiation in the radio spectrum and are a distinct branch of quantum electronics. The utilisation of induced radiation in the optical spectrum is more closely and more obviously related to non-classical physics and quantum concepts than it is in the radio spectrum.

Just what are the peculiar properties of this light beam boosted by induced radiation that permit it to be regarded as the herald of a new epoch in scientific and technological progress?

Primarily, it is the narrow frequency band, the highly monochromatic nature of the radiation. Secondly, it is coherence, i.e. the fact that the induced radiation of different atoms occurs in a coordinated fashion, in one phase. Thirdly, it is concentration: the laser beam does not expand, or rather has a very low rate of expansion. A laser thus produces a very powerful beam of highly concentrated, monochromatic radiation.
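The significance of coherence can be made quantitative with a small sketch: when many unit-amplitude emitters radiate in one phase their amplitudes add, so the intensity of N emitters grows as N squared, whereas random phases give an intensity only of the order of N. The emitter count and the phasor model below are illustrative assumptions.

```python
import cmath
import random

def intensity(phases):
    """Intensity of unit-amplitude emitters = |sum of their complex phasors|^2."""
    total = sum(cmath.exp(1j * p) for p in phases)
    return abs(total) ** 2

n = 1000
coherent = intensity([0.0] * n)          # all in one phase: intensity ~ n**2
random.seed(42)
incoherent = intensity([random.uniform(0, 2 * cmath.pi) for _ in range(n)])
# incoherent intensity is only of the order of n, vastly below n**2
```

This N-squared scaling is one reason a coherent beam is so much more powerful than the same number of atoms radiating independently.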

This book is a stage-by-stage discussion of the conception of initial conditions and noozones, considered from different angles. The general trend in non-classical science and its applied uses consists in going beyond the limits of the classical hierarchy of discrete particles of matter. This classical hierarchy used to terminate, on the one hand, in atoms which could be variously grouped to produce the existing variety of chemical elements, and, on the other hand, in the bodies of celestial mechanics. The noozone in the hierarchy was represented by molecular restructuring on a macroscopic scale, i.e. chemical processes and the motions of macroscopic bodies in the narrow stratum of the lithosphere, hydrosphere and atmosphere. This stratum is the home of what V. I. Vernadsky termed noosphere---a complex of organised macroscopic structures corresponding to a very narrow range of atomic, molecular and macroscopic processes. This range was limited to small energy concentrations.

* Deutsche Physikalische Gesellschaft, Verhandlungen, 1916, Vol. 18, pp. 318-23; A. Einstein, Collected Works, Vol. 3, pp. 386-92 (in Russian).


PHILOSOPHY OF OPTIMISM

PART TWO. SCIENCE IN THE YEAR 2000


The noozone of the hierarchy of discrete particles of matter was complemented by the noozone of continuous processes---a complex of expediently organised hydrodynamic processes, thermal flows and other instances of energy transference (e.g., electric current), including radio signals, acoustic and optical phenomena. In this field, atomic concepts---a statement of the reality of atoms and molecules---were not essential to a purposive organisation of macroscopic processes. Atomistic concepts did not play the role of an end goal determining the choice of particular initial conditions. On the other hand, specific concepts determining the noozones of a discrete hierarchy did not cover continuous concepts. Thus, purposive organisation of discrete bodies could disregard the concept of a continuous world, just as an organisation of continuous processes can ignore the atomistic concept.

The classical hierarchy of discrete particles of matter terminates with an atomic concept in which the discrete components of the atom display obvious and essential wave-like, continuous properties. The classical hierarchy of continuous processes terminates at a point where radiation is found to possess obvious and essential corpuscular properties. Non-classical zones are, on the one hand, found in the ultramicroscopic niches of the classical hierarchy and, on the other, enclose that hierarchy: no scientific account of the birth or death of stars and galaxies would be possible without non-classical concepts.

In the scientific and technological revolution of the 20th century the noozones transcended the limits of the classical hierarchy. The noozones of the discrete world and those of the continuous world lost their independence. The corpuscle-wave dualism became both a physical and a physico-technical concept. The non-classical model, in which corpuscular and wave properties are indivisible, graduated from being a mere account of the actual world to the status of a teleological model determining the conscious action of reason on objective events. The record leading up to, and the prehistory of, atomic power generation and quantum electronics make it clear that none of the main trends in the scientific and technological revolution of today would be feasible without prior, essentially non-classical, models---a model of nuclear fission or fusion in one case, and that of quantum energy transference to a higher level and induced radiation of photons of equal energy, in the other. Where the aim---opposition of conscious effort to elemental processes---can be defined only in terms of an essentially non-classical model, the latter acquires both physical and physico-technical meaning, not just stating what is but also describing what should be, thus entering a realm in which the notions of "better" and "worse", of optimum and optimism, are applicable.

Atomic fission, a complex of processes providing an explanation of this phenomenon, is a non-classical zone, an extension of the discrete hierarchy in which discrete concepts can no longer be entertained unless they are combined with wave concepts. Induced radiation belongs to the non-classical zone of the hierarchy of concepts of continuity, wherein an account of the world in terms of waves would be untenable unless combined with a discrete, corpuscular aspect.

It should further be noted that quantum electronics---the electromagnetic spectrum noozone---provides a link between the new ideal physical approaches, all-embracing, general and non-classical, and accelerated scientific and technological progress.

Let us consider the above-mentioned properties of the laser beam in terms of practical application (other properties left out of account here may be just as important). We shall try to draw a line between the effect of laser uses on, first, accelerated technological progress and, secondly, on the increase in this acceleration, which is as yet indefinite and unquantifiable.

The laser can work a radical change in communications and information transmission. Radio communications have long been moving from long-wave to ever shorter-wave techniques, permitting a single channel to carry more telephone, radio, TV or other messages. Message capacity increases sharply when waves of the optical spectrum are used instead of the much longer centimetre wave band.
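The gain from moving into the optical spectrum can be sketched with a simple frequency computation, on the common assumption that the usable frequency band, and hence message capacity, scales with the carrier frequency. The wavelengths chosen below are illustrative round figures.

```python
# Carrier frequency f = c / wavelength; capacity grows with the usable band.

C = 3.0e8  # speed of light, m/s

def frequency(wavelength_m):
    return C / wavelength_m

microwave = frequency(1e-2)   # 1 cm radio wave:   3e10 Hz (30 GHz)
optical = frequency(5e-7)     # 500 nm green light: 6e14 Hz
gain = optical / microwave    # roughly a 20,000-fold jump in carrier frequency
```

Even this rough ratio shows why the optical spectrum promises an enormous widening of communication channels compared with the centimetre band.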

Again, the lasers of the future will permit a much higher efficiency in computer and control equipment---another information medium. Given high-speed information transmission between computer components provided by lasers, sophisticated computers will take a step forward in efficiency and speed.

The transfer of laser technology to industrial applications is expected to be one of the most important technological trends of the last quarter of this century. The mechanical methods of treating metals and other materials will probably be superseded by laser techniques: a concentrated, monochromatic yet powerful laser beam can handle micrometric work with the utmost precision. Again, quantum electronics opens up the possibility of working profound changes in the molecular structure of crystal lattices and in the atomic structure of molecules, producing superhard parts and surfaces. For these reasons, improvements in the laser initiate a far-reaching reconstruction of every basic technology. Quantum electronics is in the early stages of the realisation of the ideal physical scheme, deriving from the Einsteinian idea of 1916 and brought to fruition in the mid-century. It may reasonably be suggested that the remaining decades of this century will see the emergence of lasers capable of converting the energy of a wide range of scattered sources into coherent electromagnetic wave fluxes concentrated to any desired degree of power. Developments will include a broader spectrum of laser radiation and laser designs operating in new ranges. Once it reaches a sufficiently high power level, the laser beam may well replace metal wires as an electric power transmission vehicle.

Now let us take a look at the feedback between scientific progress and the technological progress generated thereby. Quantum electronics reflects the powerful and far-reaching impact of the technological application of physical concepts on the development of the concepts themselves. Lasers may provide an effective vehicle for fundamental research experiments. The fact that in quantum electronics experimental equipment is not too far removed from that in actual commercial use makes the field in question the recipient of a greater sum of mankind's intellectual effort and material investment, in the final analysis bringing closer the solution of the fundamental problems involved. This solution, as will become clear from a subsequent essay discussing these problems, involves an increasingly more accurate measurement of intervals in ultramicroscopic and cosmic space and time. Laser radiation brings a very high degree of accuracy to space and time measurement. It is within the realm of possibility that such measurements will cast a new light on the structure of the Universe and on processes occurring in ultramicroscopic---perhaps in minimal, further indivisible---time and space units.

Quantum electronics is part of a more general trend in present-day scientific progress. Modern science is making increasingly deep inroads into fluxes of all sorts of particles, which it sees as quanta of variously constituted fields. As far back as 1905, the electromagnetic field was found to be a photon flux. Twenty years later, de Broglie, as was indicated earlier, discovered that electrons, being discrete particles, exhibit wave properties governed by laws some of which could be established by treating electrons as concentrations of oscillations of a field that is not electromagnetic. Like all other waves, de Broglie's waves are subject to interference: luminous intensity is greater where two wave crests coincide on the screen, and equals zero where a wave crest coincides with the trough of another wave; such points form a dark interference band on the screen. Waves are also subject to diffraction---a change in the wave front when they pass near the edge of a flux-obstructing body, or the deflection of a wave from its direct course after passing through a narrow aperture. The dual corpuscle-and-wave nature of the electron was employed in the electron microscope, permitting observation of structures and processes of matter invisible to an optical microscope.
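De Broglie's relation can be put in concrete figures: the wavelength associated with a particle is Planck's constant divided by its momentum. The electron speed chosen below is an illustrative non-relativistic value.

```python
# de Broglie wavelength: lambda = h / (m * v), for a free non-relativistic particle.

H = 6.626e-34           # Planck's constant, J*s
M_ELECTRON = 9.109e-31  # electron rest mass, kg

def de_broglie_wavelength(mass_kg, speed_m_s):
    return H / (mass_kg * speed_m_s)

# An electron at about 1% of the speed of light has a wavelength of
# roughly 2.4e-10 m, comparable to atomic spacing -- hence electron
# diffraction and the resolving power of the electron microscope.
lam = de_broglie_wavelength(M_ELECTRON, 3.0e6)
```

The comparison with atomic spacing is the essential point: waves of this length "feel" the atomic lattice, which is what makes electron diffraction and electron microscopy possible.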

Later, many other elementary particles of various kinds were discovered apart from those already mentioned, and with them a correspondingly large number of wave fields made up of such particles. Fluxes of these particles are employed in modern technological and medical applications and in experimental investigations into the structure of atomic nuclei, atoms, molecules and cells; astronomy and astrophysics explore these radiations as a key to understanding the structure and evolution of stars, galaxies and the Metagalaxy.

The investigation and utilisation of powerful radiations of various nature, along with the employment of nuclear chain reactions, can be regarded as the main trend of scientific and technological progress in the atomic age.

MOLECULAR BIOLOGY

The discoveries of the fifties and sixties which led to the emergence of quantum electronics were paralleled by a revolution in biological science, a revolution marked by a feature characteristic of the latter half of this century. In classical science, a series of consecutive fundamental discoveries generally inaugurated a relatively unhurried era of applied development deriving from the newly evolved concept, fresh ideas, or novel experimental techniques. Today, a change in a science whose fundamental concepts have been revolutionised by mid-century developments raises more questions than it answers. In the 19th century, the hope was alive of reaching the underlying ultimate structure of things by penetrating them ever more deeply. This hope was never universal, but it was there. The latter half of the 20th century will probably have to give up even the hope of a long respite in the process of the accumulation of knowledge.

This feature gives modern science its characteristic prognosticative dimension. There was a time when the definition of a turning point in science was represented by a statement such as "We now know...". This element of the definition is still valid, yet the emphasis has shifted to something like "We now see what we still have to learn". This shift of emphasis is characteristic of biology, among other sciences, the question under the heading "What do we still have to learn?" being the most general and cardinal inquiry "What is life?" This inquiry comprises thousands of particular questions as to the structure and behaviour of various organisms, tissues, cells and molecules, all of which are explicitly related to the cardinal problem of the underlying essence of life. A host of applied questions are also related to it: the formula "What do we still have to learn?" is extended and particularised into the questions "How do we eradicate cancer?", "How can we significantly---by several decades---stretch the average human life-span?", "How can heredity be controlled?"

To attack these problems (or prognostications, as one might say), several special concepts will have to be explained, briefly and in the most general terms, just as in the previous essays. This book does not claim to be an account of the present state of things in physics, chemistry and biology: its purpose is to try and find an answer to the questions: What are the changes in the fundamental concepts of nature? What do we gain by modifying them? Since these questions are of universal interest, the definitions should be given in popular terms and the account limited to a minimum of technical terminology.

The close relationship between special theoretical---including physical and mathematical---constructions and experimental data, on the one hand, and the cardinal inquiry into the underlying essence of life, on the other, is a feature that biology has in common with natural philosophy. However, this similarity involves the scope of the questions raised rather than the methods of investigation. Modern biology links particular concepts with the general inquiry and general postulates, thereby attaining the "inner perfection" of its own concepts. At the same time, these general postulates are directly or indirectly capable of experimental verification by the test of "external confirmation". The primary question involved here is: which link in the hierarchy of discrete portions of matter has the specific capacity to reproduce living matter of the same structure? This is the property of very large molecules comprising thousands of atoms. Such molecules, termed biopolymers or macromolecules, are proteins, i.e. combinations of amino acids, and nucleic acids. There are very good reasons to ascribe to such large molecules the ability of self-reproduction. The modern science of heredity ascribes this ability to chromosomes---bodies contained in cell nuclei. Chromosomal structure carries the "genetic code"; in other words, this structure determines the structure and destiny of cells derived from the given one and, where the cell is elementary, its chromosomes determine the evolution of the organism. This evolution does not consist in growth alone, as is the case with, say, crystals.

The combination of very general ideas---about the genetic code, which determines the behaviour of billions of cells whose combined destinies are the evolution of the organism; about the number of elements in the structure containing the code of ontogeny and heredity; about the stability of this structure; and about certain principles of, and analogies with, quantum mechanics---has led to the concept of a macromolecule in which the genetic code is embodied in the arrangement of atoms and radicals.

Let us take a look at the account given by Schrodinger in the mid-1940s of the concept of a macromolecule as the carrier of the genetic code.

Schrodinger postulates an analogy between a molecule and an element of a solid body---the crystal. Molecules in a crystal and atoms in a molecule are held together by forces of a similar nature. Schrodinger emphasises the underlying quantum nature of these forces: they cannot be conceived in terms of continuity of energy, of transition from one configuration of particles to another whose energy differs from the former by an arbitrarily small amount. Schrodinger then goes on to describe a crystal built up from molecules: the molecular structure may be repeated in an increasingly larger number of particles as it expands in each of the three dimensions. Chromosome fibre---a body carrying hereditary information---is the product of a different development: a molecule can change into an aperiodic solid body, an aperiodic crystal.

Says Schrodinger: "The other way is that of building up a more and more extended aggregate without the dull device of repetition. That is the case of the more and more complicated organic molecule in which every atom, and every group of atoms, plays an individual role, not entirely equivalent to that of many others (as is the case in a periodic structure). We might quite properly call that an aperiodic crystal or solid and express our hypothesis by saying: We believe a gene---or perhaps the whole chromosome fibre---to be an aperiodic solid."*

Schrodinger goes on to explain why a very small particle has the ability to contain an amount of coded information sufficient to determine the evolution of an organism. The reason is to be found in the very large possible number of molecules, each composed of a relatively small number of atoms. It is precisely for this reason that a molecule---an aperiodic crystal allowing for an innumerable variety of combinations of the same atoms---conclusively determines a particular evolution of an organism selected from a myriad of alternative courses. "A well-ordered association of atoms, endowed with sufficient resistivity to keep its order permanently, appears to be the only conceivable material structure, that offers a variety of possible ('isomeric') arrangements, sufficiently large to embody a complicated system of 'determinations' within a small spatial boundary."*

Schrodinger draws a parallel with the Morse code: dots and dashes, just two types of signs in groups of up to four, give thirty combinations or "letters". Three different types of signs in groups of up to ten would give 88,572 combinations, and five types of signs in groups of up to twenty-five would produce 372,529,029,846,191,405 combinations.
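Schrodinger's counting argument is easy to reproduce: with s distinct signs used in groups of one up to n, the number of different "letters" is the sum s + s^2 + ... + s^n. A few lines suffice to check his figures.

```python
# Count the "letters" expressible with `signs` distinct symbols
# in groups of length 1 up to `max_group` (Schrodinger's Morse-code parallel).

def letters(signs, max_group):
    return sum(signs ** k for k in range(1, max_group + 1))

two_signs = letters(2, 4)      # dot/dash in groups up to four  -> 30
three_signs = letters(3, 10)   # three signs, groups up to ten  -> 88,572
five_signs = letters(5, 25)    # five signs, groups up to 25 -> 372,529,029,846,191,405
```

The explosive growth of these numbers is precisely Schrodinger's point: a modest alphabet of atomic groups in a small molecule suffices to encode an astronomically large set of hereditary "determinations".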

Schrodinger is emphatic in pointing out the difference between a molecule in which the heredity code is written and a statistical physical ensemble: the chromosome molecules "represent the highest degree of well-ordered atomic association we know of---much higher than the ordinary periodic crystal---in virtue of the individual role every atom and every radical is playing here".**

This is a far cry from statistical physics in which the "individual role of every atom" is negligible and a well-ordered pattern may be realised only by an enormous number of individuals coming into play. It will be noted that the "individual role of every atom", to be discussed in the closing essays of the book, is also ruled out in fields other than statistical physics. The role of every atom is negated by any classical statistical concept and, if the characteristic given by Schrodinger is to be generalised, it will suffice to put the word atom in quotes.

* E. Schrodinger, What Is Life? The Physical Aspect of the Living Cell, Cambridge, 1944, p. 61.

** Ibid., p. 77.

Schrodinger, however, does not believe the principle of the "individual role of an atom" to be non-physical: "...the new principle that is involved is a genuinely physical one: it is, in my opinion, nothing else than the principle of quantum theory over again."*

* E. Schrodinger, op. cit., p. 81.

The above considerations derive from very general postulates, but fundamentally they are experimentally verifiable. The possibility was realised in the 1950s-1960s through the use of the electron microscopes discussed earlier and of tracer atoms, i.e. radioactive nuclei produced by nuclear reactions, including nuclear fission. Tracer atoms are readily detectable by their emission and remain identifiable when introduced into an organic tissue. The tracer atom technique makes it possible to follow the migration of various substances in an organism and to trace physiological and pathological processes down to microscopic cellular events.

In the 1940s and especially in the two decades that followed, the search for an answer to the question "What is life?" was particularised into forms which indicated both the functions and the structures they characterise. These structures have been mentioned earlier. I now propose to treat the subject more methodically, if that word is right for the very brief and piecemeal remarks that follow.

Living matter consists of cells---units of protoplasm surrounded by a membrane envelope and comprising a nucleus and some other inclusions. The cell is a rather complicated structure: it exchanges energy and matter with its environment, duplicates by division into daughter cells of the same structure, differentiates (an elementary cell produces the many cells of a multi-cell organism), moves, and changes its structure and behaviour in response to environmental changes. Protein molecules are synthesised in cells. A cell consists largely of protein and nucleic acid macromolecules, of the order of several dozen million per cell. With the advent of the electron microscope, the structure of the cell has been explored in minute detail, down to direct observation of some of the larger molecules.

It has already been indicated that the genetic secret---repetition of the structure and behaviour of an organism---may be, and has already been, partly unravelled by investigation into cellular nuclei. Their structure---the presence in the nucleus of supermolecular bodies, the structure of such bodies and of the molecules constituting them---provides the point of departure for the modern theory of heredity. The nucleus carries a number of chromosomes that is constant for every variety, these chromosomes consisting largely of a nucleic (deoxyribonucleic) acid. The abbreviated name of that acid, DNA, has become at least as familiar to the general public as the names and symbols of the more common elements of the Periodic Table or of the elementary particles, a development that reflects the fundamental significance of DNA and of the chromosomes including DNA in the control of cellular changes and in the transmission of hereditary traits.

It is chromosomes that embody the genetic code transmitted to the other elements of the cell, in which protein synthesis occurs.

Protein synthesis occurs in extra-nuclear bodies present in the protoplasm surrounding the nucleus and containing, among other elements, the so-called ribosomes---molecular-sized particles composed of molecules of another nucleic (ribonucleic) acid (RNA, another well-known abbreviation). Ribosomes can be seen through an electron microscope.

The mysterious processes involved in the transmission of genetic traits encoded in DNA produce daughter cells of the same structure and, what is more significant, with the same type of chromosomes. Prior to cell division, each of the molecules duplicates. The duplication machinery has been worked out in great detail through the use of the electron microscope and in still greater detail by the application of tracer atoms. A chromosome consists largely of DNA molecules, and its duplication is the result of DNA molecule duplication. The formation of new DNA molecules---in other words, DNA synthesis---follows a matrix pattern. The meaning of this term is this: biological synthesis is characterised by processes in which the composition of the matter being synthesised is the function of an outside system. This system is comparable to a mould which receives raw materials or provides a vessel for the cooling of such matter. The matrix of DNA synthesis is the DNA molecule itself, which, as it were, selects the necessary atoms from the environment and arranges them to form a new molecule.
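The matrix principle can be sketched in a few lines: each base of a DNA strand selects its complementary partner, so the strand acts as a mould for its own copy. This is a minimal illustrative model; real replication is enzymatic and far more involved.

```python
# Minimal sketch of matrix (template) synthesis: a DNA strand as a mould.

DNA_COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(strand):
    """Build the complementary strand base by base, as on a matrix."""
    return "".join(DNA_COMPLEMENT[base] for base in strand)

template = "ATGCCGTA"
daughter = complementary_strand(template)   # -> "TACGGCAT"
# Copying the copy restores the original sequence: the code is preserved
# from generation to generation.
restored = complementary_strand(daughter)
```

The essential property shown here is that complementation is its own inverse, which is exactly what guarantees the preservation of the genetic code through successive duplications.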

DNA molecular duplication accounts for the preservation of the genetic code---the information on the structure and behaviour of living organisms repeated, with variations, from generation to generation. The question is: How does this information operate? What is the procedure by which it arranges protein, the other principal component of living matter along with nucleic acid, into specific combinations? Protein consists of amino acids whose molecules are spatially distributed to provide the entire self-regulating system of cells, tissues and organs.

The further multiplication and differentiation of cells, the growth and development of tissues and organs are the function of the time sequence of amino acid synthesis and decay, of their interaction with the environment, and of molecular behaviour.

The structure of cells, tissues, organs and whole organisms, as well as their behaviour, is determined by hereditary genetic information. Just how does this information encoded in DNA determine protein structure and behaviour?

Here we face the problem of what is known as transcription. The use of the term, which generally means the notation of a word in the letters of a different language, is valid here, since genetic science makes wide use of the concept of a code and of other concepts peculiar to information theory. Specifically, the biochemical meaning of "transcription" refers to the synthesis of RNA---the ribonucleic acid mentioned earlier. Since DNA per se cannot act as a matrix for the synthesis of the amino acids constituting proteins, a series of intermediate processes becomes a necessity. A DNA molecule is a matrix for the formation of an RNA molecule, and that is precisely what transcription is---translation of the genetic code into a different language. The structure of a DNA molecule in which the structure and behaviour of organisms are encoded has a corresponding RNA molecule which has formed on the former molecule as on a matrix. Such RNA, called matrix RNA, is synthesised in chromosomes.

Matrix RNA carries the genetic information out of the nucleus to the ribosomes, where it determines the synthesis of proteins from amino acids---a series of cellular processes occurring outside the nucleus.
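Transcription as "translation into a different language" can be sketched in the same spirit as the replication example: the DNA template dictates an RNA strand in which uracil (U) stands in for thymine (T). This is a hypothetical minimal model; real transcription is enzyme-driven and directional.

```python
# Transcription sketch: re-coding a DNA template into matrix RNA,
# base by base, with uracil (U) replacing thymine (T).

TRANSCRIBE = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(dna_template):
    """Build an RNA strand on a DNA template, as on a matrix."""
    return "".join(TRANSCRIBE[base] for base in dna_template)

rna = transcribe("TACGGC")   # -> "AUGCCG"
```

The "different language" is visible in the alphabet itself: the product contains U where DNA would have T, yet the information content of the template is fully preserved.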

And now we can go back to the macromolecule as the only structure capable of maintaining the genetic code so as to guarantee the self-reproduction of organisms (subject to a certain amount of variation). At this point Schrodinger's views can be greatly supplemented. There are, for instance, the ideas put forward by M. V. Volkenstein with respect to an organism consisting of small molecules, each comprising a small number of atoms.* An organism of that description cannot be a liquid or a gaseous body: a disordered dance of molecules in a liquid, to say nothing of a gas, cannot guarantee the preservation of genetic information. A very different situation is presented by a crystal body with well-ordered molecular behaviour: crystal lattice variations in response to environmental changes may be sufficiently definite. It would be easy to imagine a crystal "organism" reacting quickly to outside stimuli which would produce changes in, say, the conductor and semiconductor properties of crystals. Such reactions might well become conditioned reflexes. Also, a well-ordered low-molecular system consisting of many crystal lattices might be capable of restructuring itself in such a way as to embody a genetic code enabling it to reproduce itself and, more significantly, to undergo evolution toward more perfect crystal organisms. What is meant is obviously a cybernetic robot built of metal or of some other low-molecular materials. A crystal organism, however, could not evolve on its own: such evolution is limited to macromolecules. Small molecules are polymerised into large ones, and it is only the latter that can form supermolecular living systems. The evolution of these systems has reached a critical point where self-reproducible and self-perfectible cybernetic systems can be created by deliberate arrangement of varied, including crystal, non-macromolecular objects rather than by natural forces.

* M. V. Volkenstein, Molecules and Life, Moscow, 1965, pp. 470-71 (in Russian).

The above cursory account of the cell, the nucleus, amino acids, nucleic acids and their synthesis is, in many instances, oversimplified and inaccurate. As a case in point, we have viruses: these claim to be living matter without having a cellular structure. Another example is the bacterial cell, which has no nucleus. Reservations of this kind are many and varied, but they do not affect the conclusions we are about to draw.

These conclusions have to do with the relationship between modern molecular biology, which has discovered macromolecules to be the repository of the principal functions involved in the self-reproduction and self-control of living matter, and non-classical physics. However, would it be valid to claim the existence of logical relationships between the overall picture of macromolecular interaction, synthesis and re-writing of genetic information, on the one hand, and the principles of quantum mechanics, on the other?

Macromolecules, just like small molecules, owe their existence to microprocesses and micro-interactions occurring on the quantum level. The problem of what combines atoms and radicals into molecules could not be solved without reference to energy levels and electron orbits, to their positions and motions, to the wave properties of microscopic particles and the atomic model---in short, to quantum concepts. That, however, does not make molecular biology a quantum theory: the mere fact that matter is composed of elementary particles, atoms and molecules---i.e., quantum systems---is not enough to claim a quantum nature for macroscopic events of which an account can forgo reference to the wave properties of particles and to the corpuscular properties of radiation. An account of the mechanism by which grains of sand are blown off a dune top and the dune travels does not require reference to the atomic structure of sand grains. However, where the reason for a particular shape of crystalline sand grains is in question, mention of molecular structure, of atomic arrangement and of the properties of quantum systems could not be omitted.

A major, nay, a predominant portion of biological and biochemical processes lend themselves to description without reference to the behaviour of individual molecules insofar as it relates to their wave properties, or to the specific characteristics of radiation related to its corpuscular nature. As a case in point, the processes involved in chromosome duplication, RNA synthesis and protein molecule synthesis on RNA matrices do not come under the heading of quantum processes.*

There are certain biological processes, however, that cannot be accounted for and, accordingly, experimentally reproduced, without recourse to quantum concepts. These processes are of special interest for prognostication and the identification of noozones in molecular biology and the mechanism of heredity.

Changes of genetic code in chromosomes caused by quanta of short-wave radiation come under this heading.

Let us consider the principal events occurring in radiation-induced chromosomal change.

In the first place, a quantum of energy may be absorbed by an atom or a group of atoms in a DNA molecule. In that case, the radiation-induced re-grouping of atoms and re-structuring of atomic bonds---a chain of radiochemical reactions---will either break the radical chain or produce a stable local change in the DNA molecular structure. This structural change is a change of the genetic code, producing new hereditary traits. In other words, absorption of a quantum of energy produces mutation, the sudden emergence of a new characteristic which is transmitted to descendants.

In a second case, instead of producing any changes where it is absorbed, the energy is transmitted through the DNA molecule, causing at some point a local injury represented by a chain of radiochemical reactions and resultant chromosomal alteration and mutation.

In a third case, the effect of an absorbed quantum of energy is indirect: the quantum will affect the molecules of the surrounding tissue rather than the chromosome. Accordingly, new active chemicals will be formed in such tissue, which will affect the chromosomes as mutagens, restructuring the genetic code and causing mutation.

* H. C. Longuet-Higgins, in Problems of Biophysics, Moscow, 1964; M. V. Volkenstein, Molecules and Life, pp. 472-74.

152

PHILOSOPHY OF OPTIMISM

PART TWO. SCIENCE IN THE YEAR 2000

153

Radiation-induced mutations are in most cases harmful: the new hereditary traits impair vital functions and procreation. The chaotic ``entropic'' radiation---the radiation background---is one of the menacing lines in the inscription holding the message for future civilisation. The other part of the message, the one promising security, prosperity and progress, is related to lower radiation levels. The prognostication for the year 2000 presupposes a progressive diminution of the radiation background: discontinuing nuclear tests, strict controls on the uses of atomic energy to eliminate higher radiation levels, and special measures to lower existing levels.


However, could radiation genetics be made to use well-ordered, controlled radiation to provide a constructive technique? The answer is ``Yes''---it has already been used that way: examples are provided by radiation selection, the use of radioactive isotopes and other sources of ionising radiation to increase the number of mutations and to select artificially those of the mutations which are conducive to greater vitality, faster reproduction, and the greater economic value of animals and plants.

On the cell level, the uses of radiation include radiotherapy, with cancer treatment as a prime example.

The use of radiation in cancer control became a scientific, as opposed to a purely empirical, method owing to molecular biology. The immediate object of interest to radiotherapy is cellular behaviour, a chain of processes written in DNA molecules. Of the two modern theories of cancer, one sees mutation as a source of the disease, while the other holds viruses responsible for all sorts of malignancies. According to the former theory, the underlying cause of cancer is to be found in changes in the genetic code. The latter theory holds virus penetration to be the point of departure in any unwanted alteration. It is conceivable, however, that the virus affects chromosomes by disturbing their structures, and the altered genetic information determines cellular tendency toward malignant growth. In any case, neither etiology nor cancer therapy would be thinkable without a notion of chromosomes and of the genetic code.

Furthermore, cancer radiotherapy could not develop into a scientific, as opposed to a purely empirical, discipline without a good idea of the effect of radiations of varying intensities and spectral compositions on radiochemical reactions at the molecular level. It is significant that we are dealing here with essentially quantum processes.

It may be suggested that the nature and scope of the present agricultural and medical uses of radiation are a far cry from what will be achieved in the last quarter of this century. Quantum electronics again comes into play here: it is conceivable that at the close of the 20th century mankind will use quantum electronics and molecular biology to take a major step forward in increased food supplies, more efficient cancer control, and a significantly longer life-span.

George Thomson says that present-day radiation genetics reminds one of an attempt to improve a statue by firing a machine-gun at it from a great distance.* Indeed, the radiation techniques of today are so much random firing: an effort to produce a large number of results from which the beneficial ones will be subsequently chosen and preserved by selection involving several generations of animals or plants over a comparatively long period of time. We cannot zero in on the chromosome, first, because we lack the necessary sighting equipment---radiation affects a large body of tissue---and, secondly, because we do not have an adequately identified target: a sufficiently detailed picture of the internal structure of the DNA molecule and of the role of its elements in producing particular mutations is not yet available.

Quantum electronics will, probably, supply the tool required for both sighting and target identification, for the electron microscope permits investigation of truly minute areas. An electron beam or a thin beam of quanta of electromagnetic radiation could be effectively focused on such a small area: this would be both a sighting and a target identification technique, because in such a case electronics would permit experimental investigation into the functions of the elements of a DNA molecule in producing particular mutations, solution of the mechanism of such mutations, and, eventually, programming of desired mutations by affecting suitable elements of the molecule.

* G. Thomson, The Foreseeable Future, Cambridge, 1955, p. 124.

What is involved here is not just quantum electronics in the narrow sense, but the entire set of radiations of varying particle compositions and energy levels, generated by different sources. With quantum electronics, a highly concentrated beam can be focused on a single cell and subsequently on a chromosome. On the other hand, the relativistic effect could be used for greater accuracy of radiation timing: the flow of time for a bombarding particle varies with its velocity; hence, its life-span and travel can be adjusted accordingly. It is probable that cancer therapy will make extensive use of relativistic particles, which will decompose at the right points without injuring healthy tissue.

It is too early to predict the specific uses of ``target'' radiation genetics or of ``target'' radiation therapy. What we are interested in, however, are not any specific applications but rather the overall import of the trend for the prognostication for the year 2000. This import is one of a conscious restructuring of the details of biopolymers. Where the function of quantum electronics is to bring order to an irregular and in that sense ``entropic'' plurality of radiation sources, that of quantum biochemistry, which utilises narrow beams of short-wave radiation and focused fluxes of varying particles of different levels of energy, is to reduce the irregularity of influences exerted on genetic changes.

The greatest entropy, irregularity and wantonness in such changes is observed in radiation-induced mutations. The closing years of the 20th century will see a reduction of entropic influences on life and a correspondingly greater role of controlled influences.

One of the subsequent essays of this book will discuss the subject of information and its accumulation and concentration as the key guideline to progress in every field of endeavour. At this stage, however, I want to make the following point about information.

Nature has gone a long way towards establishing statistical regularity in heredity. The genetic code is characterised by a high level of stability. A DNA molecule has a pretty good ``memory'' of the evolution of the species and a good knowledge of the elements which will be repeated in future organisms. On the other hand, the information relative to alterations of heredity is limited: no mutations of organic life forms are programmed. The evolution of organic life is not programmed except statistically: it consists in the selection, from a great number of statistically representative random mutations, of those changes calculated to increase the probability of survival. Some of the reasons for this are to be found in the irregularity, the randomness, the ``entropy'' of influences---including radiation---on living matter and on the genetic codes of organisms. It is a fundamental task of this century to bring order to, and to focus, radiation, and to reduce the entropy caused by the rising levels of radiation in the environment. This task can be performed through the use of quantum electronics and similar approaches. It will suffice for our present purpose to indicate that the theory behind both a deeper insight into biopolymers and the further progress of electronics is provided by quantum physics.
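The statistical mechanism described above, undirected random mutations filtered by selection for survival probability, can be illustrated with a toy numerical model. Everything specific here (the fitness measure, the mutation scale, the population size) is an invented illustration, not drawn from the text:

```python
import random

def evolve(generations=200, pop_size=50, mutation_scale=0.5, target=10.0):
    """Toy model of unprogrammed evolution: mutations are generated
    blindly ('entropically'), and selection merely keeps those that
    happen to land closer to the survival optimum `target`."""
    population = [0.0] * pop_size
    for _ in range(generations):
        # Random, unprogrammed mutations: no change is aimed at the target.
        mutated = [x + random.gauss(0.0, mutation_scale) for x in population]
        # Statistical selection: only the better-adapted half survives.
        pooled = sorted(population + mutated, key=lambda x: abs(x - target))
        population = pooled[:pop_size]
    return population

final = evolve()
```

Although no individual mutation is directed, the population drifts to the neighbourhood of the optimum; this is the sense in which evolution is ``programmed'' only statistically.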

The following conclusions can be drawn from the foregoing discussion of the effects of quantum electronics on molecular biology and of the non-classical characteristics of changes of the genetic code.

Noozones are to be found not only in a number of discrete quantities of matter, or in the radiation spectrum; they are also present in organic life. Classical science knew of transitions from physical and chemical laws to the strictly biological laws of ontogeny: it was precisely at that transition point that it provided the initial physico-chemical conditions for ontogeny---the noozones of ontogeny, the conditions of irrigation and fertilisation which, along with the climatic conditions chosen for the various species, determined the probability of a particular ontogenetic evolution. Classical science also knew of the initial conditions of phylogeny: selection conditions were combined in such a way as to produce new specific characteristics preprogrammed as the end result of the selection. In modern molecular biology the transition from physical influence on a molecule to the molecular structure carrying the genetic code, i.e. alteration of the genetic code, is also becoming a noozone.

However, the possibility of controlled alteration of the genetic code derives from the non-classical nature of the influences exerted and from the non-classical characteristics of a living molecule.

There is still another link between quantum physics and biology, which cannot be represented as a series of experimental techniques and physical schemes taken over from physics into biology. This is an increased intellectual potential of science as a whole, particularly of biology, in consequence of an abrupt expansion---expansion of what could be described as the associative valencies of scientific thinking---triggered off by quantum mechanics. When a scientist is in search of a model to represent a certain process, he has a number of intellectual associations, and he can arrange available concepts and facts according to one or another of his associations. When Carnot considered the limits of the perfectibility of steam engines and the transition of caloric from the furnace to the condenser, the natural matrix for the thermodynamic scheme was the mechanical model of liquid flowing from a vessel with a higher level of liquid into one with a lower level. When Faraday built the field concept, lines of force were visualised as flexible tubes. Thus, a scientist selects possible associations from a certain number of available choices. The number of available associations multiplies greatly where a new concept or concepts come into play representing not only ``intellectual advance'' but also ``intellectual self-penetration''.

And now a note on feedback between physics and biology.

Any reference to the results produced by the revolution in our concept of life, characteristic of the mid-20th century, brings to the fore the effect produced on biological science by experimental developments in physics and chemistry, by the classical and quantum physico-chemical concepts, and by the general intellectual advancement resulting from theoretical physics. There is, however, a certain shift in emphasis when reference is made to future prospects: it appears that the few remaining decades of this century will be marked by significantly increased feedback from biological science to physics, chemistry and the experimental and production techniques of the two latter disciplines.

This feedback may be realised both on the molecular and super-molecular levels. Developments on the molecular level will probably involve improvements in synthetic polymers, with biopolymers---macromolecules of living matter---as the end result sought. The latter offer a number of advantages, e.g. a homogeneous composition that has never been achieved in chemistry. Biopolymers consist of molecules of identical composition and structure. Synthetic polymers, such as synthetic rubber, plastics and fibres, include chains of varying lengths and different radical and atomic arrangements. It may be suggested that the uses of synthetic materials will be extended in the next several decades by a closer approach to organic materials at the molecular level. At the same time, molecular biology is expected to multiply synthetically produced materials having predetermined properties, as well as the range of such properties.

It is not impossible that feedback from biology to physics will make it possible, on the super-molecular level, to simulate a living organism's motor reactions in a power-operated device.

A purely mechanical or electromagnetic device has neither the efficiency nor the versatility of motion of a muscle, with its combined mechanical and chemical functions.

G. Thomson has drawn a parallel between the paw of a monkey picking oranges from a tree and a machine using electronics to accomplish the same task. The electronic machine would probably be hard to carry even by truck and would consume a lot of fuel. A monkey weighs 20 kg and eats 500 g of nuts a day.* If those are the specifications of a monkey's paw, how about man's hand, the hand that has "the high degree of perfection required to conjure into being the pictures of a Raphael, the statues of a Thorwaldsen, the music of a Paganini"?*

* G. Thomson, op. cit., p. 124.

It is possible that the motions of muscles, inferior to a mechanism in precision of repetition, yet superior to it in the execution of the varied instructions generated by the brain, may become prototypes for industrial and experimental technologies. It is just as possible that devices imitating muscular function will be made up of synthetic macromolecules and operate in a series of mechanical and chemical processes, thereby simulating a muscle also in terms of molecular composition. However, at least in the next several decades, the most probable trend in the restructuring of power-operated devices will forego synthetic polymers simulating muscular mechanical and chemical functions, so that muscular functions will be performed by systems operating with crystal lattices.

However, we have been over that ground. The feedback from the biological sciences to physics consists in that the functions of a living organism provide the matrix or the end goal of a cybernetic design. It goes without saying that mechanisms will be able to do, and are already doing, jobs beyond the ability of the human organism. That, however, does not detract from the end-goal value of the human organism for the simple reason that cybernetics will always have for its ultimate objective transformation of human labour.

* Frederick Engels, Dialectics of Nature, Moscow, 1973, p. 172.

CYBERNETICS

The main trend in the transformation of the nature of Man's labour in the late 20th and early 21st centuries will be determined by quantum physics. Here we come to cybernetics, a science that has gone a long way beyond its historical antecedents---ancient mechanical imitations of animals and Man. Cybernetics has progressed beyond them not only in terms of mechanical complexity, or of the physical and technical principles utilised, but functionally as well. Cybernetics is not focused on imitating the biological functions of living organisms or biological behaviour as its main objective.

This discussion is concerned with the main challenge underlying the prognostication for the year 2000 which we have selected. A few words will be in order here about this challenge. It is one of programming both the self-reproduction of mechanisms and, more importantly, their progressive evolution, improvement, and advance to new parameters. Such dynamic programming is a radical departure from biological phylogeny, an evolution governed by the statistical laws of selection. The phylogenetic laws governing technology, technological progress, and the changing technical standards of each new generation of machines are a different matter. Technological progress frees Man of the stranglehold of the statistical laws of natural selection. The programming of technological progress will find its highest expression in a succession of generations of machines capable of creating operating patterns superior to their own. We are going to see that this substitution of Man in his dynamic reconstructive function will be Man's apotheosis in his essentially human quality.

The evolution of cybernetics went on hand in hand with the study of the laws of organic life at the level of the molecule, cell, muscle and the nervous system. These laws were progressively generalised, and a relationship was established between physiological and psychological concepts, on the one hand, and physical---and the even more abstract information-science---concepts, on the other. Norbert Wiener's Cybernetics, or Control and Communication in the Animal and the Machine, especially the introduction to that book, contains a vivid and illuminating discussion of the early progress of cybernetics, an account that brings out the significance of biological problems and concepts for the genesis of the new science. Yet, even in its early stages and increasingly more so later, cybernetics gave evidence of its involvement with an active transformation of nature, with that "artificial adaptation" which Marx distinguished from natural, biological adaptation.

Living tissue, even at the molecular level, is characterised by a structure involving a very large number of elements. This feature of living tissue stems from its function of storing genetic information which underlies the ontogeny of living organisms and their self-reproduction, i.e. the retention and evolution of specific features in phylogeny.

The structure of a DNA molecule determines that of an RNA molecule and, through the latter, the structure and behaviour of the cell and organic tissue. In general, however, this structure is subject to change only through the action of outside factors, such as radiation. It may, of course, undergo spontaneous mutation, which is not programmed in the genetic code either. A DNA molecule contains no feedback mechanism capable of causing chromosomal change, a restructuring of the genetic code and the choice of a different phylogenetic direction.

Let us now take a look at the cybernetic mechanism, always keeping in mind that we are looking at a prospective situation, at tendencies which are there today, yet will find their completion in the future. A computing device which has graduated to the control function involves the following operation. At every point in time, the device has the ability to calculate the results of a command which the device can address to itself, i.e. order its own restructuring, or to some extraneous receiver. The device can calculate the results of a choice of commands and select one of them as the optimal, i.e. conducive to the maximum or minimum values of certain controlled variables. A case in point is a chess-playing machine capable of taking its choice of several possible moves and of pre-calculating the consequences of each of the courses of action. That is feedback. It must be emphasised, however, that feedback of this type is not empirical, as is generally the case with the behaviour of living organisms. When a choice made by a living organism is determined by a conditioned reflex, the empirical nature of such choice is obvious. Even where an animal chooses a path leading to a water hole, its prey, or a safe hiding place, its mind scans the empirical images stored in its memory.
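This pre-calculation of consequences can be sketched in a few lines. The commands and the simulated dynamics below are invented illustrations; the point is only that the device evaluates outcomes that have never actually occurred, instead of recalling stored ones:

```python
def best_command(state, simulate, commands, depth=2):
    """Non-empirical feedback: compute, rather than remember, the result
    of each candidate command, looking `depth` steps ahead, and return
    the command leading to the maximum of the controlled variable."""
    def value(s, d):
        if d == 0:
            return s  # the state itself is the controlled variable here
        return max(value(simulate(s, c), d - 1) for c in commands)
    return max(commands, key=lambda c: value(simulate(state, c), depth - 1))

# Hypothetical dynamics: a command shifts the variable; large values cost a little.
commands = [-1, 0, 2]
simulate = lambda s, c: s + c - 0.1 * abs(s)
chosen = best_command(0.0, simulate, commands)
```

None of the evaluated futures exists anywhere in a ``memory''; they are generated and compared before any of them is realised.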

Conversely, the results of computation or logic operations performed by the machine need not be stored in its memory. This is prognosticative information, a "genetic code" with non-empirical feedback.

Owing to the non-empirical nature of its feedback, a cybernetic device can receive a set of variable data which have never actually existed and, after a comparison with other potential sets of variables followed by selection of the original set, create a new, actually existing system that is not, however, based on actual precedents. Thus, the dynamic function---restructuring of the system and its behaviour, which is not programmed by the genetic code in the organic world---is determined in cybernetics ante factum, as a part of prognosticative information connected through feedback with a potential rather than an actual ``factum''. The dynamic "genetic code" of artificial adaptation may involve mutations of a more or less far-reaching dimension. In computing and selecting the optimal parameters of a new machine, a cybernetic device can work changes in components, component arrangement, production process rate, etc., without changing the ideal cycle; but it can also order a switch to a different cycle. We shall soon have occasion to go back to this hierarchy of increasingly more far-reaching dynamic shifts in automatic production process optimisation.

There is an essential difference between the function of a single particular element of living matter in organic evolution and that of a particular element in a cybernetic device. An element of an organic structure---a single DNA, RNA or protein molecule---has as little effect on the destiny of organic evolution as a single molecule on the course of thermodynamic processes. What happens to an organic molecule, say, a structural change, may trigger off mutation; yet, for that mutation to affect phylogeny, the destiny of the species, the organic evolution, would require a mechanism of natural selection with its essentially macroscopic statistical laws. Artificial selection, for example, radiation selection, is a different proposition, with discrete mutations each having their own effect.

In cybernetic machines, the role of discrete individuals of the species is played by vacuum or crystal devices adapted to respond in a certain way to various signals. An incorrect response of a particular component device is a possibility which is either corrected or compensated for: cybernetic machines comprise special control and compensating devices. However, these devices operate by monitoring rather than by ignoring the behaviour of individual components of the machine.

The demarcation line between ignored and monitored components lies between "focal spot" genetics and genetics affecting a macroscopic "field of fire", both discussed in the preceding essay. Cybernetics, on the other hand, is concerned exclusively with "focal spot" events and with the monitoring of discrete events and individual behaviour. Random events in cybernetics, specifically those ensuring reliable control, do not result from discrete events being ignored.

Bradbury has a story about a journey into another geological epoch in a sort of time machine. While there, one of the members of the expedition happens to kill a small animal, thus changing the entire biological evolution and history of the Earth. The change is for the worse: upon return to the here and now, the travellers learn of the victory of a fascist candidate in a US presidential election. Clearly, this fantastic picture does not correspond to the laws of biological evolution: the latter depends entirely on macroscopic developments, the consequences of the actual fate of an individual being submerged in a sea of random entropy balanced only by the macroscopic ordering of existence. In cybernetic devices, however, an elementary process plays a far from negligible role, to which Bradbury's concept is fully applicable. In a cybernetic machine looking for an optimal decision, an elementary process can affect the latter. It was mentioned earlier that cybernetic machines incorporate means to control and neutralise random elementary responses; yet such means operate by taking account of, rather than by macroscopically averaging out and ignoring, individual elementary events. Cybernetics is based on the ordering of microscopic processes.

It should be remembered, however, that the ordering of microscopic processes is different from the macroscopic ordering of existence. The orderly behaviour of a statistical total of molecules means that, on the average, all the molecules move in a well-ordered manner to form a moving body (or a motionless body in some frames of reference). There is no uncertainty as to the position or momentum of the macroscopic body; the entropy is indicative of the disorderliness of the molecules comprising the body.

On the other hand, where we deal with bodies which do not consist of a large number of particles, such as microscopic bodies, crystal lattices, molecules made up of a small number of atoms, atomic nuclei and, finally, elementary particles, the entropic absence of information is the consequence of an uncontrolled action of macroscopic bodies on particles rather than of ignoring some more finely fragmented particles. What is involved here is the uncertainty of a single individual event. This is not a case of limited ordering and, consequently, limited information, but rather one of reinterpretation of the meanings of the terms ordering and information in which a certain dispersion of values gives them an added dimension. This, however, will be discussed later.
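The ``uncertainty of a single individual event'' invoked here is, in quantum mechanics, expressed by Heisenberg's relation (standard notation, not taken from the text):

```latex
\Delta x \, \Delta p \ge \frac{\hbar}{2}
```

For one particle, the dispersions of position, \(\Delta x\), and of momentum, \(\Delta p\), cannot both be made arbitrarily small. The dispersion is intrinsic to the individual event, not a residue of ignored finer-grained constituents, which is precisely the reinterpretation of ordering and information the passage refers to.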

Cybernetics has emerged and is capable of dealing with technological and scientific research problems precisely because science has the power to make vigorous inroads into the quantum world of orderly microscopic processes, to affect selected microscopic events, and to undertake direct investigation into the discrete nature of matter and radiation.

Generally, the chains of mathematical and logic operations effected in a binary cybernetic machine are similar to a popular game in which a man's name is found by asking a finite number of questions answerable with either ``Yes'' or ``No''. An elementary location of a cybernetic device, which comprises thousands of such locations, goes through the process of answering ``Yes'' or ``No'' as it progressively brings the answer closer to the question addressed to the machine. The answer may be the result of computing a string of equations involving any range of problems, down to medical diagnosis, a train schedule for an entire railroad network, the parameters of a new cybernetic device, or the probabilities of various atomic reactions. The important point is that these elementary processes are paradoxes: from the classical standpoint they are impossible in principle.
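In modern terms the name-guessing game is a binary search: each ``Yes''/``No'' answer discards half of the remaining candidates, so n names are resolved in about log2(n) questions. A minimal sketch (the names are invented):

```python
def guess(names, is_before_or_equal):
    """Locate an item using Yes/No questions only: each answer
    halves the set of remaining candidates."""
    names = sorted(names)
    lo, hi = 0, len(names) - 1
    questions = 0
    while lo < hi:
        mid = (lo + hi) // 2
        questions += 1
        if is_before_or_equal(names[mid]):   # "Does the name come at or before this one?"
            hi = mid
        else:
            lo = mid + 1
    return names[lo], questions

names = ["Ada", "Boris", "Clara", "Dmitri", "Elena", "Fyodor", "Galina", "Hugo"]
secret = "Elena"
answer, asked = guess(names, lambda n: secret <= n)
```

With eight candidates, three questions always suffice, since each binary answer contributes exactly one bit of information.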

It was precisely the statement of such paradoxical processes that launched the quantum theory of the electromagnetic field. Planck's discovery of the discrete nature of the radiation and absorption of electromagnetic waves was followed by Einstein's interpretation of the quanta of the electromagnetic field, derived from the nature of the photoelectric effect, which was unaccountable in terms of classical physics. The photoelectric effect results from light ejecting electrons from the surface of a body, e.g. a metal plate. The energy of the knocked-out electron equals the energy supplied by the electromagnetic wave. It might seem that the farther electromagnetic waves travel, the lower the energy at each point of the receding wave, since energy density falls as the wave front broadens. Thus, the energy of the electromagnetic wave at the point of the ejection of an electron should be a function of the distance between the plate and the light source. Actually, the energy of an electron ejected from a metal surface does not depend on that distance. In the words of Kramers, it is as if a sailor dived overboard from a ship and the circular wave propagating from the spot of impact in every direction reached another sailor bathing at the opposite end of the sea with enough energy to cast that other sailor out of the sea and onto the deck of his ship. Paradoxical in classical terms, this phenomenon was made natural by Einstein's theory that light is a flux of quanta termed photons.
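Einstein's resolution ties the ejected electron's energy to the frequency of the light rather than to the intensity reaching the plate. In standard notation (not in the text itself):

```latex
E_{\mathrm{kin}} = h\nu - W
```

Here \(h\) is Planck's constant, \(\nu\) the frequency of the incident light, and \(W\) the work function of the metal. Moving the plate away from the source reduces only the number of photons arriving per second, i.e. the photocurrent; the energy \(h\nu\) carried by each photon, and hence the energy of each ejected electron, is unchanged, which removes the paradox of the diving sailor.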

Other processes---similar to the photoelectric effect in that they are paradoxical from the classical point of view and natural in quantum physics---are becoming the principal technically applicable processes in quantum electronics and radiation genetics. It is precisely such processes that have enabled cybernetics to grow into the key factor for the transformation of the nature of human work.

The first generation of cybernetic machines utilised vacuum devices. If two electrodes are soldered into a glass tube which is then exhausted, electrical current will flow through the tube when one of the electrodes emits electrons which reach the other electrode. To make that possible, one of the electrodes must be heated. The heated filament of an incandescent lamp emits both quanta of light and electrons. The other electrode may be soldered into the wall of the valve and the valve itself connected so as to make the filament a cathode and the second electrode an anode. In this case, negative charges---electrons---will pass from the cathode to the anode, making the valve a conductor.

By varying the design of, and recombining, such circuits, devices can be made that will pass current in one direction only, devices that pass current or, conversely, act as insulators when a voltage is applied to them, or devices that turn the current on or off in response to two identical or different signal pulses. In general terms, such vacuum devices operate, as it were, by answering ``Yes'' or ``No'' (by closing or breaking the circuit) in response to two signal pulses (which corresponds to the conjunction ``and'' in a question) or in response to one of the signal pulses (corresponding to the conjunction ``or''). This kind of response is analogous to an affirmative or negative reply to a question indicative of the conditions expressed by a logical operation. An important factor here is the speed with which a reply is provided. In biological evolution, the environment supplies an affirmative or a negative answer to a question asked by organisms which have undergone a mutational change. It gives either an affirmative answer (the mutational change is made permanent by selection), or a negative answer (the mutational change is rejected). The answer requires a mass experiment involving a long line of generations, and may take thousands of years. The analogy suggests a macroscopic device designed as a lock which opens (affirmative answer) or stays locked (negative answer), or a device in which a core is either attracted by an electromagnet or not. In both cases, however, if these devices were to be operative, their macroscopic nature would require much greater energy levels and longer periods of time for each elementary process. The macroscopic regularity of processes implies the uniform action of particles assembled in enormous complexes. These complexes---levers, shafts, gears, electromagnet cores, etc.---go through their motions in the macroscopic spatial scales of millimetres and centimetres and in the macroscopic time scale of seconds. As a result, even with very high energy levels, the reaction speeds of macroscopic mechanisms cannot be high.
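For the modern reader, the ``Yes''/``No'' behaviour of such valve circuits can be sketched as Boolean functions. This is a minimal illustration in the language of today's programming; the function names are ours, not the text's:

```python
def and_valve(pulse_a: bool, pulse_b: bool) -> bool:
    """Closes the circuit ('Yes') only when both signal pulses arrive:
    the conjunction 'and' described in the text."""
    return pulse_a and pulse_b

def or_valve(pulse_a: bool, pulse_b: bool) -> bool:
    """Closes the circuit ('Yes') when either signal pulse arrives:
    the conjunction 'or'."""
    return pulse_a or pulse_b

def rectifier(direction_forward: bool) -> bool:
    """Passes current in one direction only, like the diode valve."""
    return direction_forward
```

Recombining such elementary answers, as the text notes, yields devices of arbitrary logical complexity; the point of the electronic realisation is the microscopic speed at which each answer is produced.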

166

PHILOSOPHY OF OPTIMISM

PART TWO. SCIENCE IN THE YEAR 2000

167

The underlying structure of electronics is a multitude of microscopically ordered events producing macroscopic effects. Let us illustrate this by electronic devices referred to above. The initial process is electron emission, an essentially microscopic process. Its end result is the closing or breaking of an electric circuit which must have enough power to cause the displacement of macroscopic bodies. Microscopic events involving discrete particles of matter and radiation occur on an infinitesimal time and space scale, in which thousandth and millionth fractions of a centimetre and a second are significant. Events providing the point of departure for such macroscopic processes as a change in the operating conditions of dozens of giant power plants occur in just such time and space intervals.

Signals controlling resistance changes in electronic devices may be light signals affecting cold cathodes. The photo-electric effect---the effect, paradoxical in the classical frame of reference, which suggested the existence of light quanta---is the underlying basis of photocells, in which light liberates from the cathode electrons that carry electric current in a vacuum tube. In light signal applications, the range of signals provided as input to electronic devices includes not only all visual images, ranging from objects under the lens of a microscope to a sky of stars observed through a telescope: a photocell is activated also in response to electromagnetic oscillations beyond the visible spectrum. These, as was mentioned earlier, permit connection of cybernetic components by means peculiar to "focal spot" quantum electronics, laser beams including ultraviolet and other rays of still shorter wavelengths.

Electronic valves have been followed by other devices whose emergence and development were the result of the rapid progress in the quantum theory of solids. Classical physics of solids defines a solid as a conglomerate of particles which are very small solids behaving in the same way as the parent macroscopic body. Quantum physics of solids describes processes unknown to the macroscopic picture of the world. It is precisely these processes, paradoxical as they are in the classical frame of reference, that are utilised in cybernetic devices. The quantum theory permits a much more accurate picture and a much better control of the specific processes involved in the changing conduction of crystals. Next comes a range of semiconductor materials which depend for their conductivity on their composition and external factors, e.g. absorption of light. For this reason, semiconductor systems can perform all operations effected by vacuum systems, consisting in the reception and processing of data contained in input signals.

Semiconductor devices have grown from their original application in radio engineering to become the key elements of cybernetic devices. They require a fraction of the energy consumed by electronic valves, in which a major portion of the energy is used to heat the electron emitter---the cathode. A still more important consideration is the fact that electronic processes occurring in a crystal have a much greater velocity than in a vacuum. Used as the elementary components of cybernetic devices, semiconductor units have increased the number of operations per second from thousands to hundreds of thousands and millions.

Other applications of the quantum theory in physics of solids, apart from semiconductors, have contributed to the increased capabilities of cybernetic machines. These utilise processes that, from the classical viewpoint, would seem paradoxical. A phenomenological description of the processes employed in cybernetic devices could forego an in-depth account of the non-classical theory. However, a characteristic feature of cybernetics is a virtually uninterrupted advance toward new physical principles, a systematic construction of instruments of increasingly radical and fundamental novelty, and a consequent robotisation of increasingly complex operational steps. This progressive development which makes possible both a non-zero accelerated rate of scientific and technological progress and a non-zero growth rate of its acceleration requires that science go beyond a phenomenological description of such processes.

For the purposes of this book it is not necessary to go on with this account of the elementary devices comprising cybernetic machines, or to describe their circuitry, the block diagrams used in automatic computation, data reception and processing, store units and controls. This book is only concerned with two aspects of the problem: the relation of late 20th century progress in science and technology to non-classical science and the effects of that progress. The all too brief accounts of the physical and technical principles embodied in thermionic vacuum tubes, semiconductor devices and cryotrons provide an illustration of the dependence of cybernetics on quantum mechanics. And now for the economic effect of cybernetics---not an enumeration or a systematic account of the benefits of cybernetics for industry, transport, communications, etc., but rather its total effect, which can be indicated without itemising the particular technological applications of cybernetic machines.

This total effect involves the automation of still transient production processes and a higher level of production dynamism. When photocells first came into fairly wide use in the 1930s, people were impressed by the demonstration of one which used the weak light of a distant star to switch on lighting and power at a large international exhibition. In those days, the promise of automation was related, in people's minds, to the capability to switch high-tension circuits on and off in response to light signals or to a change in their intensity. Today, we are concerned with a different and much more radical transformation of production, culture and scientific experimentation. Signals and underlying electronic processes launch long trains of other electronic processes in which each subsequent process is related to its predecessor as a step in a logical or mathematical deduction. These long series are translated into such practical applications as computation, solutions of equations, new optimal designs, production processes, flows of goods, optimal patterns of industrial distribution, etc. The first generation of cybernetic machines could have provided the first point of departure for a prognostication of the nature of work different from that of the first half of this century. Cybernetic machines do not just eliminate human involvement in operations consisting in switching an electromagnetic device on or off---this could be handled even by the early photocell relays of the 1930s and 1940s. Even the first generation of cybernetic machines had a much greater capacity, that of taking over Man's dynamic functions. We shall explain this.

Production, like Nature, offers a series of processes each of which can be seen as a repetition of the same constant action. Superimposed over the latter is another process consisting in altering the original action, which ceases to be constant. Let us suppose that the repetitive constant process is represented by an inertial motion, uniform and rectilinear. The repeated aspect of the motion will be the passage of each of the equal spaces making up the distance travelled in equal time intervals. The constant in this case is velocity. Let us now suppose that another process, a change in velocity or acceleration, is superimposed over the motion of the body. This latter process is dynamic in relation to inertial motion, yet it comprises something that may remain constant---the rate of acceleration. Now a rate of acceleration that is subject to change represents a higher level of process dynamism. Where the processes are continuous they will have corresponding constants represented by higher-order derivatives of the distance covered with respect to time: first derivative (velocity), second derivative (acceleration), third derivative (rate of acceleration growth), etc.
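The hierarchy of constants just described can be illustrated numerically: successive finite differences of sampled positions estimate the first, second and third derivatives. The sample values below are invented for the illustration (a cubic path, for which the third derivative is constant):

```python
def finite_differences(values, dt):
    """Successive differences divided by the time step: a discrete
    estimate of the derivative with respect to time."""
    return [(b - a) / dt for a, b in zip(values, values[1:])]

dt = 1.0
# Positions sampled from s(t) = t**3 at t = 0, 1, ..., 5.
position = [t**3 for t in range(6)]          # [0, 1, 8, 27, 64, 125]
velocity = finite_differences(position, dt)      # first derivative
acceleration = finite_differences(velocity, dt)  # second derivative
jerk = finite_differences(acceleration, dt)      # third derivative
```

For a cubic path the third difference is constant, mirroring the text's point that each level of the hierarchy may have its own invariant while the levels below it vary.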

The field of production offers an example of repeated identical operational steps followed by the dynamic process of transition from one set of steps to another involving redesigned machinery and technological restructuring but no change in the physical or chemical flow diagram, then a change in the flow diagram---a process discussed at the beginning of this book. It may be suggested that cybernetics will govern dynamic processes of a progressively higher level.

The photocells of the 1930s and 1940s controlled automatic transition from one operational step to another, yet they produced no change in the set of steps comprising the production process as a whole. Now the dynamic function consists in transition to a new set of operational steps, a novel production process utilising new machine designs. This function can be largely automated through the use of cybernetic machines which compute design and process parameters of increasing refinement under the direction of control programmes. Hence the transition to a higher level of production process dynamism and the concentration of human endeavour on progressively more dynamic functions. A cybernetic machine controlling an unchanging stationary process has a relatively simple set of operations to perform. However, where the process is still subject to change, a more sophisticated data transmission circuitry is required. If a cybernetic device is to control the load on industrial equipment, the flow of goods, the redistribution of electric power in a network, etc., it must utilise very long logico-mathematical sequences. For a load redistribution function to be continuous, the elementary operations must be performed at lightning speed. Accordingly, high-speed devices make it possible to bring automation to increasingly dynamic processes, to go from constant process control to continuous process optimisation to suit changing conditions. The next step in optimisation leads from load redistribution among existing machines to the automatic design and development of new machines of greater accuracy and refinement. In this case, the function of chains of processes occurring in the cybernetic components is to transmit data representative of the sequential states and effects of systems comprising a myriad of components and to collate a swelling avalanche of alternatives. A good analogy would be a chess game played on a board with an enormous number of squares and pieces and continuously changing rules, the game being non-stop, continuous, with no time to think over the moves.

The above makes clear the potential effect to be obtained by going in cybernetic technology from vacuum devices to semiconductors. Although this effect cannot yet be seen in clear detail, it can be stated in this general language: modern cybernetic technology permits the automation not only of established processes and of the dynamic processes of machine load redistribution, but also of the dynamic processes involved in the design of new machines and the restructuring of production processes.

It may be suggested that the next decade will witness the completion of the introduction of automation in established processes and of automatic control over them. Random disturbances of an established rhythm and sequence of operational steps will be eliminated by automatic control. The principal function of cybernetic devices, however, should be dynamic process control, load redistribution coming under this heading first. In that case, control has for its purpose the solution of problems along this line: What is the pattern of load changes that would give maximum efficiency in meeting the need for the operation of various machines? Let us take by way of illustration several power plants tied into a single power grid by high-tension lines. To keep the plant boilers operating at regular efficiency levels and to eliminate occasional trouble spots by automatic control, thermoelectric, photoelectric and other conventional devices would be adequate. Now, if loads are to be redistributed among power plants and individual units to take care of changing power requirements or in response to other variables, mathematical problems must be solved at high speeds and optimal decisions translated into action by automatic control. The same is true of gas and water supply, heating utilities, carriage of goods and, to an increasingly greater degree, of fuel and raw materials extraction, continuous industrial processes, etc. It may be suggested that within the next decade or two every key economic activity will adopt dynamic techniques represented by cybernetic load control devices.
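The load-redistribution problem described above can be caricatured by a deliberately simplified "merit-order" dispatch: meet a changing demand by loading the most efficient plants first. The plant names, capacities and costs below are invented for the example; a real grid would require far more elaborate optimisation:

```python
def dispatch(plants, demand):
    """Greedy merit-order dispatch.

    plants: list of (name, capacity, cost_per_unit) tuples.
    Returns a dict mapping each plant name to its assigned load."""
    loads = {name: 0.0 for name, _, _ in plants}
    remaining = demand
    # Load the cheapest (most efficient) plants first.
    for name, capacity, cost in sorted(plants, key=lambda p: p[2]):
        take = min(capacity, remaining)
        loads[name] = take
        remaining -= take
        if remaining <= 0:
            break
    return loads

plants = [("A", 100.0, 2.0), ("B", 150.0, 1.0), ("C", 80.0, 3.0)]
result = dispatch(plants, 200.0)
# Plant B (cheapest) takes its full 150 units; A covers the remaining 50.
```

The point of the text is not this trivial arithmetic but its speed: when demand changes continuously, such a computation must be redone continuously, which is feasible only for electronic devices.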

This, however, is but the first phase of the overall task facing cybernetics in production applications. Clearly, load redistribution brings dynamism to selected production processes and individual industrial enterprises, but the production field as a whole remains an established process, nor are its overall parameters, including the key factor of the productivity of social labour, affected. What is missing is a generally irreversible dynamic evolution affecting the entire production field---technological and techno-economic progress.

This kind of evolution is made possible by the adoption of new designs and new production processes. Can cybernetics assume the responsibility for providing a solution to this problem? To answer this question, some possible causes of misunderstanding must first be eliminated. Specifically, this does not, by any means, refer to an actual physical development of the machine of John von Neumann, a cybernetic machine capable of self-reproduction in a series of machines having the same parameters. Nor are we concerned with a cybernetic designer who would replace a human designer. What we are interested in, however, is a human designer who will use a cybernetic machine to arrive, at a very high speed (virtually instantaneously in terms of the actual time of design work), at the specific parameters corresponding to each new alternative of the design being worked on, to work out the effect of each alternative, compare the alternatives and select the optimal one. The number of human designers who will employ the services of cybernetic machines is not the essential point. What is essential is that the rate of design work and of the development of novel production processes will be greatly increased. Technological progress will become continuous even if observed in individual production areas and over relatively short periods of time.


With reference to production and the evolution of technological and techno-economic parameters, the term ``continuity'' has a specific meaning. Variables averaged out statistically, for example, for the entire field of production, are subject to change. Subject to this proviso, there were periods in the first half of this century in which the technical level rose at a continuous rate. Today, cybernetics applied in engineering design offices, factory laboratories and research and development centres ensures continued technical progress both in terms of overall production and of individual industries.

The revolutionary turning point in the history of technological progress is the advance from the continued growth of technology to a continued acceleration of the rate of that growth, a subject dealt with in the essay "Why the Year 2000?". The underlying structure of this acceleration is the emergence of more and more ideal physical and chemical schemes, an approach to which is, essentially, technological progress. What is the governing factor behind the emergence of new ideal schemes, i.e. behind scientific advances in areas directly related to applied problems? The rate of scientific progress in these areas depends on feedback, on the application of research findings to production, on pure research which does not produce any tangible applied effect, on efficiency in the sharing of scientific information, and, in a very large measure, on the speed with which theoretical deductions are related to experimentation. Cybernetics adds an important dimension to all of these research-accelerative factors. We propose to deal only with the last one, the speed with which theoretical deductions are verified by experiment. One of the effects of the modern trend toward the use of mathematics in virtually every science is that the interval between a theoretical concept and a practical conclusion that could lend itself to experimental verification generally involves a long series of computations which require months, sometimes years, of human investment. Machines, on the other hand, do the mathematics in a matter of minutes. Thus, computerisation affords one of the grounds warranting the belief that the closing decades of the 20th century will be marked by a virtually unceasing stream of new physical and chemical schemes---the teleological schemes of technological progress.

The rate of scientific progress determines the rate of acceleration rather than the speed of technological progress. Technological progress may be characterised by a certain speed of its own and be virtually unbroken given some immutable ideal schemes as the end result of technical creativity and the endeavour of inventive genius and technological development. Where ideal schemes undergo continuous change, technological progress has continuous acceleration.

Is it possible to indicate a greater rate of dynamism in production and forecast a speeding up of the accelerative rate of progress? The problem has been touched upon earlier, and we will not go so far as to say ``Yes'' right now. Such a rate of technical advancement will be possible where scientific creativity approaches its ideal schemes at an increasingly greater speed, i.e. with acceleration, and the ideals of science are flexible. Just what are "ideals of science"? Have they become flexible, or will they be so in the future? We shall have to deal with these questions shortly. Right now, suffice it to say that speculation on the ultimate ideals of scientific knowledge will not become the function of a cybernetic machine in the foreseeable future, although it will increasingly draw for its validity on computations and observations afforded by electronic machines. The picture of a cybernetic robot speculating on the fundamental principles and ideals of science remains a figment of the imagination, at least for the 21st century. We shall have occasion to return to these principles and ideals later.

It will be seen from the above that elimination of the human factor in a particular function or functions is the least significant effect of cybernetics on the nature of human labour. More essential is the qualitative change in human endeavour, the explosive growth of the field of its application, the multiplication of the forces of nature purposefully utilised by Man. The transformation of labour consists in Man's concentration on increasingly more creative functions. Change in the operating and load parameters of machinery, technological change, change in ideal schemes, i.e. the purposive functions of technological progress, change in the principles of science---the whole hierarchic range of dynamic processes---represent the rungs of the ladder which Man is progressively ascending. Human endeavour has at all times been characterised by progressively more dynamic components: changes in technology, science and fundamental principles. These, however, were no more than discrete turns in the path of progress. The rungs in the ladder of progress are represented by those points in time when an increasingly dynamic function emerges as a continuous process. The prognostication for the year 2000 reveals the emergence of a continuous acceleration of technological progress backed up by the continuous evolution of new ideal schemes of mechanical, physical and chemical cycles.

It is not hard to see that a higher dynamic function would not be practicable in the absence of a lower function. Scientific research will promote technological progress provided we have engineering design offices, industrial laboratories and development centres capable of absorbing progress-accelerative ideal schemes and having the technical facilities to put such schemes to use, handle the mathematics of their various engineering embodiments, compare the latter and select the optimal solutions. Clearly, the mathematics, comparison and selection must be done at the high speed made possible by electronic computers.

On the other hand, engineering design offices, laboratories and development centres are able to perform their dynamic function only where the industry can translate new designs into operable physical machinery. An element required to realise a scientific discovery, introduce a new piece of equipment, machine tool or process will, in all cases, be cybernetic machines capable of translating ideal schemes into actual parameters and, subsequently, into performance characteristics, of comparing them and finding optimal solutions. Cybernetics is precisely that tool which brings unity to science, technology and industrial operation by eliminating time intervals between them, the end result being a package whose individual components are not realisable separately.

At this point, a comment would be in order on the relationship between cybernetics and non-classical physics, or rather what might be described as the spirit or style of non-classical science, which is directly related to the changing nature of scientific thinking in the 20th century. Classical science relies on the mechanical interaction of elements of being for an explanation of macroscopic and cosmic processes. The function of the elements of being in the 18th and 19th centuries was provided by molecules and atoms; in the late 19th century elementary particles, which proved to be unaccountable in terms of mechanical interaction of any smaller sub-particles, came on the scene. In our day, at the close of the 20th century, it has become apparent that the behaviour of elementary particles must be related to galactic behaviour, that processes on the order of billions of light years provide an insight into those processes which occur on a scale of 10^-13 cm and 10^-24 sec, that astrophysics cannot be properly divorced from elementary particle physics, that elementary transmutation would be meaningless without reference to the macroscopic picture and world lines. These points were made earlier and will be dealt with occasionally in later portions of this book. What we are concerned with at this point is a fresh concept of the infinitely (or finitely) great and the infinitely (or finitely) small, a fresh perception of the relationship between the Whole and its local elements ("outside of here and now" and "here and now"), a fresh approach to the integral and the differential and, by contrast with purely physical constructions, not so much the content of physical and astrophysical concepts as changes in the subject of cognition of the world, in Man's reason per se, in the style of cognition. It is precisely these problems---changes produced in the subject of cognition by the object of cognition and the content of that object---that stand closest to the theme of this book, the philosophy of optimism. Laplace's "intellectual self-penetration" now accompanies every, or nearly every, "intellectual advance": "intellectual self-penetration" represents a major effect exerted by science on logic and an important emotional accompaniment to Man's thinking.

The theory of relativity seen as a source of the philosophy of optimism, this concept of the theory being one of the central ideas of this book, means giving a relativistic dimension to the localised element of classical science---an individual moving body, motion unrelated to bodies of reference. What we have here is a basis for an integral concept of the world and motion, for relating motion to an infinite (or finite, yet infinite relative to its local elements) series of bodies. Atomic and nuclear physics is focused on the stocks of energy that came into being with the formation of nuclei and on the energy-releasing processes which underlie stellar evolution. This integral concept of the world and the creation of a megascience, the science of an infinite world and of its infinitesimal particles, represents one of the major effects of the theory of relativity on 20th century civilisation.

This integral approach is peculiar not only to our concept of, but also to the way we act on, the world. Modern production has a different effect on the world from that exerted by classical science-oriented production. The latter type of production resembled the contemporary scientific concept of the world; its effect was the static result of a plurality of individual facts, such as the phylogenetic effect of millions of individual tragedies that terminated the existence of individual living beings. The mass result in the production field, both economic and ecological, was the sum total of individual technological or economic acts statistically averaged out or summarised according to a particular macroscopic matrix. Today, a selected technological or economic act is a concentration of its eventual results: transition to fast reactors, for instance, restructures production resources, their location and the resultant ecology. Accordingly, modern production can no longer rely on a statistical play of natural forces: it is no longer thinkable without estimates embracing entire countries and many decades---a fundamental aspect which is mentioned here in passing, to be dealt with in detail later. As a result, we are witnessing the emergence of what can be described as "noospheric thinking", a perception of the ecological effects of scientific and economic actions, the two varieties now being indivisible, on a scale of planets and ages.

Non-classical science both formulates integral problems for itself and for production and finds ways to deal with them. The problems of choice and of the optimum, the estimation both of what is and of what ought to be, consist in large measure in finding and comparing integral solutions. And that is precisely what the early endeavours that gave rise to the variational calculus, and the problems of maxima and minima that made such an important contribution to the development and application of the principle of least action, were all about. Non-classical science, which relates local events to global and cosmic processes as it gains knowledge of, and transforms, the world on an unprecedented scale, sets up the task of finding solutions to fundamental problems, and of computation generally, as a condition of the application of its principles. In that sense, computer technology and cybernetics are fields of non-classical endeavour not only in that they rely on the use in computers of non-classical processes which are paradoxical in terms of classical physics. As important components of 20th century civilisation, as transformers of Man's thinking, of the subject of cognition and agent of volition, cybernetics and the mathematisation of knowledge and production find fresh sources for advancement in non-classical science.


``KNOW-HOW'' AND ``KNOW WHERE''

We would like to start this essay with a reminder of the entropy and negentropy concepts discussed in "Optimism, Being, Motion" in Part One, where mention was made that these two concepts can be generalised and extended from molecular motion and temperature gradients to other irregular events---entropy, and to a regularisation of the same---negentropy.

In the middle of this century, these concepts were applied with great advantage to communication theory. A signal, i.e. a set of ordered microscopic processes such as sound or electromagnetic wave modulation, is negentropic; on the opposite side of the scale is noise, a set of entropic irregular events, which works against, and interferes with, the former. Although a communication line transmits energy and pulses, that cannot be said to be its function: transmission of energy is effected by mechanical power transmission systems and high-tension lines. Telephone wires and radio waves, for their part, are designed to carry information, and the less the energy transmitted in the process, the better. Communication of information does not consist exclusively in energy transmission, although it is inseparable from the latter. Human speech does not consist in the transmission of the energy of air vibrations, or making noises, at least not always. In much the same way broadcasting involves more than vibrations of the ether.

There is no need to define information: it has been shown that the notion is general enough to include cybernetic devices and even DNA and RNA structures carrying information about the hereditary traits of living organisms.

In the discussion of cybernetics, a distinction was made between two types of information and, accordingly, two types of negentropy. In the case of the first type, the "genetic code" (the biological term is employed here in a generalised sense and used in quotes accordingly) does not involve long chains of elementary processes that would make it possible to foresee the goal toward which a particular alternative of future evolution is headed and to use feedback in order to make an advance selection of the optimal alternative. No such feedback is available in the biological genetic code or in the genetic information written in a DNA or an RNA molecule. In these cases, an organism and a species evolve in the absence of a dynamic advance model, nor does the molecule compare the various evolutionary alternatives. With the second type of information, the brain or a cybernetic device produces, practically at the same time, a choice of possible alternatives which afford a prevision of an evolutionary course, permitting an intelligent selection to be made.

This type of information, represented by statements in the subjunctive mood ("given certain conditions now, certain specified events would occur in the future")---this type of prognosticative information is characteristic of any productive effort, technological development, or productive activity. In other words, it is characteristic of any purposeful endeavour and, as Karl Marx said, represents the distinction between the worst architect and the best bee, which is superior to the former in the architectural precision of its honeycombs. The human brain projects images of the end result of a predetermined sequence of productive acts, enabling Man to select the sequence which will afford the optimal result. By the same token, a cybernetic device can be made which receives similar prognosticative information and selects the optimal alternatives, thus imitating the human function.

Let us discuss that function in terms of negentropy. Productive work promotes negentropy. It is found in a higher degree in the macroscopic structures referred to above---fibres are arranged in a certain pattern in fabric as opposed to raw cotton; owing to regular heat distribution the temperature is higher inside a boiler than in a condenser, indoors than outdoors; metal is distributed in a regular pattern in alloys and finished products, etc. The higher negentropy of these examples is the result of increased entropy elsewhere. However, it is not increased entropy that characterises productive work. In a closed system the degree of negentropy decreases as that of entropy increases. Production, however, is not a closed system: production means an increase of negentropy in one place at the cost of its decrease, i.e. at the cost of increased entropy, in a broader system.
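The entropy and negentropy concepts invoked here admit a simple quantitative illustration: Shannon's entropy of a symbol distribution, sketched below. It is a first-order measure that ignores the ordering of symbols---an assumption of the sketch, not of the text---so it contrasts a signal drawn from few symbols with noise-like variety:

```python
import math
from collections import Counter

def shannon_entropy(message):
    """Shannon entropy, in bits per symbol, of the symbol-frequency
    distribution of a message: low for a message built from few
    symbols, high for a noise-like spread over many symbols."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

signal_like = "abababababab"    # two symbols: 1 bit per symbol
noise_like = "qxkzvbnmwepryt"   # every symbol distinct: maximal spread
assert shannon_entropy(signal_like) < shannon_entropy(noise_like)
```

A negentropic signal, in this picture, is one whose distribution is far from the uniform, maximal-entropy spread that characterises noise.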

180

PHILOSOPHY OF OPTIMISM

PART TWO. SCIENCE IN THE YEAR 2000

181

Negentropic processes represented by today's avenues of technological progress involve prognosticative alternatives, optimisation, the choice of an optimal alternative.

Nuclear power technology raises temperature gradients to levels beyond the reach of classical power engineering. Lasers convert the scattered energy of spontaneous light-bulb radiation into the concentrated energy of induced coherent radiation of enormous density. Radiation genetics makes a transition from a scattered, irregular multiplicity of spontaneous mutations to an ordered pattern, hence to a controllable chain of artificially induced mutations. Finally, cybernetics represents a higher stage of ordered regularity: it operates by comparing very long chains of ordered events to select the optimal alternative end result.

The above negentropic processes bring order to both the sequence of physical events and the sequence of changes in such ordering activity. For this reason, genetic information in technology includes both such statements as "By passing current through the available conductor the latter can be heated and made to emit light..." and such statements as "The light emitted by the passage of current will vary with the conductor metal in the following manner...". Statements of this kind make it possible to go from one metal in an incandescent lamp to a different metal, compare results and find new, more efficient alternatives. What has been termed "genetic information" includes even such statements as "A light flux depends for its effectiveness on the transition to a given fundamentally new light induction scheme in the following way...". With information of this type it is possible to prognosticate, i.e. to identify the effect, and to plan, i.e. to identify the optimal alternative from among the forecast series, for the transition, say, from incandescent lamps to gas-filled lamps.

Where both the existing level of industrial technology (raised by approaching the ideal scheme) and the acceleration of technological progress (achieved by the actual adoption of new physical schemes) are growing, the consequence is an increase in the amount of information required to make a more dynamic development possible. This is the information which ensures the choice of the optimal end result of production cycles: a system of controls and regulators supplies data on variations in speed, voltage, pressure, temperature, composition of raw materials, end product characteristics, etc. Also, the process control system contains data on product standards and specifications which must be met. The controls and regulators receive signals warning of failure of regular operating conditions and of any instances of actual specifications deviating from predetermined standards. Signals of this type have the effect of automatically bringing the actual parameters closer to the standard. However, this information does not indicate the consequences of changed operating conditions, nor does it pinpoint the optimal alternative changes. The optimum in this case is a constant set of operating conditions and constant parameters. The mechanism of readjustment brings the actual parameters into line with the standard not because the standard values are better, but because they conform to the established process specifications. The information involved here is static.
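The feedback loop carried by such static information can be sketched in a few lines of code. This is only an illustration, not a description of any actual control system; all parameter names, numbers and the gain value are invented for the example.

```python
# Illustrative sketch of "static information": a regulator that nudges
# measured process parameters back toward fixed standard values.
# Parameter names, values and the gain are hypothetical.

def regulate(actual: dict, standard: dict, gain: float = 0.5) -> dict:
    """Move each measured parameter a fraction of the way back to its standard."""
    corrected = {}
    for name, target in standard.items():
        deviation = actual[name] - target          # the warning signal
        corrected[name] = actual[name] - gain * deviation
    return corrected

# One control cycle: temperature and pressure have drifted off-standard.
standard = {"temperature": 350.0, "pressure": 2.0}
actual = {"temperature": 362.0, "pressure": 1.8}
print(regulate(actual, standard))
# temperature moves from 362 toward 350, pressure from 1.8 toward 2.0
```

Note that the regulator knows only the deviation from the standard, not whether a different standard would be better: exactly the limitation the text ascribes to static information.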

Dynamic information, involving the results of altered operating conditions and of production process changes, comes into play to solve extremal problems and identify optimal alternatives. Dynamic production, characterised by a virtually continuous rate of change in process parameters, makes dynamic information a must at every stage of operation. A manufacturing plant faces a double task---uninterrupted production plus continuous accumulation of information on optimal ways to achieve a higher level of efficiency. Clearly, a greater amount of such information will be supplied by engineering design offices, industrial laboratories and R & D centres.

The production of dynamic information, i.e. of progress, is becoming an increasingly important component of industry: by the year 2000 it may be on a par with such key operations as power generation, transportation, machine building, etc. Just what goes into dynamic information?

The components of dynamic information, as was indicated earlier, include new design and process parameters, computed results of, and sequences of transition to, new technologies and ideal schemes, and identification of optimal structures, production processes and the most efficient ways to make them operational.

This information includes know-how, first and foremost. The term is generally used to designate knowledge of the conditions and the most efficient ways of introducing a new manufacturing technique. Technological know-how is accumulated through a long series of experiments, tests, pilot trials, data from initial operation, etc. The term could, however, be used in a broader sense---"we know how to achieve higher efficiency"---thus including new design parameters and the technological approach.

Then there is a second stream of dynamic information. Increased efficiency results in greater output, expanded reproduction, and the utilisation of new raw materials and sources of power supply. The same consequences are produced by technological and, in a still greater degree, scientific progress and accelerated technological development. Expanded reproduction leads to greater use of raw materials. This is possible only subject to faster accumulation of information on new sources of power, industrial raw materials and food. The big question is where to go for more coal, oil, gas, uranium, thorium, iron, raw chemicals, etc. To coin a new term by analogy with ``know-how'', this information may be called "know where". We need it badly: today's increasing efficiency and population growth rates require accelerated utilisation of natural resources for purposes of production. In that sense, one of the attributes of the atomic age is resource depletion, yet it is substantially relative depletion, i.e. the need to utilise less accessible, and not infrequently less concentrated, sources of valuable materials. The latter generally involve greater power requirements per unit for their practical utilisation, which is compensated by lower costs. The next few decades face another problem in relative depletion---exploitation of less known deposits. Accordingly, resource depletion is a problem in information, viz., information on new resources. As a result, information of the "know where" type is already on its way to becoming a major informational field, both in scale and investment. Like ``know-how'', "know where" information---geological, soil, hydrological and geographical studies and the growth of the corresponding disciplines---is becoming a field of productive endeavour comparable to the principal economic branches. Thus, in the same manner as ``know-how'', it embodies a part of divided homogeneous labour translated into material values.

Let us try to amplify and particularise these brief remarks on information in the atomic age. To start with, a few words on the interaction between ``know-how'' and "know where", a highly complex phenomenon which sometimes includes feedback. Resource information, "know where", may provide an effective argument for a changed technical policy, a changed technology, and a corresponding search for a fresh ``know-how''. In most cases, however, ``know-how'' is an independent variable.

The close relationship between the concepts of information and negentropy is clearly reflected in both ``know-how'' and "know where". In both cases, information, even in its conventional acceptation, is concerned with negentropy, i.e. some macroscopic ordering. Nuclear energy, to take one specific example, is by the very nature of its origin and physical characteristics the energy of nucleonic bonds in an atomic nucleus. The existence of atomic nuclei, the combination of fundamental particles into more complex structures, is a case of negentropy, a certain regular pattern of existence. Nuclear fission is also a negentropic process, and one which gives rise to intermediate forms of negentropy, of which temperature gradients are the most significant.

Any energy system operates with a certain reserve of negentropy. In the final analysis, classical energy science, too, involves the temperature gradient between the Sun and the Earth. Non-classical energy science involves the energy gradient between energy levels in different nuclei, i.e. differences in "density ratios", in the unit energy of nuclear bonds. These gradients and forms of negentropy came into being with the emergence of the various elements of the Periodic Table.

At a certain stage, information on nuclear reactions included data in respect of the conversion of thorium into uranium-233, thus making possible thorium power breeders. This expansion of ``know-how'' led to interest in an estimate of available thorium reserves and in narrowing down that estimate. Thorium proved to be abundant, thus stimulating further research and construction of more experimental thorium breeders. The practicability of such breeders broadens the range of future thorium prospecting programmes and largely meets the challenge of uranium depletion and, for many years to come, solves the entire problem of nuclear fuel resource depletion.

Thermonuclear reactions would work an even more far-reaching revolution in the nuclear source picture. Because thermonuclear fusion requires deuterium, "know where" is no problem: deuterium is found in water in a fairly regular proportion---about one part per seven thousand, or 0.014 per cent.

Today, or more likely in the few decades ahead, absolute investment in fuel resource exploration for the benefit of ``classical-type'' power plants may be forced significantly down by the lower cost of atomic power. For some time to come, classical-type power plants utilising rich, readily accessible fuel resources will be able to keep up successful competition against the atomic power generating industry. However, in areas where preliminary exploration is required, new power plants of the conventional variety will probably prove economically unprofitable.

This will not reduce the flow of "know where", yet the direction of that flow will be altered. Atomic power generation will have the effect of expanding the search for raw materials. Clearly, if a particular fuel can be successfully replaced by a different variety, there may be no need to explore little known deposits of the former. The present development of chemical science holds the promise that anything may be produced from anything, or at least that we will have the knowledge of how to do it. It will be possible to choose---and this is already being done right now---the alternatives involving the least cost from among a growing multitude of available approaches. These least-cost alternatives will probably utilise every, or virtually every, element of the Periodic Table as initial raw materials.

Many poor deposits will be put to use. It has been indicated that relative resource depletion is a power problem. Utilisation of poor deposits involves greater power consumption per unit of extracted raw materials. Geological exploration for most minerals, exploration that covers all geographical areas, alters the underlying style of the "know where" type of information by bringing it closer to the fundamental natural sciences.

Here we come up against one of the most important features of the science of the late 20th century. The power generating industry of the atomic age involves processes of a nuclear scale. Quantum electronics involves frequencies at which the explorer is confronted by minimal time and space intervals. Cybernetics is still a long way from the quantum scale, yet it already involves processes taking millionths of a second, with billionths of a second being the outlook for the future. As the explorer penetrates into progressively smaller time and space units, to identify the negentropy of smaller and smaller time and space cells, he gets closer to problems that then appear to be fundamental.

There is another aspect to this problem. The macroscopic laws which determine the distribution of the elements of the Mendeleyev Periodic Table in the Earth's crust are related to the laws of space chemistry, on the one hand, and to the laws of the microcosm, on the other. A detailed survey of mineral resources, including rare elements, links the points of individual deposits into lines, strips and, finally, regions, to give a geochemical picture of mineral distribution which is closely tied in with the genesis of minerals. The genesis of molecules and crystal lattices, for its part, brings the explorer to the fundamental problems of existence.

The food supply problem is one of the principal challenges of the late 20th century. It might be claimed to be the No. 1 problem, if that were not equally true of the problems of power, technology, communications and a host of others. We take this opportunity to insist (this will be discussed in detail later) on the futility of assigning any hierarchical ranks to the individual components of the essentially total transformation implied in any reference to the atomic age. Specifically, the ties between atomic power generation and the food problem must be emphasised. The food problem involves the challenge of power supply primarily because fertiliser production requires a lot of power. Land irrigation is another power-consuming undertaking. Actually, fresh water supply and delivery is also a matter of adequate power supply. An effectively increased fertiliser production in the late 20th century will lead to at least the doubling of crop harvests through the use of artificial fertilisers, farmland expansion through irrigation, and adequate fresh water supply to population centres remote from rivers.

Let us return to the problem of the relative depletion of natural resources in general and, more particularly, to the problem of utilising less concentrated deposits of a particular type of raw material, poorer soils, or leaner energy sources. Any one of these operations---extraction of leaner ores, exploitation of less concentrated or deeper coal seams, farming on poorer lands, the construction of hydroelectric power plants to utilise lower water gradients, and other instances of relative resource depletion---involves greater capital investment and operating costs. This tendency will be compensated for, and sometimes eliminated altogether, by technological progress and land reclamation. However, there is a condition---one that is closely related to the generation of information and which requires special attention. This is the cost of ``know-how'' and of "know where".

``Know where'' is not critically important where soils and hydraulic power resources are concerned. It is, however, more important where we deal with mineral ores, and still more so with coal, oil and gas. It may be suggested that up to the year 2000, and possibly until a later date, cost reduction in ``know-how'' will be paralleled by an increased unit cost of "know where". The reason for this, in the final analysis, is the fundamental difference between the two kinds of information.

Information on the performance characteristics, operating steps and conditions of a particular new machine is always based on fairly accurately known initial conditions and a well-defined programme. Given the objective of the production process and a particular range of potential raw materials, we begin to sift a range of potential approaches for the most effective one, an available selection of structural designs for the optimal one, and a set of alternative processes for the most suitable one. This is information on the future, prognosticative information on an optimal set of operating conditions, on the best structure and operational steps. The validity of this information must be demonstrated through experimentation, testing, and operational trials. In all cases, however, ``know-how'' follows this pattern: "Given certain performance characteristics, the instrument, machine tool, installation, shop or plant will operate in a particular way." This type of approach may be realised in a cybernetic machine which, given certain initial data, will process them, calculate results, compare them and identify the optimal alternative. Thus, ``know-how'' may derive from non-empirical feedback operated to produce answers in short order.
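The "cybernetic machine" pattern described above---compute the outcome of each alternative from known initial data, compare, and select the optimum---can be sketched as follows. The lamp designs and the figure of merit are invented purely for illustration.

```python
# Minimal sketch of selecting the optimal alternative by computation.
# The designs and the efficiency model are hypothetical.

def select_optimal(alternatives, evaluate):
    """Score every alternative and return (score, alternative) for the best."""
    scored = [(evaluate(a), a) for a in alternatives]
    return max(scored, key=lambda pair: pair[0])

# Hypothetical lamp designs: (name, light output per watt, relative unit cost)
designs = [
    ("incandescent", 15, 1.0),
    ("gas-filled", 20, 1.2),
    ("fluorescent", 60, 2.5),
]

# Invented figure of merit: output per watt divided by unit cost.
best_score, best = select_optimal(designs, lambda d: d[1] / d[2])
print(best[0])  # the design with the highest efficiency per unit cost
```

The point of the sketch is structural: given a well-defined programme (the figure of merit) and known initial data, the answer follows mechanically---which is precisely why ``know-how'', unlike "know where", can be produced without exhaustive empirical search.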

With "know where", the situation is different. This information offers neither a data processing programme nor any accurate set of initial conditions. The distribution of valuable minerals in the Earth's crust is the outcome of a geological and geochemical evolution of which we know very little. We have no knowledge of the initial condition of the Earth, or of the present geochemical and geological structure of its crust, not to the extent that would enable us to arrive at precise conclusions as to the location of mineral deposits. Available information on mineral deposits is not entirely empirical: much has been learnt of the laws of associated minerals, of the structure of the Earth's crust. It is probable that there will come a time when we will be able to construct models of geological and geochemical evolution, to identify mineral deposits by a system of coordinates, calculate their potential and identify the minerals stored. We will be progressively approaching that goal, and one day we will be there---but that is a very long-term prognostication.

An engineering designer, a production technician, a test engineer---anyone developing some kind of know-how, looking for optimum performance characteristics and operating conditions---may be far removed from Laplace's Supreme Reason, which knows the exact coordinates and velocities of every particle in the Universe and predicts every detail of its future condition. A man in any one of the above categories can have the knowledge of a mere thousand facts, a thousand degrees of freedom, a thousand performance characteristics comprised in the initial data. He can use that very limited amount of initial data to arrive at very accurate information on the optimal characteristics of a manufacturing process by triggering off a million logical and mathematical operations. A geologist or a geochemist desiring to obtain a particular piece of "know where" is in a very different situation. His task is to achieve a much closer approach to Laplace's Supreme Reason, with the difference that the latter knows where the atoms of the Universe are at this particular time, whereas the geologist has to find that out for himself, though not for the entire Universe but for the Earth alone. In principle, this is practicable: imagination suggests a computer which accepts all available information on research findings and pinpoints the most probable coordinates of any mineral deposit required. At present, however, "know where" is still highly empirical, based on enormous human investment and extremely costly. Accordingly, one of the technological challenges of the next several decades will be to achieve a partial substitution of ``know-how'' for "know where". An example of this substitution in this century is provided by the utilisation of thorium in preference to uranium. Another example, one that probably goes well beyond this century, is thermonuclear reactions and the utilisation of deuterium. In such cases advance to a fresh technical scheme, sometimes to a novel physical cycle, puts to work resources of which we have more knowledge and which are not threatened by relative depletion.

Thus, relative depletion in most cases results in the greater cost of "know where", of information per weight unit or kilowatt-hour. The unit cost of ``know-how'', on the other hand, is destined to go down, with a corresponding increase in the amount of ``know-how''. Cost-wise, i.e. in terms of materialised labour, information is graduating to a position where it can be related to the principal branches of a national economy. Information is becoming a part of the overall pattern of the division of labour, a part of the main structure of production.

Let us now consider this structure and its dynamics.

The two streams of information discussed above---``know-how'' and "know where"---provide no answer to the critical question: "What for?" What is all this information for? What are the new designs and processes for? What are the fresh sources of energy and new mineral deposits for? This is far from a metaphysical question. It is a vitally important economic component of the problem of the meaning of Man's life, of the meaning of Man's evolution, of civilisation and progress. The closing essay of this book will discuss this general problem in somewhat greater detail. At this point we want to answer the question "What for?" insofar as it bears on the rate and trend of scientific and technological progress, for no discussion of production optimisation or of its optimal dynamic characteristic could be undertaken unless that question is adequately answered.

In any optimal construction or manufacturing process, i.e. in any pattern of geometrical, physical, technical and economic variables, such as distances, time intervals, mass, velocities, accelerations, voltage, temperature, pressure, unit cost, etc., none of these variables is an end in itself which the constructor seeks to maximise. The maximum is reserved for some ultimate function of the sum total of these values. Corresponding to the maximum value of such a function are the optimal values of these variables, an optimal manufacturing process. The question "What for?", from which Man's purposive, productive activity is inseparable, is dealt with here as a problem in the calculus of variations, as that of finding the maximum or minimum value of the end goal. A new construction is introduced to obtain maximum efficiency for a specified input, or maximum speed (acceleration, payload capacity) per unit of power, etc. Expressed in general terms, the end goal can be identified as greater negentropy, accompanied by a corresponding increase in entropy.

Let us apply the test of the same question, "What for?", to a national economy as a whole. It seems logical to treat consumption as the end goal of productive activity and to consider consumption level as that function which the optimisation of production seeks to maximise. However, this cannot refer exclusively to food, housing or the entire field of what is termed non-productive consumption. A structure of economic production that achieves maximum satisfaction of material needs in the above category at the cost of minimum investment in energy production, metal processing and engineering industries would be able to provide a very short-lived prosperity and cannot, accordingly, be considered as an optimal structure. It follows then that one must proceed from overall consumption including the production-oriented consumption of raw materials, energy and machines. This type of consumption is merely another name for economic production, its structure is another name for the structure of economic production; thus, we are again forced to look for a variable of which the maximum value would represent an optimal ratio of investment in the various fields of national economy, i.e. an optimal structure of an economy.

The productivity of social labour may be taken as the end goal of a national economy. A production system structured to assure high productivity of labour rules out the chance of consumption falling off after a brief period of plenty: it has adequately developed power generating, metal processing, engineering and mining industries to assure a lasting economic stability under conditions of plenty. On the other hand, high productivity of labour presupposes a high level of individual consumption. It may be suggested that high productivity married to an assured level of technology may be taken as the end goal of economic production: an optimally structured economic production corresponds to the maximum productivity of social labour.

Still and all, is an unchanging productivity the end goal of economic production? Can Mankind content itself with such a goal? Even the highest productivity, unless it keeps rising, cannot satisfy Man after his condition and the effect of his productive activity have undergone a radical transformation within the lifetime of a single generation. Today, the end goal of economic production must be related not only to productivity of labour but also to the rate of acceleration of productivity, to an index of technological and economic progress. For an economic structure, this means that a new major field of activity, production of technological and geologico-geographical information, comes into play.

Even that, however, is not enough for the man of the latter half of the 20th century, for he is a contemporary of a revolution in both technology and science. And that revolution in science, significantly, has led since mid-century to an acceleration of progress. Accordingly, the indices of productivity and of its growth rate must be supplemented with the index of the growth acceleration rate. The end goal of stationary or quasi-stationary economic production, one that developed slowly, with changes unnoticeable within the lifetime of a single generation, could well be the level of social wealth. The end goal of uniformly developing economic production, or one marked by occasional bursts of accelerated growth, includes growth rate. In the atomic age that end goal is represented by the rate of acceleration of progress.

To sum up, the answer to the question concerning the end goal of economic production and its dynamism points to a certain variable which comprises the productivity of social labour, the growth rate of productivity, and the rate of acceleration of such growth. It is not my intention to specify the manner in which these components make up the end goal of economic production, otherwise termed the fundamental economic index. The aim of every change in the structure of investment, of the net national product and of consumption consists in the maximisation of that index. Specifically, that is the aim of investment in science, development, production research and engineering design programmes (``know-how'') and in prospecting ("know where").
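Although the text deliberately declines to specify how the three components combine into one fundamental economic index, the components themselves can be read off a productivity series by finite differences. The sketch below does only that; the productivity figures and the weights in the final line are arbitrary placeholders, not anything proposed by the author.

```python
# Compute the three components of the "fundamental economic index" from a
# yearly productivity series: level, growth rate (first difference) and
# acceleration of growth (second difference). All figures are invented.

def index_components(productivity):
    """Return (level, growth rate, acceleration) from the last three years."""
    level = productivity[-1]
    growth = productivity[-1] - productivity[-2]              # first difference
    accel = growth - (productivity[-2] - productivity[-3])    # second difference
    return level, growth, accel

series = [100.0, 104.0, 109.0, 115.5]   # hypothetical productivity of labour
level, growth, accel = index_components(series)
print(level, growth, accel)   # 115.5 6.5 1.5

# One arbitrary way of folding the components into a single index:
index = 1.0 * level + 5.0 * growth + 10.0 * accel
```

The weights (1.0, 5.0, 10.0) merely stand in for whatever combination an actual planning calculation would justify.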

It will be seen that the question "What for?", when related to economic production and to its dynamism, provides a programme for another stream of information: information on the effect of each particular economic action, each particular instance of redistribution of intellectual and material effort and resources. What is required is an answer to the questions: What is the effect of a particular action? To what extent does it raise the living standard? To what extent does it promote our mastery of the forces of nature? How does it contribute to an accelerated economic growth? To put it in other words: what is the effect of each action in terms of maximisation of the fundamental economic index? Thus, the two streams of scientific and technical information (``know-how'') and of information on natural resources ("know where") are supplemented by a third stream of economic information---"know what for".

The importance of this last kind of information cannot be emphasised too much. This is the very meaning of scientific and technological progress which, at this stage, consists in progressive utilisation of atomic energy. Hence the term "atomic age". Peculiar to the atomic age are new manufacturing technologies, quantum electronics, controlled organic evolution, a major shift in the conditions of labour. The atomic age may be described as the age of cybernetics: it is also an age of information. However, any definition of the atomic age must be supplemented with the fundamental definition of the end goal and effect of all of the above trends in science and technology. The main feature is the economic information that permits optimisation of the economic production structure. The scientific and technological potentials of the atomic age are so great that the problem of their optimal realisation has grown into a cardinal task confronting Mankind. For this reason, the atomic age is an age of economics.

Obtaining maximum advantage from scientific and technological trends is a task in economics and econometrics. Specifically, the task is this. Computers are used at regular intervals to identify optimal changes in economic production structure and consumption to fit the probable effects of developing trends in science and technology. The factors taken into account include the effect on the economic structure of atomic power generation, of new technological developments and electronic automation technology; the extent of changes in family budgets, the new cultural needs that will be served by such budgets, and the required rate of development of scientific research, geological exploration programmes, experimental and theoretical studies in fundamental problems. An optimal course of action is selected from a multitude of alternatives, one which assures the maximum fundamental economic index, i.e. maximum productivity, growth rate and acceleration rate. In a continuum of n dimensions wherein every point is identified by n coordinates, every such point may be regarded as describing an economic production structure: the coordinates of a point are the volumes of each of the n planned branches of economic production. By adding another dimension, time, we obtain an (n+1)-dimensional continuum of a dynamic structure. Any progression from one point to another, identified by a different set of coordinates, is an advance to a new structure, an alteration of the economic production structure. By connecting the several points corresponding to the next several years, a curve is obtained which describes the dynamic progression of the structure over these years.
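The geometric picture just described can be made concrete in a few lines: a structure is a point whose coordinates are branch volumes, and a sequence of yearly points is the curve of the structure's dynamics. The branch names and volumes below are illustrative, not data.

```python
# An economic structure as a point in n-dimensional space (n = 4 here),
# and a yearly sequence of such points as the trajectory of the structure.
# Branch names and volumes are hypothetical.

branches = ("power", "metals", "machines", "chemicals")

# One point per year: time, the (n+1)-th dimension, is the list index.
trajectory = [
    (50.0, 30.0, 40.0, 10.0),   # year 0
    (56.0, 31.0, 44.0, 13.0),   # year 1
    (63.0, 32.0, 49.0, 17.0),   # year 2
]

def structural_shift(a, b):
    """Coordinate-wise change between two structure points."""
    return tuple(y - x for x, y in zip(a, b))

for year in range(1, len(trajectory)):
    shift = structural_shift(trajectory[year - 1], trajectory[year])
    print(f"year {year - 1} -> {year}:", dict(zip(branches, shift)))
```

Each printed shift is one "advance to a new structure" in the text's sense; connecting the points gives the curve of dynamic progression.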

The trends in economic production, that lead to greater productivity, to an accelerated growth of productivity could not be identified without reference to such curves, to prognostications for the coming years. Accordingly, a dynamic optimisation of economic production, an optimisation that has for its objective not merely growth but an accelerated growth, i.e., an accelerated productivity of social labour, an optimisation that takes account of trends, that seeks both an optimal state and an optimal dynamic progress, makes prognostications an imperative.

It has already been shown, however, that prognostications based on scientific breakthroughs cannot be defined in any one way to the exclusion of all else. There is a fairly obvious relationship between the scope of scientific problems and the uncertainty of their economic effect once they are solved. A check test carried out in an industrial laboratory yields a result allowing of only one interpretation---that the specified standards have been met. Engineering development of new machines and manufacturing processes has the effect of bringing the equipment closer to the ideal cycles, yet the extent of such approximation cannot be worked out with any measure of precision in advance. Clearly, technical development projects contribute to accelerated technological progress, yet the specific results of each project are definable only in retrospect.

PHILOSOPHY OF OPTIMISM

PART TWO. SCIENCE IN THE YEAR 2000

In fundamental research, the uncertainty of results is increased. Here, an experiment, instead of arriving at some unexpected answer to the query addressed to Nature, may show the query to be nonsensical. It may generally be assumed that the effect of a research effort increases in uncertainty where it is more profound and more dynamic and where it affects a component of labour productivity of a higher order---speed, acceleration, rate of acceleration. It is precisely these components---productivity of labour, growth and rate of acceleration of productivity--- that go into the making of the fundamental economic index.

Hence the inevitability of a multi-stage optimisation of the economic structure. Every prognostication derives from a local statement of fact. Yet, by reason of the uncertainty of the effect of various trends in science and technology, after the passage of a period of time the need arises to redefine these trends, to make a new prognostication as to their effects, to make a fresh comparison of diverse alternatives, and to select the optimal course of action. Modern science, capable of assuring a continuous growth and even an uninterrupted rate of acceleration for labour productivity, and requiring that derivatives of labour productivity be included in the end goal of economic production, raises the need for multi-stage (and virtually uninterrupted) optimisation.
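The multi-stage cycle described here, in which one prognosticates, acts, observes the uncertain outcome and revises the prognostication, can be sketched as a rolling loop. Everything below, including the modelling of uncertainty as random noise and the simple revision rule, is an illustrative assumption and not a description of any actual planning method.

```python
import random

# Rolling multi-stage optimisation: at each stage a prognostication
# is made, a course of action chosen, and after observing the
# (uncertain) outcome the prognostication is revised.
# All numbers are invented.
random.seed(1)

forecast_growth = 0.05           # initial prognostication of growth
history = []
for year in range(5):
    # choose a plan on the basis of the current prognostication
    planned = 1.0 + forecast_growth
    # the realised effect of science and technology is uncertain
    realised = planned + random.gauss(0.0, 0.01)
    # revise the prognostication in the light of the outcome
    forecast_growth += 0.5 * (realised - planned)
    history.append(realised)
```

The point of the sketch is only the shape of the loop: because each stage's effect is uncertain, optimisation must be repeated, which is what makes it "virtually uninterrupted".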

It will be noted that every fresh instance of adjustment in a prognostication or in the optimal dynamics of economic production will involve a certain set of changes in energy production and manufacturing technologies, transportation, communications, information, science, consumption and culture, changes that will be timed to meet a specified common deadline. Modern optimisation of production is a package of optimisation measures embracing large-scale, long-term sets of foreseeable changes, closely interrelated and timed to meet a more or less common pre-selected deadline. Today, a set of such changes is defined as the practical embodiment of non-classical science. As was noted earlier, it includes atomic energy production as the dominant component of the power balance, and some other prognostications.

To conclude, a few more comments on why information is destined to become one of the main fields of national economy. The principal reason for this development is to be found in the nature of non-classical science, in its inherent tendency to search for and establish harmony, negentropy and order in the microcosm. This feature of non-classical science has been discussed earlier. Classical statistics disregards individual microscopic bodies and events. Accordingly, what establishes order in energy production and manufacturing technology is, within the classical framework, essentially macroscopic. Let us look at some arbitrarily selected series of production processes each of which determines the nature of the immediately following process. To take the example of coal, it is first extracted and raised to ground level, then loaded in railroad cars and hauled to a power plant. Here, the negentropy of the chemical energy concentrated in the coal is transformed into the negentropy of a temperature gradient between the boiler and the condenser at the power plant and, subsequently, into a difference between electric voltages. As power is transmitted via a high-tension line, so is the negentropy which determines the manufacturing processes in the consumer industries. The law of the preservation of energy guarantees only the equivalence between the initial and terminal values of energy. The kind of processes occurring at every stage of the passage of energy from the coal mine to the consumer is determined by negentropy, the ordered regularity, the gradients, rather than by the energy itself. Robert Emden, in his provocative article "Why Do We Have Winter Heating?", wrote: "As a student, I read with advantage a small book by F. Wald entitled: The Mistress of the World and Her Shadow. These meant energy and entropy. In the course of advancing knowledge the two seem to me to have exchanged places. 
In the huge manufactory of natural processes, the principle of entropy occupies the position of manager, for it dictates the manner and method of the whole business, whilst the principle of energy merely does the bookkeeping, balancing credits and debits."*

* R. Emden, "Why Do We Have Winter Heating?", Nature, Vol. 141, No. 3577, May 21, 1938, p. 908.
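The "temperature gradient between the boiler and the condenser" mentioned above sets a hard ceiling on how much of the coal's energy can be passed on as ordered mechanical or electrical work, the Carnot bound. The temperatures and heat quantity below are merely plausible round numbers chosen for illustration.

```python
# Energy is conserved along the whole chain, but the usable, ordered
# part is limited by the boiler/condenser temperature gradient.
T_boiler = 800.0      # K, assumed boiler temperature
T_condenser = 300.0   # K, assumed condenser temperature

# Carnot limit: the largest fraction of heat convertible to work
carnot_limit = 1.0 - T_condenser / T_boiler   # = 0.625

Q = 1.0e6             # J of heat from burning coal (illustrative)
max_work = Q * carnot_limit
# The rest must be discharged at the condenser: the energy books
# balance exactly, but the gradient (negentropy) is what was spent.
rejected = Q - max_work
```

This is Emden's point in miniature: the energy account always balances; it is the gradient that dictates what can actually be done.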


The picture is particularly clear in a manufactory of man-made processes, i.e. in the industrial field. The course of events is controlled by the law of entropy, which channels the heat from the furnace to the condenser and transforms it into mechanical work, while entropy in a wider sense controls other processes, too. The specific operation of entropy depends on the initial gradients, i.e., on negentropy. Negentropy passes from process to process, thus bringing order to economic production.

Classical physics involves macroscopic negentropy. In classical physical applications, the nature of productive processes is determined by the macroscopic ordering of being, by macroscopic concentrations of mass and energy. Accordingly, all forms of classical negentropy involve the transmission of large amounts of energy or the handling of great masses. The productive processes in the above example of coal are tied into a whole by the raising to ground level and the carriage by rail of large quantities of coal, and by the transmission of large amounts of power.

An ordered series of productive processes by-passing the transmission of large amounts of power and the handling of large quantities of mass would be made possible by the use of information which in ``classical'' forms of economic production always involves a human intermediary. In ``non-classical'' production the handling of information could be done by automatic devices, that is by inclusion of processes serving to carry very small amounts of energy and very large amounts of negentropy in the chain of interconnected production processes. An illustration is provided by production control through the use of a relay connecting high-power circuits, servomotors, etc. More complex instructions require the transmission of minimum amounts of energy and mass and of maximum amounts of information.

In this, as in so many other instances, the model for a technical concept is provided by the human brain. G. Thomson refers to the case of a man who arranges a pack of playing cards in a certain order, thereby radically changing the entropy of the pack of 52 cards from complete irregularity to fully ordered regularity. The energy expended by the man's brain for this task is less than the amount of energy released by the combustion of a single molecule of paraffin---6.4×10^-12 erg.*
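Thomson's card example can be checked directly: ordering a shuffled pack reduces its configurational entropy by k·ln 52!, and at body temperature the corresponding minimum energy T·ΔS is indeed of the same order as the energy figure quoted above. Only the physical constants used below are standard; the comparison itself is the book's.

```python
import math

k = 1.380649e-23          # Boltzmann constant, J/K

# Entropy of a fully shuffled pack: k * ln(52!)
ln_arrangements = math.log(math.factorial(52))   # about 156.4
delta_S = k * ln_arrangements                    # about 2.2e-21 J/K

# Minimum energy cost of the ordering at body temperature, T * dS
T = 310.0                                        # K
min_energy = T * delta_S                         # about 6.7e-19 J
min_energy_erg = min_energy * 1.0e7              # about 6.7e-12 erg
```

The numerical closeness of this thermodynamic minimum to the quoted figure illustrates the text's ratio of very small energy to very large negentropy.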

An increasing use of processes approaching this ratio between energy and negentropy, though as yet in a very limited degree, will constitute an important trend in late 20th-century progress. I refer to ``actuators'', transmissions of very small amounts of energy to actuate events involving large amounts of energy. What happens here is that a process which does not essentially consist in the transmission of energy per se (human speech, radio signals, facsimile image transmission, etc.) triggers off full-scale energy processes. The pattern of the process is embodied or ``encoded'' in a different scale of dimensions, in phenomena of a different physical nature, in radically different space and time units, to become the actuator of specified programmed events.

For such large-negentropy processes involving the transmission of small amounts of energy to be able to transmit information in very complex schemes, use is made of the dynamic instability of electrons in vacuum and in crystal lattices; in other words, active interference with, and investigation into, natural events is practised on an infinitesimal time and space scale. This is precisely the approach that substituted the transmission of information for the transmission of energy and the handling of mass. The term ``information'' as used today applies not merely to the behaviour of macroscopic bodies, but also to the behaviour of individual particles. The reader will remember from the previous discussion that the quantum uncertainty of this behaviour has little to do with the classical uncertainty of the behaviour of individual particles in statistical sets.

The ordering of both the macrocosm and the microcosm is becoming the immediate and obvious aim of economic production. This concept clears up a very common misunderstanding.

* G. Thomson, The Foreseeable Future, Moscow, 1958, p. 48.

One of the principal prognostications discussed in this book is a continuous growth and uninterrupted acceleration of the rate of growth in the productivity of social labour. How long can this acceleration go on? Assuming that the construction of buildings, the production of cars, food, clothing, TV sets, and so on, grows at an accelerating rate for a sufficiently long time, it will be seen that at some point the limited area of the Earth's surface will not be able to accommodate the swelling stream of goods, with the result that the Universe will witness the spectacle of a river of all sorts of man-made products being discharged at a continuously faster rate into space---and this situation can be prevented only by the law of the preservation of matter. Ridiculous as this picture might be, it reflects a serious obstacle to the prognostication for a dynamic economy. Some economists believe that economic growth might be followed by a period of zero growth. Insofar as the post-atomic period, the post-atomic civilisation, is concerned, the subsequent essays of this book suggest a different prognostication. The atomic age is characterised by uninterrupted acceleration, the post-atomic age---by a continuous growth in the rate of acceleration, a non-zero third component of labour productivity.
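The distinction drawn here between uninterrupted acceleration and a growing rate of acceleration is simply the difference between a non-zero second and a non-zero third derivative of productivity. A minimal numerical illustration, with invented coefficients:

```python
# Productivity as a function of time (years); coefficients invented.
# "Atomic age": constant acceleration -> non-zero second derivative,
# zero third derivative.
def atomic(t):
    return 1.0 + 0.05 * t + 0.01 * t**2

# "Post-atomic age": the acceleration itself grows -> non-zero third
# derivative, the "non-zero third component" of the text.
def post_atomic(t):
    return 1.0 + 0.05 * t + 0.01 * t**2 + 0.002 * t**3

# Finite-difference estimate of the third derivative (exact for
# polynomials of degree <= 3)
def third_diff(f, t, h=1.0):
    return (f(t + 2*h) - 3*f(t + h) + 3*f(t) - f(t - h)) / h**3
```

Here `third_diff(atomic, t)` vanishes for any t, while `third_diff(post_atomic, t)` gives the constant 6 × 0.002 = 0.012, the signature of a growing rate of acceleration.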

Referring to the shadow of a universal stream of man-made products, it will be remembered that the developing economic production is the production of negentropy, an ordering of the world, both macroscopic and microscopic. The number of ordered particles constituting the hydrosphere, lithosphere, atmosphere, etc.---the nearest fields of Man's ordering activity---is virtually infinite. Accordingly, the number of initial conditions determining the structure and dynamism of the noozones, primarily a rationally reorganised hydrosphere and lithosphere, that is, the noosphere, is also infinite. We have seen that this concept, first introduced by V. I. Vernadsky, is being generalised: today, rational transformation covers areas far removed from the Earth, the noosphere has gone far beyond this planet, and ordered complexes of particles and waves, including radio waves which go out into space, extend over astronomical distances. On the other hand, the negentropic ordering of life which conquers more and more of the concentric layers surrounding the Earth also extends to an increasingly larger number of spectral zones and to new kinds of discrete fundamental particles. The growth of negentropy is limited only by the astronomical time scale; in other words, it may be virtually infinite even if accelerated at a constant or an increasing rate. The growth of negentropy embraces the microcosm: chromosomes ordered by "spot sight" radiation, coherent beams in the optical, ultraviolet and X-ray bands indicate that progress is inexhaustible. Thus, we are back to the previously discussed link between optimism and the infinite potentials of knowledge and transformation of the world.

It will be noted that the modern conception of information gives one more dimension to the concept of noosphere, which was not to be found in the old meaning of the latter term. The old concept was reducible to the statement: the structure of a certain portion of the Earth's surface and subterranean depth represents the results of a specified number of years of Man's rational activity. Today, the term noosphere is also used to convey a prognosticative meaning: information on the eventual noosphere, on foreseeable changes in the structure, surface and subterranean depths of the Earth to be worked by projected economic undertakings has become a significant factor for every aspect of modern life. These changes are destined to create genuine noozones, zones of reason--- atmospheric, hydrospheric and lithospheric structures which are to be precalculated for optimum human benefit. Such projections are becoming a vital element of "know where", and, consequently, of ``know-how''.

DE RERUM NATURA

One of the characteristic features of our time is the narrowing gap between the most general epistemological tasks of science and applied problems: the search for a general answer to the question of the nature of things acquires a direct bearing on economic development and on its acceleration. The range of practically necessary information which creates noozones and contributes to the greater negentropic ordering of the world increasingly extends to information on progressively more general universal laws. Along with this type of information, the ideals of science come to be included in the factors promoting progress.

Let us discuss science in purely epistemological terms, as a process of explaining observable phenomena, without reference to its practical aims. This word, ``explaining'', is subject to changes in intended meaning: it is never reduced to a phenomenological reference to other, proximate events, nor does it ever actually pinpoint the ultimate cause of phenomena. Any changes in the intended meaning of the word ``explaining'' are related to an altered scientific ideal. An ``ideal'' is an immanent impulse to the development of science, it is what science seeks to achieve. A special essay at the end of this book will be devoted to the problem of purpose in science, to science as a purposeful activity, to the relation between science and a purposive transformation of the world. At this point, we are concerned with inner immanent impulses, with scientific ideals.

Each epoch in science is characterised by certain ideals for a physical account of the world. The Einsteinian "inner perfection" of physical theories is measured in terms of the relation between such theories and the fundamental principles of such an ideal account of the world. A prognostication for the year 2000 must answer this question: What is the ideal of scientific explanation that we shall seek to achieve during the closing decades of this century? There is very little that can be postulated by way of the foreseeable results of fundamental studies, yet the trend of these studies is more readily discernible: it is determined by the modern ideal of science which is becoming increasingly clearer.

The modern ideal of science differs from the classical one both in content and in its apparent dynamism. Modern science sees the ideal scientific account of the world it seeks to achieve as something that changes within the lifetime of a single generation. The modern ideal of a scientific explanation may be identified by comparison with other ideal schemes which determined the style and trend of scientific thought in the past.

The science of the past was always in search of objects whose existence and behaviour would provide the ultimate explanation of natural processes. Thus, Greek natural philosophy produced two conceptions of the world: in the first, the world was held to be comparable to water, and in the second, to sand. The first conception, based on the idea of continuity, held that the underlying cause of things consisted in changes, deformations and motion of continuous matter. The various portions of matter were either believed to possess different qualities, or all matter was held homogeneous. The second, atomistic conception generally held that the bricks of the Universe were composed of discrete portions of homogeneous matter surrounded by empty space or, in Democritus' words, by ``non-being''.

The field was first held by continuity and qualitative diversity: the Aristotelian elements---portions of continuous matter exhibiting different qualities---were seen as combining to form the entire diversity of the world around us. The scientific ideal consisted in reducing that diversity to combinations of four basic elements. This conception survived until the 17th century. A different view of matter is offered by the atomistic concept. The ideal of this concept is to reduce all phenomena to a spatial arrangement of particles which are devoid of any qualitative characteristics. This conception of matter and the corresponding ideal of a scientific account of the world have had an enormous effect on world culture, on the needs of Man as he learns about and conquers the forces of Nature, and even on prognostications---conceptions of the future of science down to prognostications of modern times. The extant fragments from Democritus and Epicurus, and that masterpiece of poetry and scientific thought, Lucretius' On the Nature of Things (De rerum natura), have come down through the ages not only as a monument to thought that is looking for the ultimate principles of being, but also as an impulse accelerating this search. Modern thought is looking for fresh fundamental principles of existence fully aware that they are not ultimate.


What this means is that the poem by Lucretius, and all it stands for, will, for all times, retain its impulse-giving, accelerating value for human thought which seeks to get down to rerum natura, the nature of things.

The ideals of the ancient atomistic conception were again held up by 17th century science to provide the basis for the classical concept of substance and for the classical ideal of science. Of course, Descartes' answer (substance is identical with space) eliminated empty space, which had been present in the ancient atomistic conception, from the picture of the world. This, however, produced little change: if atoms exhibit no qualities they are hard to distinguish from the surrounding empty space. The problem involved is the same problem that defeated Descartes in his attempt to establish a boundary line between one particular body and other surrounding bodies. Attempts were made by Leibnitz and Newton to overcome this problem. Leibnitz ascribed to bodies certain dynamic properties which distinguish the latter from space: bodies possess inertia, i.e., they resist forces seeking to upset them in their current states, they are capable of affecting the state of other bodies, while portions of space have no such capability. That is the difference between substance divided into the discrete bodies of homogeneous matter and space.

Newton used the concept of force to evolve a concept of interacting bodies. All natural processes are reducible to rearrangement of bodies and are accounted for by their interaction. To identify the underlying basis of all processes becomes the classical ideal of science, what Einstein termed Newton's programme.

Descartes, Leibnitz and Newton used the interaction of bodies to explain their behaviour, locations, impulses and accelerations. Insofar as the existence of bodies was concerned, it was put outside the pale of physical science and accounted for by a metaphysical process. Spinoza was the only thinker of the 17th century who sought to establish the existence of bodies within the framework of physics. Spinoza saw Nature as the cause of its own existence (causa sui), as something interacting with itself and requiring no outside cause of its existence. This conception failed to be embodied in classical physics. We are going to be concerned with the idea of the Universe as the raison d'etre of each of its component particles in a subsequent discussion of modern non-classical concepts. At this point it will be noted only that the Spinozian concept is in the mainstream of physical thought, even though it had to wait two hundred years to be embodied in physics, i.e., for the emergence of a theory which connects---at least hypothetically---the conception of the existence of bodies with experimental observation, and claims to account for experimental data that cannot be explained otherwise.

The account in classical science of the existence of bodies is that more elementary bodies join to form a single body. The properties of the latter are explained in terms of the structure, composition, arrangement, interaction and motions of its component elements. This account, however, consists in a reference from one stage of the structural hierarchy of the world to another. The line of reasoning terminates in an answer that puts the existence of elementary, indivisible particles outside the framework of a physical account.

In the classical account any changes in the location of bodies, any displacement or acceleration make no sense without reference to their properties of substance. Yet, here is the big question: What produced these properties? Classical science offers no answer to that question.

Leibnitz and Newton, as has been noted earlier, ascribed to matter a property that distinguished it from space---the ability of portions of matter to interact with one another. Boscovich considered particles to be non-spatial centres of forces. Interaction permitted mass and charge to be determined physically, experimentally. Faraday ascribed properties of substance to interaction: force was represented as a flexible tube and particles as the ends of such tubes, or special points in a field of forces. Maxwell's theory freed the field from bodies altogether: all electromagnetic forces---closed vortex lines in an electromagnetic field---were shown to be present and to move in a space free from conventional bodies having a mass and carrying a charge.

And yet, none of these classical answers to the problem of substance, of the difference between matter and space, of physical existence as opposed to behaviour, went beyond the limits of behaviour or effectively solved the riddle of existence. The interaction between a specific particle and other particles was expressed in terms of a particular trajectory, velocity and acceleration of the particle depending on the field. Today, we would express the idea by saying that the world lines of interacting particles are deflected, in one sense or another, by the interaction. Yet here again, we are up against the same chain problem which emerges at every attempt to construct a geometrical picture of the world: How does a world line differ from a geometrical image? What fills it? What are the non-geometrical events occurring on a world line?

Neither classical science proper, nor the relativity theory afforded an answer to these questions, yet the presence of the problem had long been realised. Actually, the spontaneous deviation of particles from their macroscopically predetermined courses---the clinamen of Epicurus and Lucretius---was introduced to guarantee the genuine existence of the atom. The Epicureans went beyond that: they conceived the idea of not merely "a mutiny" of the atom but also of the alternate destruction and resurrection of a particle along its path of travel. Alexander of Aphrodisias wrote in the early 3rd century A.D. that the Epicureans believed that "there is no motion, there is only the result of motion", i.e., that by disappearing and then reappearing in other cells of discrete space, a particle, as it were, travels on.

Why do we go two thousand odd years back to Epicurus in a book on the year 2000? The reason is to be found in the revolutionary nature of the prognostication for that year in fundamental knowledge. The more radical the foreseeable advance to new concepts, the more radical is the related retrospective re-evaluation of values, the deeper is the layer of the conventional ideas of the past revised by present-day thought. In so doing, it alters notions that seemed inviolable for thousands of years and discovers moot points, contradictions and questions addressed by the past to the future.

So, what is the novel element today that permits us to re-evaluate established notions? What is the content of the radical remaking of the style of fundamental research and of the new scientific principles which carry the embryo of the new, post-atomic civilisation? The starting point for the new revolution in science is the theory of elementary particles. Today, this is not a surprising statement. What appears as elementary particles in any particular epoch is the basic link in that time's conception of the world. For more than two thousand years, elementary particles were called atoms and were visualised as consisting of homogeneous matter devoid of quality. With time, atoms broke up into protons, neutrons and electrons, all differing in mass, charge and life-span. These were supplemented by new types of particles today numbering several dozens, or maybe hundreds. The next stage in the theory of elementary particles will be the systematisation of known and of yet-to-be-discovered particles. The future system will probably differ fundamentally from Mendeleyev's Periodic Table. The physical deciphering of the Periodic Table is classically structural: atoms differ in the number and arrangement of subatomic particles. It is not likely that particles called elementary today will be found to be structures composed of smaller units. More likely, though, their differences will come to be seen as an expression of bonds, varying in nature and intensity, with other particles and, perhaps, with the Universe generally.

In the middle of this century, investigation into cosmic rays and fluxes of high-energy particles generated in nuclear accelerators led to considerably expanded knowledge of elementary particles. What happened was not merely that the known elementary particles increased in number: this increase has posed some essentially basic questions before science. These questions are far from having been adequately answered, and it is with a very mixed feeling that a modern physicist looks upon the swelling of the table of elementary particles. What he is confronted with is, on the one hand, a virtually uninterrupted extension of his conception of the bricks of the Universe, i.e., an expanding knowledge of fundamentals. Once, discoveries were turning points heralding new epochs or at least new long periods, e.g., the discovery of the earliest fundamental particles---the electron, proton and photon; now they come in a steady stream. This development is partly encouraging, yet at the same time---and that is the second aspect of the problem---frightening. The reason for that feeling is that as various types of fundamental particles increase in number, there is a proportionate recession both of the classical ideal, an account of the world based on the motion of particles of homogeneous matter, and of the account of the Universe in terms of the motion of its elementary ``bricks''.

The case, however, involves a still further aspect, a third component of the sensation induced by the continued arrival of newly discovered elementary particles. Parenthetically I would like to say that these "components of sensation" are actually prognostications for the further development of the theory of elementary particles. This third component is represented by the lurking suspicion that the ``brick'' image is not valid, that the Universe is not made of ``bricks''.

It is the objective of this essay to set forth some tentative hypotheses illustrating this component, this prognostication for future scientific evolution. We are concerned here less with physical than with historical and physical hypotheses---our expectations for the emergence and evolution of physical concepts rather than the probable structure of the Universe. Clearly, historical and physical concepts provide some notion of the actual structure of the Universe; yet the above reservation has a certain validity: a particular historical and physical concept may be pretty arbitrary, yet reflect an actual trend in scientific thought. What we are trying to establish is whether a further evolution of scientific research is possible, which would not merely multiply or reduce the number of "bricks of the Universe" but discard that notion altogether.

The bricks of the Universe postulated by classical science, from its ancient atomistic prototypes to the classic constructions found in modern science, were just such basic notions. The atoms of Democritus and their later modifications developed by Gassendi and other thinkers of the new age, the non-permeable bodies of Cartesian physics, the dynamic centres of Boscovich, the charges present in an electromagnetic field, the fundamental particles (apart from their annihilation and generation)---all these conceptions were calculated to meet the problem of the behaviour of the elements of being rather than of their existence.

There are reasons to believe that the overall trend in further scientific evolution is going to be an emerging effort to discover an explanation for the existence of empirically observable types of fundamental particles, an explanation of why they exhibit their particular masses and charges rather than some other masses and charges, i.e., properties distinguishing one type of particles from another.

The problem of existence of particles should be approached by taking cognisance, first and foremost, of the mass and charge exhibited by a particle of each class: if these properties were to be ignored, a particle would be indistinguishable from the point it occupies at a given moment in time. Any changes in charge and mass are transmutations of the particle from one class to another. The transformation of electron and positron pairs into photons and vice versa do not consist in transition from one world point to another----this event falls outside of the conception of moving identical particles. Transmutations are foreign to the style of classical science which visualised Nature in terms of time and space models of the behaviour of indestructible particles. Today, science is revalidating the Epicurean concept set forth by Alexander of Aphrodisias: there is no motion in very small areas but only the "result of motion", only displacement caused by the annihilation and re-birth of a particle of a given class. Clearly, ``re-validation'' as used here is not to be taken to mean mere repetition: science needs neither to modernise the old, nor to make archaic the new. In revalidating old notions, science does not go back to old answers, but rather it picks up old queries, meeting them with new answers.


The fresh opportunity permitting an answer to a question posed two thousand odd years ago is provided by the observation and manipulation of strong interactions.

One of the concepts of modern physics consists in a hierarchy of increasingly stronger interactions. For our purposes, we can content ourselves with just two links in that hierarchy, which opens with ultra-weak gravitational interaction and goes on through weak and electromagnetic interactions to terminate in strong interaction. Electromagnetic interaction is the interaction of all electrically charged particles with the electromagnetic field, i.e., with photons. The intensity of the field is characterised by the number 1/137, whose nature has given rise to a multitude of conflicting notions, but has not yet been clearly established. An initial, and somewhat vague, understanding of the number can be gained by looking upon it as a measure of the ``non-Cartesian'' effects of interactions, i.e., effects which are not reducible to changes in the behaviour of identical particles. The greater the constant indicative of the intensity of an interaction, the shorter the duration of the interaction, and the greater the probability of such interaction causing transformation of the particle into one of a different class, rather than producing a change in its behaviour. The constant describing electromagnetic interaction is very small. Accordingly, electromagnetic interaction leads to transmutation in fairly rare instances compared with strong interaction; the transmutation occurs only if the interacting particles are of relatively high energy levels. Strong interaction is characterised by a much higher constant and occurs over a time period of the order of 10^-23 sec., i.e., millions of times faster than electromagnetic interaction, which has a time span of 10^-19 to 10^-17 sec., and results in transmutational events.

Transmutational events generally occur when particles have very high energies, i.e., when they travel at high velocities. Accordingly, investigation into the nature of transmutations requires that the interacting particles be accelerated to high velocities. Transmutational events may also occur in electromagnetic interaction: where the photons have very high energy, in excess of the combined rest energy of an electron and a positron, the photons will be transmuted into electron and positron pairs. In this case, the relations inherent in the relativity theory lead to more than the need to take account of a certain change in the particle mass dependent on its velocity. Here, the mass corresponding to kinetic energy is of the same or even of a higher order than the rest mass of the new particles, and is transmuted into rest mass. New particles are created where the energy of the available particles is greater than the rest energy of the particles to be generated.
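The threshold arithmetic behind this can be checked directly. The sketch below is a minimal illustration (only the electron rest energy of 0.511 MeV is assumed; it is not drawn from the text): a photon can yield an electron-positron pair only if its energy exceeds the pair's combined rest energy.

```python
# Minimal check of the pair-production threshold: a photon can turn
# into an electron-positron pair only if its energy exceeds the
# combined rest energy of the two particles.
ELECTRON_REST_ENERGY_MEV = 0.511  # electron (and positron) rest energy, MeV

def can_pair_produce(photon_energy_mev: float) -> bool:
    """True if the photon energy exceeds 2 * m_e c^2 (about 1.022 MeV)."""
    return photon_energy_mev > 2 * ELECTRON_REST_ENERGY_MEV

print(can_pair_produce(1.0))   # below threshold
print(can_pair_produce(2.0))   # above threshold
```

Any surplus above the threshold appears as kinetic energy of the pair, which is the "transmutation of kinetic energy into rest mass" described above.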

Events of this kind go beyond the framework of the relativity theory as an exposition of the world lines of immutable bodies. These events may rightly be termed not merely relativistic but ultra-relativistic. The advance from a relativistic to an ultra-relativistic world is an advance from the behaviour of identical particles of a particular class to the existence of a particle of a given class, its creation or annihilation, i.e., to the transmutation of a particle of a different class into one of the class in question, or vice versa.

That is a revolutionary development. If the existence of a fundamental particle of a certain class were to be explained by the arrangement of some sub-particles, the case could be made for the existence of another link in the classical atomistic pattern. The existence of a molecule is explained by an arrangement of atoms, that of an atom---by the arrangement of elementary particles, and now we would explain the existence of a particle by an arrangement of sub-particles. These are all structural explanations reducing the existence of a galaxy, a planetary system, a star, a molecule, or an atom to their respective inner structures. A structure may be static in the classical sense (a complex of spatial distances between positions of bodies in a system, said positions being precisely defined for every moment of time); it may be relativistic (a complex of four-dimensional intervals); dynamic (a complex of forces reciprocally exerted by elements on each other); or quantum (the distances between the elements are not open to precise determination, the precision in the determination of distances decreasing with increased precision in the determination of particle impulses). However, where we deal with an elementary particle it cannot be said that its existence is reducible to its inner structure.

It might be suggested that it could be accounted for by an appeal to a combination of interacting particles characterised by a larger mass. In 1964, Gell-Mann and, simultaneously, Zweig suggested the existence of certain particles of a very large mass, which were termed quarks after the fictitious beings in J. Joyce's Finnegans Wake. Each particle involved in a strong interaction (and that applies to the majority of particles) consists of three quarks. How can the mass of such a particle be much smaller than that of its constituent quarks? The answer is to be found in the mass defect referred to above: the formation of a particle is accompanied by the release from the quarks of a very large amount of energy, and such a composite particle accordingly has a much smaller mass than its constituent quarks. If the quark hypothesis corresponds to reality, quarks should be found in a free state, although such instances will be very rare. Most of them should be "burnt out", i.e., they should have combined to form three-quark units---particles having varied but always smaller-than-quark masses.
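The mass-defect arithmetic described here can be sketched numerically. The figures below are purely illustrative assumptions (the text quotes no quark mass or binding energy): a bound state of three heavy quarks has a mass equal to the sum of the quark masses minus the energy released on binding, divided by c^2.

```python
# Hedged sketch of the mass-defect arithmetic: a composite of three
# very heavy quarks can be far lighter than any one quark if binding
# releases nearly all of the combined rest energy.  Both numbers below
# are illustrative, not measured values.
QUARK_MASS = 10.0        # hypothetical quark mass, GeV/c^2 (illustrative)
BINDING_ENERGY = 29.06   # energy released on binding, GeV (illustrative)

composite_mass = 3 * QUARK_MASS - BINDING_ENERGY  # GeV/c^2
print(composite_mass)  # a proton-like mass, far below one quark's mass
```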

With the quark hypothesis, physics was launched upon a fresh journey: it now constructs systems using elements that are larger, and not smaller, than the resultant systems. Actually, physics started along this road much earlier: in 1949 Fermi and Yang suggested that a nucleon and an anti-nucleon could form a particle of a mass considerably smaller than that of either of them. How far can this hypothesis be extended? M. A. Markov studied the problem of the limiting boundary of such construction, arriving at the idea of a maximum-mass fundamental particle---a maximon. This would be a particle of giant size in terms of the microcosm. M. A. Markov suggests that maximons may have been compressed into known particles of much smaller mass by an event which accounts for certain astronomical phenomena. This is the gravitational collapse, a phenomenon to be discussed later in connection with the prospects for space exploration. This event occurs in a region where matter is compressed to a density many times greater than the density of atomic nuclei. Gravitational collapse may trigger off a fast, virtually instantaneous, process of further compression of matter under the action of the forces of mutual attraction of particles.

Gravitational collapse results in an enormous mass defect, an enormous difference between the sum total of the masses of the maximons and the mass of the particle into which they are compressed. For gravitational collapse to be triggered off, however, there must exist a density of matter which is not found on the Earth. Such conditions may have existed when the Universe of today was compressed into a fairly small nucleus---the starting point of the expansion which began at one time and is still going on. Thus, the point of departure of the growth of the Universe coincides with that of the genesis of today's particles.

The above brief remarks on certain hypotheses in modern physics were made with a purpose. The new stage in the development of theoretical physics whose beginning we are witnessing exercises an influence on scientific progress---it accelerates the rate of the advance and, in the last analysis, increases the rate of acceleration of the productivity of social labour---not merely by virtue of its positive concepts, but also by the style of scientific thinking. This relationship would be unthinkable without some sort of psychological evolution, without a greater plasticity of cognition. At this point, it cannot be conclusively claimed whether or not quarks or maximons will be proved experimentally. But whatever their destiny, these hypotheses are already performing an important function for civilisation: they render the thinking of men (and not professional scientists alone) about Nature more flexible, thus accelerating their understanding of science and contributing to the greater effect of its dynamic style on modern culture. It is precisely this feature that permits physical constructions, which are neither unambiguous nor claim to be so, to be placed outside an esoteric framework.


The foregoing discussion is intended to prepare the reader to accept new and equally variable (possibly more variable) hypothetical constructions. These constructions are designed to provide a relatively graphic demonstration of the prognostications which have become so common in physics, representative as they are of the new ideal of science. This ideal is comparable to the classical ideal---the reduction of rerum natura to the motion of identical indestructible particles---in its generality, providing a unified basic conception to embrace all Nature. The deeper the theory of fundamental particles goes, the stronger the conviction of the need for a new conception of rerum natura.

In one of Voltaire's writings, Descartes tells God that he can create a new world identical to the one created by God provided he (Descartes) is given the requisite matter and told the law of its motion. In fact, Descartes was not alone in that idea: classical science as a whole was willing to give an account of the Universe provided the existence of matter and the laws of the motion of its discrete particles were given as the points of initial departure for the investigation. Modern science provides initial data of a similar nature: these are empirically established constants. Ever since physics was quantised, since science has both observed and measured physical events, the ideal of scientific explanation has been the achievement of a minimum of purely empirical constants. At the close of the 16th century and in the early 17th, Kepler sought to deduce the average distances between the planets from purely geometrical factors. Kepler believed that by describing a regular octahedron about the sphere of the planet Mercury and by subsequently embracing that octahedron in a spherical surface, the sphere of Venus could be arrived at; that the sphere of the Earth could be found by describing a regular icosahedron about that sphere; and that by this procedure, i.e., by describing regular polyhedrons, the spheres of all the planets could be obtained.
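Kepler's construction can be put to a quick numerical test. The sketch below compares the ratio of circumscribed to inscribed sphere radii of a regular octahedron (exactly the square root of 3) with the actual ratio of the orbits of Venus and Mercury; the modern semi-major axes used are assumptions supplied here, not figures from the text.

```python
# For a regular octahedron of edge a: circumradius R = a/sqrt(2),
# inradius r = a/sqrt(6), so R/r = sqrt(3).  Kepler hoped this ratio
# would reproduce the ratio of Venus's orbit to Mercury's.
import math

octahedron_ratio = math.sqrt(3)        # R/r for a regular octahedron
venus_au, mercury_au = 0.723, 0.387    # modern semi-major axes, AU
orbit_ratio = venus_au / mercury_au

print(round(octahedron_ratio, 3))  # 1.732
print(round(orbit_ratio, 3))       # 1.868
```

The two ratios agree only roughly, which is why the attempt is called fantastic in the paragraph that follows.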

That was clearly a fantastic attempt, yet the question "Why is the world the way it is rather than something else?" remained valid. At every stage of its development, physics, both classical and non-classical, sought to eliminate purely empirical constants, to relate them to other factors, to give a causal account of every effect, to bring order to the picture of the world in which every constant would follow from the general conception of the Universe. In his autobiography of 1949, Einstein referred to that trend, to a fundamental physical theory which would be free from purely empirical constants and in which all constants would follow from a unified scheme providing a single answer to the problem of universal harmony.* Speaking to his assistant, Straus, Einstein once posed the question: "Could God have made the world different?", i.e., could the causal harmony of the Universe be expressed by some other physical constants?**

Assuming that this trend applies equally to all development of physics, the question is: What constants are now the object of the most intensive search for a causal account of the Universe?

Insofar as the behaviour of fundamental particles is concerned, modern science has achieved a fairly well-ordered conception. Two constants---the velocity of light and the Planck constant, the quantum of action---account for a great variety of processes. However, what could be termed the constants of the being of fundamental particles, i.e., the masses and charges which distinguish the various classes of particles, are increasing rather than declining in number. The specific and nearest step in our advance toward the Einsteinian ideal consists in the deduction of the spectrum of particle masses and charges from some kind of general postulates, in the conversion of the values of particle masses and charges from an empirical to a theoretically established basis.

This is precisely the task of a still non-existent unified theory of fundamental particles and of the advance from a theory of particle behaviour to a theory of their being. In terms of the 17th-century concepts of which a brief account is given at the beginning of this essay, it might be said that the task is one of transition from Descartes' programme to Spinoza's programme, viz., from a conception of Nature as something created (natura naturata) to Nature as something creative (natura naturans), something that creates its own elements, interacting with itself, and being a causa sui.

* A. Einstein, Collected Works, Vol. IV, p. 281 (in Russian).
** Ernst Straus, Helle Zeit---Dunkle Zeit. In memoriam Albert Einstein, Zurich, 1956, S. 72.

How can the information accumulated over the past decades on particles, their interactions and transmutations be used to take this programme from the realm of abstract philosophy and put it within a framework of concrete physical notions, i.e., within an experimental framework? Prognostications concerned with fundamental research are supposed to answer just that question. These prognostications are present, explicitly or otherwise, in hypotheses which deduce the spectrum of particle masses and charges from some general postulate. We have seen that many modern physicists seek to bring order to the swollen list of elementary particles by treating their variety as the result of the interaction of "more fundamental" particles, possibly of a larger mass. Another trend in physics is represented by the attempt to deduce the spectrum of particle masses and charges, as well as of other variables distinguishing the various classes of particles, from more general postulates.

The non-linear character of an initial interaction which is responsible for the existence of fundamental particles may be viewed as such a postulate, according to Heisenberg. In the late 1950s, an equation was written to describe the interaction of a certain universal field with itself. Solutions to that equation were supposed to identify the mass spectra of various particles. These are excited states of "parent matter" which interacts with itself. The theory treats the existence of particles as the consequence of interaction---a ``pure'', i.e., non-interacting, particle has no meaning here.
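The self-interaction described here can be indicated schematically. The rendering below is a simplified sketch of the shape of Heisenberg's non-linear spinor equation, supplied here as an illustration rather than quoted from the text; the essential point is only that the field psi appears as its own source through the cubic term:

```latex
% Schematic non-linear spinor equation: the cubic self-interaction
% term is what makes a "pure", non-interacting particle meaningless.
\gamma^{\mu}\partial_{\mu}\psi
  \pm l^{2}\,\gamma^{\mu}\gamma_{5}\,\psi\,
  \left(\bar{\psi}\gamma_{\mu}\gamma_{5}\psi\right) = 0
```

Here l is a fundamental length. Because the equation is non-linear, its excited solutions, rather than any pre-given particles, were to supply the mass spectrum.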

The Heisenberg conception has so far yielded no result allowing of a single interpretation: a unified theory of fundamental particles continues to be the unaccomplished ideal of modern science. The non-linear conception, however, appears to be in the mainstream of scientific progress. The classical conception of the world treated the behaviour of particles as being dependent on the existence and location of other particles, yet this dependence was believed to be linear. The initial concept was a predetermined system of charged particles, which was visualised as the source of a field. The field acted upon a particle to determine its behaviour. The emergence of a particular field structure in response to the distribution and motion of particles was one event, one task, whereas the emergence of the kinematic scheme of the Universe and the distribution and motion of particles in response to the structure of the field was another event and another task. Classical physics dealt with these two problems separately. It might be suggested that a unified theory of fundamental particles will make a different approach to the motion and interaction of particles.

Where the existence of a particle is deduced, as is sometimes the case, from its interaction with other particles the result is a self-coordinated system which cannot be characterised by a predetermined distribution of particles each of which has an individual existence independent of the existence of other particles and of their interaction. The principle of relativity indicates that the position of a particle without reference to other particles is a non-entity. It is increasingly felt that the existence of a particle without reference to the existence of other particles interacting with it is impossible. The idea may appear to be paradoxical, nay, a vicious circle: the existence of particles is the result of their interaction and their interaction is the result of their existence. Just as paradoxical is the existence of a particle which is the result of the existence of other particles, none of them existing as an initial or independent factor. But it is precisely this paradoxical non-linear quality that is inherent in Nature which is the cause of its own existence. This creative Nature of Spinoza (natura naturans) had no equivalent in classical physics: it was a question addressed to the future. In our time, this concept has been translated into the physical form of a self-coordinated system of strong interactions which create each of the interacting particles.

This concept of the existence of a particle as the result of its interaction with other particles was mentioned in Part I of this book. Originated by Chew and Frautschi, it applies both to strong interactions and to particles involved in weak interactions. A particle in a strong interaction relationship, e.g., a proton, is seen as the product of dynamic effects, each such product being, in turn, a source of other dynamic effects. Dynamic effects determine both the behaviour and existence of particles.

We shall now try to demonstrate that the appeal to strong interactions in an effort to account for the existence of fundamental particles relates to another trend in thinking---discrete space and time. This age-old idea, which, as we have seen, was current in ancient times, acquired special significance in our time: it came to be seen as a possible, although difficult, escape from what was a very tricky situation. As early as the 1930s and especially in the 1940s it was realised that the application of the relativity theory and quantum mechanics to very small time and space intervals resulted in physical non-entities: it appeared that values of energy and charge calculated in terms of quantum and relativity concepts were infinite. The ideas of infinite energy and infinite charge are in conflict with everything we know of the world. Yet, calculations produced precisely that physical non-entity.

In order to make this situation clear, along with the importance of discrete time and space as a way of escape from it, let us look at just one source of infinite values of energy. An electron can emit photons which are absorbed by the same emitter electron. The shorter the interval between the emission and absorption of a photon, the greater its contribution to the energy of the electron and, accordingly, to its mass. This ``auto-action'' of an electron can produce infinitely great values of energy and mass: if a photon can be emitted and subsequently absorbed within an infinitely short time interval and travel an infinitely short distance in space, its contribution to the energy of the electron may be infinitely great. Some of the mathematical methods used to avoid infinite values are highly imaginative: they produce values which are fairly close to experimental data. These methods are characterised by what Einstein referred to as external confirmation. They are, however, used on an ad hoc basis, i.e., for the specific purpose of arriving at the result looked for: in that sense they lack inner perfection; they do not follow from any general non-contradictory physical theory, and are used "on credit", in the hope that some such theory will eventually be evolved.

A theory of this kind could be based on discrete space and time. In that case, the emission of a photon and its subsequent absorption could not be accomplished within an infinitely short time interval, nor could its path of travel be shorter than a certain minimum value, i.e., a minimum time interval multiplied by light velocity. The calculation of the contribution of a photon, emitted and then absorbed by an electron, to its energy and mass could then be readily confined within certain limits, and the methods of eliminating infinite values would acquire physical meaning and ``inner perfection''.
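The divergence and its cutoff can be caricatured numerically. The sketch below is an assumption-laden illustration, not quantum electrodynamics: it uses only the energy-time uncertainty estimate that an interval dt admits a borrowed energy of order hbar/dt, and a minimum interval chosen arbitrarily for the example.

```python
# Toy illustration of the electron's "auto-action": the shorter the
# emission-absorption interval dt, the larger the borrowed energy
# (of order hbar/dt); as dt -> 0 it grows without bound, while a
# minimum admissible interval caps the contribution.
HBAR = 1.0546e-34  # Planck constant over 2*pi, J*s

def borrowed_energy(dt_seconds: float) -> float:
    """Energy of order hbar/dt available during an interval dt."""
    return HBAR / dt_seconds

for dt in (1e-18, 1e-21, 1e-24):
    print(dt, borrowed_energy(dt))  # energy grows as the interval shrinks

# With a smallest admissible interval (the 'minimum cell'), the
# contribution is bounded by hbar / dt_min instead of diverging.
dt_min = 1e-24  # illustrative choice of minimum time interval
print(borrowed_energy(dt_min))
```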

We should like to remind the reader of the historical prototype of discrete space and time---the Epicurean views as set forth in the 2nd century A.D. by Alexander of Aphrodisias. According to these, a particle, instead of travelling from one spatial cell to another, ceases to exist in one cell to come into being in the next. The connection between the concept of discreteness and the idea of particles coming into existence and being annihilated persisted throughout the period of the evolution of these concepts. Indeed, motion inside a minimum cell would mean that during the first half of the time interval a particle must be in the first half of the spatial cell, and during the second half of the time interval it must be in the second half of the cell. In other words, the minimum time interval and the minimum spatial cell would be divisible into halves, a notion which is contrary to the definition of minimum, i.e., indivisible, intervals and cells. However, the purely logical insight of natural philosophy into the connection between discrete space and time, on the one hand, and the annihilation and resurgence of a particle on the other, could have developed into a physical concept only after the concepts of interaction and ultra-relativistic transmutational effects were incorporated in science.

In the early 1930s, the concept of discrete space and time assumed a new shape under the impact of quantum mechanics. From the idea of quantum uncertainty sprang several concepts which ruled out the possibility of a particle being specifically localised within very small time and space units. Later, several of these concepts were, in a measure, made complete by the Heisenberg S-matrix, which has played an important part in the theory of fundamental particles. The S-matrix is an operator which makes it possible to describe the state of a system of particles after their dispersion provided its state prior to dispersion is known. However, the terms prior and after should be taken to mean "long before" and "long after" compared with the duration of the act of dispersion proper, i.e., the time of maximum proximity of particles and of the change of their motions. As to the time interval and spatial region in which the dispersion occurs, according to Heisenberg, no definite spatial or temporal localisation can be ascribed to a particle in that interval or region. Heisenberg introduces a minimum spatial length and a minimum time interval to define a minimum four-dimensional cell, any time or spatial localisation within such a cell being a non-entity. Accordingly, any position or time, any time and space coordinates which add more precision to a minimum cell, are, in physical terms, non-entities. One of the proofs of that is the fact that infinite energy and charge values result from any attempt to follow time and space events in areas of the order of minimum cells. It is these considerations that necessitate the appeal to the notions of "long before" and "long after" the act of dispersion which occurs in a very small time and space region.
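What the S-matrix formalism asserts can be shown in miniature. The two-channel matrix and mixing angle below are illustrative assumptions, not Heisenberg's actual operator: a unitary matrix maps the amplitudes of a system "long before" dispersion to those "long after", saying nothing about localisation during the act itself.

```python
# Minimal S-matrix sketch: a unitary operator taking in-state
# amplitudes to out-state amplitudes, with probability conserved.
import numpy as np

theta = 0.3  # illustrative mixing angle for a two-channel scattering
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # a unitary 2x2 S-matrix

state_in = np.array([1.0, 0.0])  # entirely in channel 1 long before
state_out = S @ state_in         # amplitudes long after dispersion

print(np.allclose(S.conj().T @ S, np.eye(2)))           # unitarity: True
print(np.isclose(np.sum(np.abs(state_out) ** 2), 1.0))  # norm kept: True
```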

As the underlying basis of discrete space, quantum uncertainty was a concept typical of the 1930s and a later period.

In the late 1940s Snyder* related discrete space to a kind of uncertainty correlation. The Heisenberg relation of uncertainty provides a connection between each coordinate and the corresponding component of the impulse. Assuming that the coordinate x is defined with an increasing amount of accuracy, it will be seen that the component P_x of the impulse tends to become progressively less definite. Snyder relates the coordinates x, y and z in the same way: assuming that one of them is measured with an increasing amount of accuracy, tending to become a continuous factor, the other coordinates will accordingly lose in definiteness. Thus, volume cannot become constricted into a point, space as a whole is found to be discrete, i.e., consisting of indivisible volumes, and the position of a particle will always be uncertain.
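Snyder's correlation can be mimicked with a toy model. The sketch below is purely illustrative, not Snyder's actual algebra: Pauli matrices stand in for non-commuting "coordinate" operators, and a state sharp in one of them is necessarily spread over the values of the other.

```python
# Toy model of non-commuting coordinates: if the operators for x and y
# do not commute, no state assigns sharp values to both at once.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)     # stands in for x
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)  # stands in for y

commutator = X @ Y - Y @ X
print(np.allclose(commutator, 0))  # False: the 'coordinates' do not commute

# An eigenstate of X (sharp 'x') is spread evenly over both
# eigenstates of Y (the 'y' values):
x_sharp = np.array([1, 1]) / np.sqrt(2)  # eigenvector of X
_, y_basis = np.linalg.eigh(Y)
print(np.abs(y_basis.conj().T @ x_sharp) ** 2)  # probabilities over Y values
```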

In the late 1950s, Coish developed a different conception of discrete space. According to Coish, discreteness applied not exclusively to a three-dimensional distance, i.e., the geometrical sum of coordinate lengths, or to a three-dimensional volume, as was the case with Snyder, but to any coordinate, to any distance even if measured along one of the coordinates only. In that case, relativistic causality would be meaningless in an ultramicroscopic world. Relativistic causality requires that a signal, i.e., every process connecting two events into a cause and effect relation, travel at a speed below, or equal to, the velocity of light. However, where distance cannot be defined as a function of the coordinates of two points, the concept of speed, i.e., a finite ratio between an increment in space and an increment in time, becomes meaningless, too.

This point requires some clarification.

The distance between any two points is determined by squaring the differences between the comparable coordinates of these points, adding them, and taking the square root of the sum. This procedure of finding a measure is typical of Euclidean space. Where the space in question becomes non-Euclidean, i.e., curved, the above is replaced by a different formula. For instance, on a spherical surface, i.e., in a two-dimensional space, the distance will not equal the square root of the sum of the squares of differences between the coordinates, and a different formula will apply accordingly.
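The Euclidean measure just described, in code:

```python
# Euclidean distance: the square root of the sum of squared
# coordinate differences, meaningful only in a continuous space.
import math

def euclidean_distance(p, q):
    """Distance between points p and q in continuous Euclidean space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(euclidean_distance((0, 0, 0), (3, 4, 0)))  # 5.0
```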

Any measurement, however, is meaningful only with continuous space. Where space consists of points, the distances between which cannot be defined by any number because these distances will not be divisible, the notion of coordinates, i.e., distances from a point to the coordinate axes, or of measurement generally, will not be applicable.

* H. Snyder, The Physical Review, Vol. 71, No. 1, 1947, p. 38.

What, in that case, is the meaning of the phrases: "A minimum length of the order of 10^-13 cm" or "A minimum time interval of the order of 10^-24 sec."? What is the meaning of a minimum distance in a space with reference to which the term ``distance'' is meaningless? On the other hand, it would be very difficult to conceive of discrete space unless reference is made to a minimum space unit.
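The two figures quoted are related by the velocity of light; a one-line check:

```python
# A minimum time interval multiplied by the velocity of light gives
# the corresponding minimum length.
C_CM_PER_SEC = 3e10    # velocity of light, cm/s
MIN_TIME_SEC = 1e-24   # minimum time interval quoted in the text

min_length_cm = C_CM_PER_SEC * MIN_TIME_SEC
print(min_length_cm)   # 3e-14 cm, i.e., roughly of the order of 10^-13 cm
```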

This is a case where two concepts which are in mutual conflict and rule out one another, yet cannot meaningfully exist without one another, comprise a complementary pair. The concept of discrete space is meaningful provided it can be converted into continuous space; this concept is physically meaningful only because it is complemented by the concept of continuous space.

Anticipating the following pages, it may be said that the advance from discrete to continuous space will be one of the keys to the science of the next few decades. It might be suggested that ultramicroscopic discrete space is the scene of events from which relativistic causality will be evolved. It will be remembered that a world line remains a geometrical, rather than a physical, concept as long as it is not filled with events which are not reducible to being in each of the world points and to transition to the next world point. Ultramicroscopic (and ultrarelativistic) events may well provide the filler for the framework of world lines which will make this framework into an actual physical world.

This ``may'' and others like it are an essential, although indefinite, component of our prognostication. On the other hand, they are typical of the style of physical thinking in the theory of fundamental particles, i.e., in the most fundamental research---a modern quest for rerum natura. These quests stem partly from reminiscences of similar quests undertaken from ancient times to these days. These reminiscences are essential if we are to understand what is happening in modern science and what its trend is in the foreseeable future. The answers to the old questions, however, will derive from new facts which came to light quite recently. This is especially true of experimental discoveries and theoretical generalisations in the field of strong interactions and of transmutations of fundamental particles.

In our time, a scientific prognostication is comparable to a tangent to a curve described to establish the latter's direction in a particular point. Such a prognostication does not claim to be a prophecy: in the next moment the curve will have changed its direction so that it will not coincide with any of the tangents drawn up at the present moment. Still, no statement of modern trends in science would be possible without prognosis, no discussion can proceed without reference to tangents, each of which claims to lend itself to only one particular interpretation.

We shall now try to draw such a tangent taking, as the point of departure, the idea of particle regeneration conceived in 1949 by Y. I. Frenkel.* He suggested that a particle was transmuted into a particle of a different class, the latter being transmuted anew into a particle of the initial class. Y. I. Frenkel called this double transmutation "particle regeneration". In the 1950s and later, this hypothesis was related to discrete space and time. Let us assume that a particle is regenerated in the next space and time cell, i.e., after a minimum time interval (of the order of 10^-24 sec.) and at a distance equal to the fundamental length, i.e., the distance travelled by light in 10^-24 sec., which is about 10^-13 cm. By identifying the regenerated particle with the initial one we arrive at a particle identical with itself which has travelled a distance of the order of 10^-13 cm at the velocity of light. What we arrive at is a discrete time and space on the light cone, where motion takes the form of discrete shifts, generally variously directed and forming a broken spatial trajectory. The space and time within the light cone, i.e., where particles travel at various velocities below that of light, is continuous. It includes averaged-out microscopic world lines with corresponding continuous spatial trajectories. It will be readily seen that with complete spatial symmetry of fundamental shifts (or regenerations), after a large number of shifts the particle will return to the vicinity of the initial point, and the macroscopic shift will be equal to zero. On the other hand, where the space is characterised by a certain amount of dissymmetry in the probabilities of fundamental shifts, the macroscopic trajectory of a particle and its macroscopic velocity will have finite values.

* Y. I. Frenkel, Doklady AN SSSR, No. 64, 1949, p. 1307; Uspekhi fizicheskikh nauk, No. 42, 1950, Series 1, p. 69.
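The picture of symmetric versus dissymmetric shifts is precisely a biased random walk. The sketch below is an illustrative model with step counts, seed, and probabilities chosen arbitrarily: symmetric shift probabilities give a near-zero mean displacement, while a dissymmetry yields a finite drift.

```python
# Random-walk caricature of 'particle regeneration': each step is a
# discrete shift of one fundamental length, forward with probability
# p_forward, backward otherwise.  Symmetric shifts average to zero
# macroscopic displacement; biased shifts give a finite drift.
import random

random.seed(0)

def drift(p_forward: float, steps: int = 100_000) -> float:
    """Mean displacement per step, in units of the fundamental length."""
    position = 0
    for _ in range(steps):
        position += 1 if random.random() < p_forward else -1
    return position / steps

print(drift(0.5))   # symmetric shifts: near-zero macroscopic velocity
print(drift(0.75))  # dissymmetry: mean drift near 2*0.75 - 1 = 0.5
```

In the model's terms, the drift per step (always below 1) plays the role of a macroscopic velocity below that of light.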

The trend exemplified by this arbitrary scheme of transition from the ultrarelativistic world of transmutations to the relativistic world of continuous motion may seem to point to some sort of transmutational concept of the world, to making transmutation the point of departure for the conception of the world. That would be a mistaken notion, however. It was mentioned earlier that the concept of transmutation would be meaningless without reference to the macroscopic concept of a world line, whose shape and length can be used to identify the class of a particular particle. The new conception of the world for which the science of the next few decades is headed may prove much more paradoxical than any conception deriving from "bricks of the Universe" of any kind, whether they are motions of bodies, changes in field structure, or some other, more complex, events. This new conception will derive from the principle of physical being which requires that the ultramicroscopic and macroscopic events be complementary.

The reader will remember the self-coordinated system of particles whose interaction is a guarantee of the existence of each particular particle. It is probable that science will, in the course of its future progress, confirm this scheme insofar as strong interactions are concerned. On the other hand, a physical theory may emerge which will relate the material properties of a particle with the effect exerted thereon by all the particles in the Metagalaxy. The following arbitrary scheme will illustrate this possible development. A dissymmetry of the probabilities of fundamental shifts results in a non-zero macroscopic velocity of a particle. This dissymmetry could be identified with the impulse of a particle, making it responsible for the local field related to the non-homogeneous distribution of matter in the space around us. However, what is the field which is responsible for the symmetry, for the statistical spread of fundamental shifts which keeps the macroscopic velocity of a particle below that of light? It is this statistical spread that characterises moving particles having a rest mass. It would be logical to identify rest mass with the symmetry of probable fundamental shifts and to hold a homogeneous Metagalaxy responsible for it. It follows from the homogeneity of the Metagalaxy, in which even such non-homogeneous quantities as individual galaxies and groups of galaxies are negligible, that a particle is balanced in every direction by the homogeneous Universe, which is precisely the explanation for the symmetry of probable fundamental shifts.

To us, this discussion of hypothetical constructions is not a deviation from the subject of prognostication, not an attempt to substitute an outline of the structure of the Universe for a discussion of the current trend in science. The above constructions, as has been repeatedly indicated earlier, are arbitrary illustrations of the actual trend toward unified concepts of the cosmos and microcosm. The long period when microscopic "bricks of the Universe" were held to be the ultimate link in the analysis of rerum natura is over: today, both the behaviour and the existence of fundamental particles are related to the self-coordinated cosmic system embracing the entire Metagalaxy.

Hence some of the special characteristics of the style and pace of the modern advancement of science. Science itself is becoming a self-coordinated system in which concepts in any one field acquire meaning only subject to corresponding concepts existing in other fields. The time is not long gone when the dynamics of particles of a particular class could be dealt with without raising any problems, contradictions, or difficult points that could only be resolved by reference to other concepts---the dynamics of other particles. Where we deal with a unified theory of particles, with high energies or transmutations which either restrict or affect the dynamics of particles identical to themselves and belonging to the same class, it is no longer possible to maintain any partitions between investigations into the various classes of particles; the mutual relations of investigations into interactions change accordingly.

There was a time when interactions were treated under diverse headings: gravitation used to come into play in space studies, electromagnetic interactions were used to account for all sorts of phenomena in the wide range from geophysics to nuclear physics, strong interactions were involved in nuclear studies. All that has changed today: gravitational collapse accounts not only for the destinies of stars, but also for microscopic events, e.g., the transmutation of Markov's maximons into fundamental particles. The result is a fusion of particular discoveries and the general conception of the Universe hitherto unprecedented. A particular discovery proves, on occasion, to be so paradoxical as to require a review of the overall conception. This sort of thing happened before, yet the relationship was one of "weak interaction": for instance, a whole quarter century passed between the experiments that showed the fallacy of the ether wind concept and the emergence of the relativity theory. The progress in fundamental concepts is gradually losing its discrete quality: the intervals between generalisations sometimes approach those between the appearance of two consecutive issues of major physical journals. The advance of fundamental knowledge is becoming virtually continuous. Clearly, these frequent fundamental generalisations are far from having the same value; they are not unique or substantiated by an experimentum crucis, nor do they often pinpoint the nature of such a critical experiment. However, the advent of new experimental techniques will render this natural and philosophical style precise enough without precluding new fundamental generalisations.
Of course, the phrase "natural and philosophical style" may be unjustified: the fundamental concepts which come into being today are actually a way to bring precision to questions to be addressed to Nature by experimental techniques that will be the subject of the next two essays.

The unbroken stream of revolutionary generalisations is working a change in the dynamism of civilisation. In terms of the Einsteinian inner perfection of physical theories, the fundamental principles of science provide the teleological ideal for such theories: if the latter are not to be artificial or conceived on an ad hoc basis, they must follow naturally from fundamental principles.

V. Weisskopf distinguishes two trends in 20th century scientific development: ``intensive'' and ``extensive''. The former is represented by the search for fundamental principles. The main landmarks in the history of intensive studies have been electrodynamics, relativity, the quantum theory of the atom, nuclear physics and, finally, subnuclear physics. With time, each intensive trend produces numerous offshoots in the form of extensive trends, a term Weisskopf uses to designate the explanation of particular events in terms of an established fundamental principle. He goes on to say that even the most obviously extensive study involves some intensive elements.

In our day as never before, science is going through a period of upsurge of intensive studies leading to a unified conception of the world, an upsurge unprecedented in its scale and involving scientists representing every sector of the world research community.

Modern science has succeeded in establishing transitions between what formerly appeared to be unrelated fields, and the application of mathematics has led to an unmatched uniformity of methods. On the other hand, modern science is faced in the most fundamental area, the theory of fundamental particles, with a swelling number of facts and fields whose connection at this time appears highly problematic. Yet, as never before, science is seeking unity, for modern science is a Martha with many problems who wants to become a Mary with a single desire. Probably, however, the ``many'' and the ``single'' of today are related in an altogether different manner than before.

It may be suggested that the prognostication for the year 2000 must take account of the new relation between intensive and extensive studies, of the "strong interaction" between the two which was referred to above. When a new generation of particle accelerators is developed and made operative, when the opportunities offered by extra-atmospheric and extra-terrestrial astrophysical and astronomical studies have been put to work, we are going to have a situation in which a novel fundamental concept will take a very short time to produce extensive studies. The latter will, in turn, lead to new fundamental problems, and the new experimental techniques will telescope the time before an experimentum crucis confirms a particular solution to such problems.

Extensive studies are those leading to the discovery of new ideal cycles. Such newly discovered cycles serve the end purposes of technological progress, while their virtually unbroken evolution results in a virtually unbroken acceleration of progress. A change in the fundamental principles, i.e., the teleological ideals of extensive studies, produces an increasing rate of acceleration of technological progress.

It would be wrong to think that the increase in the rate of acceleration will be constant. The stages in the development of fundamental principles---an intensive trend---will not follow at intervals typical of the first half of the 20th century (relativity theory, quantum mechanics, nuclear theory, subnuclear problems); the intervals will probably be shorter, yet a certain cyclic quality will persist. Upheavals on the scale of the relativity theory will not be an annual occurrence. Such upheavals result in a continuous increase in the rate of acceleration of progress because every fundamental upheaval has the effect of speeding up extensive studies over a certain period of time.

For this reason, it would also be wrong to think that the "strong interaction" of fundamental and extensive studies implies that they merge or become indistinguishable. Investigations which produce changes in the basic principles, in the ideals and the style of science---investigations whose effect is largely untraceable in any particular application and generally remote---will, to a degree, be a class apart. In this connection, we shall make a remark on the type of scientist at a time when the self-coordinated system of intensive and extensive studies is aimed at the cognition of the self-coordinated system of the cosmos and microcosm. The scientific world sometimes mourns the passing of the "ivory tower" scientist who used to operate in the abstract realms of thought, too high above to be bothered by the babel of extensive science. We would suggest that this type of scientist is not doomed to disappearance: nay, he will become somewhat more common than today.

Clearly, the "ivory tower" man of the late 20th century will be different from his predecessor of the first half of the century: both are "ivory tower" scientists in a very special sense. What I have in mind is the possibility of making an important contribution to science in its forward advance under conditions of "weak interaction" with a large, snowballing mass of particular problems and findings. This is not in conflict with the explorer's personal interest in a particular set of narrow problems, nor with the basic need for experimental verification of a theory ("external justification"). Einstein was interested in dozens of particular scientific problems, e.g., the cause of the erosion of the right banks of south-bound rivers, and in dozens of technical inventions---and not only as an employee of the Berne Patent Office. And yet, he was an "ivory tower" thinker in the sense that the point of departure for his relativity theory was but a very limited set of experiments. Incidentally, Einstein used to say that the electron---and just that---would be enough to evolve the laws of the microcosm. It should be noted, by way of parenthesis, that this was true in 1924-1927, a period when wave and quantum mechanics were emerging on the scene.

Today, the situation has changed. The present task is to bring together the sum total of what is known about the various classes of particles---a problem which cannot be dealt with by reference to the laws of the behaviour, birth and decay of particles of a single class. Yet every paradoxical property inherent in one or several classes of particles leads to fresh thinking on the nature of space and time, their symmetry, their discrete quality or continuity, on logical and mathematical concepts, on the distinction between physical being and geometrical concepts. The same is true of the effect of astronomical and astrophysical discoveries. It might be suggested that "weak interaction" with extensive science will, apparently, continue, with a consequent lease on life for the "ivory tower" scientist.


It was mentioned earlier that "ivory tower" men may become more common. The cycles of extensive studies produced by the great upheaval in our knowledge of fundamental principles will work a sympathetic change in the dominant interests in science. An apt illustration is provided by the cycle produced by Bohr's atomic model: its advent triggered off a series of extensive studies giving accounts of atomic spectra, valencies, periodicity and its disturbances in the periodic table of elements, and of a multitude of other laws. The cycle is unfolding to this day, with fresh superimposition of later cycles which led to the creation of nuclear physics. Generally, the mutual superimposition of extensive cycles contributes to the virtual continuity of the impact of science on the advance of civilisation. However, in the early 1920s, along with the continued application of Bohr's model, there arose and came to be progressively felt a need for new principles, which were ultimately discovered in 1924-1926 by thinkers who were sometimes linked by "weak interaction" with the applications of Bohr's model, in which sense they could be described as "ivory tower" scientists.

There is a certain psychological problem related to the problem of weak and strong interactions of fundamental principles and particular investigations and to the "ivory tower scientist" problem, which can best be illustrated by the case of Einstein.

Anyone who has studied the relativity theory, who has fought his way through the thickets of mathematical and physical constructions, of real and imaginary experiments, of endless empirical evidence, applications and particular problems, will have been awed by the unforgettable encounter with that basic, yet most involved, puzzle of a point of matter moving in the surrounding empty space. This is not a case of diverse fundamental particles, of a multi-tiered hierarchy of atoms, molecules, macroscopic bodies, planets, stars and galaxies, of varied fields---gravitational, electromagnetic, nuclear, etc.: there is nothing in that picture but space and a particle that has no other predicates except that it moves in that space. What is the meaning of that motion? What is the meaning of that concept in the absence of other bodies? What is the meaning of the being of that particle? These questions---for they are actually but one question---are more difficult and involved than the many particular questions belonging to specific complex systems held together by complex and varied interactions. This basic question appears to be isolated from all particular questions: accordingly, any thinking on the nature of being and motion will, probably, require detachment from the sea of particular studies. A thinker must remain tête-à-tête with the most general puzzles and contradictions of being: he must be an "ivory tower" man.

This attitude is clearly typical of Einstein's psychology. His remark that a lighthouse attendant is the most suitable occupation for a scientist, his constant tendency toward solitary thinking, his introversion, commented on by Infeld and others, were a measure not only of his personal style but also an aspect of the style of 20th century science in general. Yet, this is an aspect and no more than that, an aspect which could not be there without its counterpart---a profound and active penetration into the multi-varied and, at first glance, ``pointillistic'' picture of particular studies. This ambivalence of, and contradiction in, the nature of scientific thinking stems from the undoubted difference between, and the undoubted connection of, the "inner perfection" and the "external justification" of scientific theory. If we are to rise to the level of the most general principles from which new paradoxical results naturally stem, we must take an all-embracing view of Nature in its totality, which opens up the substratum of the world independent of particular phenomena. Under this heading comes Einstein's, and later Minkowski's, insistence that three-dimensional space has no physical equivalent of its own, and that only four-dimensional space and time are physically meaningful. Yet, as scientific thinking approaches the more general problems, disengaging itself, as it were, from particular and specific problems, it cannot become fully divorced from the latter: the general concepts are subject to modification by "external justification", by empirical verification, by reference to paradoxical facts. The thinking on space and time led to a new fusion of classical concepts which would have been impossible without new optical and electrodynamic observations; paradoxical facts were naturally accounted for within the framework of a paradoxical theory, and the entire genesis and development of non-classical science confirmed the thesis that rational constructions need varied empirical verification by the senses.

Hence Einstein's contradictory position: he sought solitude, yet responded to a multitude of varied experimental and theoretical findings, even to events that bore no direct relationship to science, to become, eventually, the physicist destined to stand closest to an unprecedented number of men. Nor is that a personal characteristic, at least not an exclusively personal one, for modern science evokes in its representatives, and in men and women of all walks of life, two related psychological attitudes. A modern scientist seeks to rise above the multiplicity of facts to remain tête-à-tête with the most general problems of being, yet at the same time, as never before, he keeps a watchful eye on everything that is going on in science, both in his own and in other fields, which are potential sources of new facts and logical constructions. It may be suggested that these psychological characteristics of the scientist of our age will grow increasingly interrelated as science approaches its modern ideal---a unified conception of rerum natura embracing the whole of the empirical basis of non-classical science, astounding in its vastness and complexity.

The fundamental principles of the relativity theory, of quantum mechanics and of relativistic quantum mechanics have brought in their wake a vast number of extensive studies and discoveries which combined with these principles to produce atomic and nuclear physics. At the same time, an increasing number of aporiae, contradictions and difficulties induce science to look for fresh principles. This development will probably gain in intensity in the next several decades. Thinkers who spend their time and effort in solving the problems of quarks, of the discrete nature of space, of the difference between the material properties of a particle and its world line, of a finite versus infinite Universe, are "ivory tower" men in a sense that has nothing to do with intellectual seclusion. For these problems come under the heading of questions that will be addressed to Nature, that may, in part, prove to be meaningless---yet, in one way or another, they will be answered by large-scale collective experimentation, theoretical studies and mathematics.

HIGH-ENERGY PHYSICS

Much attention was given in the early essays of this book to an overall prognostication covering a set of related shifts in power generation, manufacturing, process control, and in the quality of work. These shifts, covered by the rather arbitrary term "atomic age", are immediately related to what Weisskopf called extensive studies, which have led to the development of atomic and nuclear physics, and to the induced intensive advancement of science towards relativistic and quantum principles. However, like every period of civilisation, the atomic age must include such trends in scientific thinking as will lay the groundwork for the next, and more dynamic, period.

In the atomic age, this groundwork involves an intensive scientific trend---a search for new fundamental concepts which are destined to produce subnuclear physics.

Subnuclear physics can be defined as the physics of particles which, whether or not they enter into atomic nuclei, occupy the same rungs of the hierarchic ladder of discrete particles of matter as nucleons. These are particles which, possibly unlike the larger discrete bodies, e.g. molecules, atoms and atomic nuclei, are not divisible into lower-echelon hierarchical links which comprise them. If that is so, their death and rebirth are not reducible to a spatial rearrangement of sub-particles identical to themselves: we do not yet know what they are reducible to. Neither do we know the areas of localisation of the contacts, dispersions and transmutations of particles, nor whether any specific meaning can be attached to a precise localisation of these events. The only fact that can be stated with certainty is that these events occur within very limited time and space units, of the order of the linear dimensions of an atomic nucleus, and that they last as long as it takes light to travel such distances. That may well be the time and space threshold at which we shall find ourselves in a world where the course of ultramicroscopic events is governed by principles of greater generality and precision than anything we know of today. It might be that this threshold is still farther on, and the ultramicroscopic world which holds the key to new solutions to the fundamental problems of being is to be found in time and space units that are many orders smaller. In any case, new fundamental principles and their "external confirmation" can be reached through experimental study of very small regions---the scene of what we feel are ultra-relativistic processes.

To achieve this will require particles of very high energy to be placed at the disposal of the experimenter.

The advance of physics from the atomic-molecular theories of the 19th century to atomic physics and, progressively, to nuclear and subnuclear physics represents an advance from energies of the order of hundredths of an electronvolt to energies in the range of electronvolts, then millions and billions of electronvolts. In thermal movement, the atoms of classical physics and chemistry exchange energies of the order of 0.01 eV and behave like spherical solids. The electromagnetic radiation of these atoms, which gives a measure of their structure, possesses energies in the range of from several eV in the optical spectrum to several hundred eV in X-rays. The nuclear structure is revealed in processes requiring millions of electronvolts. Since the early 1930s, there has existed, first in nuclear and later in subnuclear physics, a sort of working liaison between high-energy particle accelerators (whose energies have grown from hundreds of thousands of electronvolts in the 1930s to milliards of eV today) and cosmic ray research instruments. Cosmic rays are fluxes of particles of various classes and of varied energy levels bombarding the Earth from every direction in outer space. Cosmic ray particles sometimes possess enormous energies that are impossible to achieve in accelerators, yet they are harder to manipulate: in most cases, new particles and new processes were initially discovered in cosmic rays and subsequently studied in detail through the use of accelerators. In the 1950s and 1960s, however, accelerators were themselves instrumental in identifying several new particles and processes. The energy levels of cosmic ray particles used in the new discoveries have grown continuously and, since the 1930s, increased in about the same proportion as particle energies in accelerators. The beginning of that period was marked by the discovery in cosmic rays of the positron, whose existence had been predicted by relativistic quantum mechanics. The discovery and study of the positron did not call for very high energy levels since the positron mass, equal to the electron mass, is small: energies on the order of a million eV are enough to supply the rest energy corresponding to that mass; the birth and decay of nucleons and other, heavier particles, however, involve energy levels of the order of milliards of eV. The energy levels involved in present-day intensive cosmic ray studies are on the order of trillions of electronvolts, with particles being artificially accelerated to 76 milliard eV: the most powerful proton accelerator was recently completed in Serpukhov. Other major accelerators have been built in Brookhaven (33 milliard eV), Geneva (28 milliard eV) and Dubna (10 milliard eV). A brief historical note is required to give the reader an idea of their design and construction.
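Before the historical note, the ladder of energy scales just described can be checked with a few order-of-magnitude figures (a modern aside; the wavelengths chosen for the optical and X-ray photons are merely representative values, not taken from the text):

```python
K_BOLTZMANN_EV = 8.617e-5          # Boltzmann constant, eV per kelvin

# Thermal agitation of atoms at room temperature (~300 K):
thermal_ev = K_BOLTZMANN_EV * 300  # ~0.026 eV: "hundredths of an electronvolt"

# Photon energy from wavelength: E[eV] = 1240 / wavelength[nm]
optical_ev = 1240 / 500            # green light, ~2.5 eV: the optical spectrum
xray_ev = 1240 / 4                 # soft X-ray, ~310 eV: "several hundred eV"

# Nuclear structure: millions of eV (binding energy ~8 MeV per nucleon);
# nucleon birth and decay: milliards (1e9) of eV.
nuclear_ev = 8e6
print(thermal_ev, optical_ev, xray_ev, nuclear_ev)
```

Each rung of the ladder is roughly a hundredfold to a thousandfold above the last, which is why each new level of structure demanded a new generation of instruments.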

Linear accelerators built in the early 1930s involved particles which acquired increasingly higher energies as they moved along rectilinear paths in an electric field. Cyclical accelerators, cyclotrons, came on the scene almost at the same time. In a cyclotron, a charged particle moves along a circumferential path in a magnetic field directed at right angles to the plane of its circulation. The particle periodically finds itself in sections occupied by an electric field, wherein its velocity, and consequently its energy, are boosted by the latter. The particle moves, as it were, along a spiral or, more precisely, along circumferential paths of increasingly greater radius: as a result, despite its growing velocity, it passes the accelerating sections at equal intervals of time. This procedure was used to impart to particles energies on the order of ten million eV.
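The constancy of the revolution period that makes the cyclotron work can be verified in a few lines (a modern sketch; the 1.5 tesla field strength is an assumed, purely illustrative value):

```python
import math

Q_PROTON = 1.602e-19   # elementary charge, coulombs
M_PROTON = 1.673e-27   # proton mass, kilograms
B_FIELD = 1.5          # magnetic field, teslas (illustrative assumption)

def revolution_period(speed):
    """Period of one revolution in the magnetic field. The radius
    r = m*v / (q*B) grows with speed, but T = 2*pi*r / v = 2*pi*m / (q*B)
    does not depend on v at all (non-relativistically), which is what
    lets a fixed-frequency accelerating field stay in step with the
    circulating particle."""
    radius = M_PROTON * speed / (Q_PROTON * B_FIELD)
    return 2 * math.pi * radius / speed

slow = revolution_period(1e6)   # 1,000 km/s
fast = revolution_period(1e7)   # 10,000 km/s -- same period, wider circle
print(slow, fast)
```

The two periods coincide, which is the equal-interval passage through the accelerating sections described above; the next paragraph explains why this scheme breaks down at relativistic energies.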


At high-energy levels, however, the correlations of the relativity theory---the dependence of mass on energy---come into play. As the mass grows, the synchronous passage of the particle through the accelerating intervals is disturbed, and the particle falls out of step with the maximum of the high-frequency electric field in those intervals. V. I. Vexler, in 1944, and McMillan, in 1945, suggested that the relativistic increase in particle mass be offset by a sympathetic increase in the magnetic field or by reducing the electric field frequency in the accelerating sections. The accelerators of the 1940s achieved very high relativistic particle mass values. Thus, in betatrons, electron accelerators first developed in 1940, the electron velocity at an energy level of 2 million eV was 98 per cent of light velocity and the mass far exceeded the rest mass. Even such heavy particles as protons achieved significant mass increases at the higher energy levels used in the 1940s and later. The ability to offset the relativistic effect led to the development of very powerful accelerators which capitalised on it. These included synchrotrons and other cyclic accelerators which imparted to protons energies of milliards of electronvolts. Accelerators on the one hand and cosmic ray observations on the other were instrumental in the discovery of many new particles and of the processes involved in their decay and birth.
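The betatron figure quoted above is easy to check from the relativistic energy relations (a modern verification; the electron rest energy of 0.511 MeV is the standard value, not taken from the text):

```python
import math

REST_ENERGY_ELECTRON_MEV = 0.511   # electron rest energy, MeV

def beta_gamma(kinetic_mev, rest_mev):
    """Relativistic speed (as a fraction of light velocity) and the
    factor gamma by which the moving mass exceeds the rest mass,
    for a particle of the given kinetic energy."""
    gamma = 1.0 + kinetic_mev / rest_mev       # total energy / rest energy
    beta = math.sqrt(1.0 - 1.0 / gamma**2)     # v / c
    return beta, gamma

beta, gamma = beta_gamma(2.0, REST_ENERGY_ELECTRON_MEV)
print(f"v = {100 * beta:.1f}% of c, mass = {gamma:.1f} x rest mass")
```

A 2 MeV electron indeed moves at about 98 per cent of light velocity, with a mass nearly five times the rest mass, which is why the fixed-frequency cyclotron scheme had to be abandoned.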

Any hopes for a theory of fundamental particles which would account for this variety in terms of a single and apparently entirely new principle are hinged on accelerators imparting to particles still greater energies. What we have in mind are fundamental questions which, it is to be hoped, will be answered with the aid of experiments involving particles possessing energies on the order of 200 to 1,000 milliard eV. The prognostication for the next several decades consists precisely in an enumeration of these questions and a hypothetical estimate of the effect of their solution. This prognostication lacks the central link---a prevision of the solution itself, of the answers of Nature to the questions addressed to it. Any surmises as to the effect of these solutions are just as uncertain, probably even more so. Bruno Pontecorvo has aptly remarked that the fundamental nature of the physics of elementary particles results in the unexpectedness of discoveries in that field. "Accordingly, any question as to the practical economic application of a particular high-energy accelerator is almost illegitimate,"* he says. Indeed, with extensive studies, discoveries may, in part, be predicted: they follow known principles and ideals of science, and their objective is to find an explanation for phenomena in the light of such known principles. What happens, however, where the aim is precisely the establishment of a novel principle?

It may appear from the above that any search for evidence in support of the plan to build new accelerators of greater power is, in principle, an impossible task. Evidence here is not to be understood as a mere reference to the generality and depth of the questions addressed to Nature by high-energy physics: this generality and depth, for all the uncertainty of potential answers, must be shown to characterise the place occupied by high-energy physics among the motive forces of civilisation. It may be suggested that this place can be identified by a generalisation of the indices of civilisation, the conversion of such indices into dynamic factors, and by inclusion in them of temporal derivatives of various orders.

It is precisely from this viewpoint that the reader is invited to look at the questions to be answered by high-energy physics, either confirming or rejecting their very meaning.

* Uspekhi fizicheskikh nauk, No. 86, 1965, Series 4, p. 729.

The first question concerns the boundaries of the temporal and spatial, generally relativistic, account of Nature. This question of the boundaries of relativistic causality probably coincides with that of discrete space and time. This discrete quality makes physical sense in terms of the "principle of being", i.e. it is not reducible to the geometrical problem of a discrete abstract space, if some events which are irreducible to the motion of particles actually occur in some minimal cells. Accordingly, the first question naturally merges with the second: what are the ultra-relativistic events which provide the boundary of relativistic causality in very small regions, and how are they related to relativistic processes in large regions? The query boils down to this: What makes some particles behave in one way and others in a different way? Why do some particles---proton, neutron and electron---form the matter around us while others, including the antiparticles, are almost never to be found on the Earth under natural conditions? Why do particles possess their particular, rather than some other, masses and charges? In short, why is the microcosm (and space, too) made the way it is rather than as something else? These questions bear resemblance to Kepler's question: Why do the planets revolve around the Sun at their particular rather than at some other distances? They bear even greater resemblance to this question: Why do the elements have their particular rather than some other valencies and atomic weights? This last question has been answered by quantum mechanics, which explained why the electrons are disposed in a specific number in each of the orbits, why the periodicity of elements is irregular, etc. The theory of elementary particles has made but its first steps toward constructing a system similar to that developed by Mendeleyev. The first successes have been scored, too: a number of particles were predicted on the basis of the conjectural systematisation of strongly interacting particles and were later proved to exist experimentally.* Yet a system of elementary particles whose definitiveness would be on a par with the physically deciphered Mendeleyev Periodic Table will require an advance beyond a new energy threshold.

We wish to apologise to the reader for this strictly physical discussion in a book on the philosophy of optimism. But that is the way it is: modern optimism and modern philosophy stem---and that is their great advantage---from very specific, yet at the same time very general, trends in science, technology and economics. In the introduction to Capital, Marx wrote that science knows no royal roads and that its pinnacle can only be reached by climbing rocky footpaths. And that includes the pinnacles from which the future of civilisation is discernible.

On the economic side, giant accelerators require sizable investments, comparable to those in industrial complexes, and their construction has a certain effect on the entire capital investment structure. That is a very important turn in the history of civilisation. From Archimedes to the present day, mankind has invested in science less than the value of ten days of its modern economic output. The atomic age is an age in which scientific, particularly physical, research is comparable in cost to the components of the capital investment balance. It would be a mistake to think that investment in physical research will grow indefinitely at the same pace or with the same acceleration: that would create a situation where the area occupied by physical research establishments would eventually exceed that of the Earth's surface, the number of scholars would exceed the Earth's population, and the physical journal referred to by Oppenheimer* would become heavier than the globe. Similar prospects result, as we have seen, from the extrapolation of many indices which have shown a steady rate of growth over the past few years or decades. However, the period of the 1970s to 1990s will still be marked by a very high rate of investment in basic research. Today, this investment has reached the same order of magnitude as investment in some major industries. This is precisely what makes physics an economic science, as it were: scientific prognostications, which are now inseparable from physics, must take the optimal economic investment structure into consideration. In turn, this imparts a certain physical aspect to economic theory and practice: economic prognostications and planning must take account of objective trends in physics.

The need for high-energy physics and accelerators capable of imparting energy levels of 200 to 1,000 milliard eV is not, apparently, open to doubt. The main arguments advanced over the past three or four years* point to investment in high-energy physics as the essential condition

* M. Gell-Mann, A. Rosenfeld, G. Chew, Uspekhi fizicheskikh nauk, No. 83, 1964, Series 4, p. 69; W. Fowler, N. Samios, Uspekhi fizicheskikh nauk, No. 85, 1965, Series 3, p. 523.

** R. Oppenheimer, The Flying Trapeze.


for the progress of civilisation. J. Schwinger* makes a very intriguing historical comparison in this connection. There was a school of thought in the late 19th century, which objected to the macroscopic properties of bodies being deduced from their atomistic structure which was merely hypothetical at the time (Schwinger probably has E. Mach and W. Ostwald in mind). Despite the doubts, scientists continued to expend material resources and intellectual effort in an attempt to discover experimental proof of the existence of atoms. The outcome was a new stage in science and the triumph of atomistics. Today, we are faced with new hypothetical contours of the microcosm: to reject their experimental verification would be tantamount to refusing to allow science to advance to its new stage.

R. Oppenheimer** advances a highly "Einsteinian" argument. According to him, the progress of science, its penetration into progressively smaller time and space regions, is the basis of rationalistic philosophy; without a further penetration into the region of the infinitely small, our effort may, this time, fail to lead to the triumph of the human intellect.

Arguments like these show the need for high-energy physics. Purely physical considerations determined investment trends and the construction of accelerators of particular specifications. In a diagram of magnetic field intensity, of the length of the vacuum chamber circumference, of the increment in particle energy per revolution, etc., physical considerations could determine the optimal combination of parameters, the optimal direction of the vector in the space defined by the above variables. However, what determines the scale of investment and the rate of deployment of the experimental capability of high-energy physics? What are the arguments in support of the pace at which 200 to 1,000 milliard eV accelerators are to be put in operation? Neither general arguments nor physical considerations provide an answer to these questions. Also, it is perfectly clear that a rate of investment in high-energy physics that would slow down the pace of the industrial and cultural advancement essential to a new stage of scientific development would not be rational. In the final analysis, these conditions are taken to include the entire gamut of capital investment, and investment in high-energy physics is regarded as a component in an optimal economic balance.

Things would be very simple if the quantitative economic effect of basic research could be established and the share of investment in high-energy physics in an optimal investment balance found with a view to achieving a maximum effect. That, however, is impossible. When it is said that basic research contributes to a faster pace of acceleration of labour productivity, the actual value of that acceleration remains a symbol that is not yet decipherable in quantitative terms.

Yet the pace of the advancement of high-energy physics depends on the question "What for?" In the sense discussed above, that of the maximum growth of the dynamic indices of civilisation, this question has a bearing on the construction of accelerators: accordingly, an economic effect may be claimed for basic research. The concept of economic effect, as the reader has seen, is subject to transformation and generalisation: it becomes dynamic as it comes to include the factors of the speed and acceleration of the growth of productive forces. Economic effect, however, is not restricted to these quantifiable factors (quantifiable within certain limits and subject to the uncertainty of the economic variables referred to immediately above). It was noted earlier that the increase in the acceleration of the growth of productivity (achievement of higher acceleration factors) is a variable that is not quantifiable---that is, not yet. However, this qualitative inference may be made from the phenomenon of acceleration with whatever degree of certainty is possible in cases of this kind: intensive studies produce cycles of acceleration in extensive studies and in their results, i.e. cycles of accelerated renewal of those ideal physical schemes which serve as teleological ideals for technological progress.

* J. Schwinger, Uspekhi fizicheskikh nauk, No. 86, 1965, Series 4, p. 614.

** R. Oppenheimer, Uspekhi fizicheskikh nauk, No. 86, 1965, Series 4, p. 597.


SPACE

One of the most brilliant expositions of the relativity theory, A. A. Friedman's The World As Space and Time, written in 1923, has this epigraph drawn from the satirical parody The Historic Pronouncements of Fedot Kuzmich Prutkov: "Once when the celestial dome was covered by the starry cloth of night, the French philosopher Descartes sat by the steps of his stairs absorbed in observation of the dark horizon. All of a sudden a passer-by approached him with this question: 'Tell me, oh wise man, how many stars are there in heavens?' 'You fool,' replied the latter, 'no one can know the unknowable!' These words uttered with searing passion had their desired effect on the questioner."

In the lines immediately following that epigraph A. A. Friedman says that "the thinking portion of mankind has at all times produced curious passers-by and wise men, somewhat more courteous than Descartes, who sought to reconstruct the picture of the world from an invariably negligent amount of data".* In the remaining decades of this century, courteous wise men will be in a position to give answers to passers-by curious about the Universe from something more than negligent data. Of course, the question today is not one of merely counting discrete bodies in the Universe. The cosmological problem is no longer, and probably not so much, a problem of interacting discrete bodies as one of variously constituted fields. The most telling distinction, however, consists in that at this stage no adequately rapid advancement would be possible in those areas of basic research which produce the characteristic dynamism of our age without some sort of theoretical construction embracing the entire Metagalaxy.

The classical picture of interacting bodies---stars, planets and comets---was the fruit of the first astronomical revolution, triggered off by the telescope. In 1610 Galileo directed his telescope at the sky, discovered the discrete composition of the Milky Way and some other hitherto unknown facts, and published his findings in Sidereus nuncius: this marked the beginning of a theoretical understanding and particularisation of the Copernican system, a picture of interacting bodies without the "natural motions" of peripatetic cosmology.

The second astronomical revolution, which began in the mid-20th century, is still with us, and there is every reason to believe that it will extend into the end of this century and, possibly, the beginning of the next: it will probably confirm relativistic cosmology and render it concrete---a picture of curved space whose radius is subject to variation in time. Further, this revolution in astronomy will permit us to see that space as the scene of the interplay of interacting fields, of the existence and behaviour of the quanta making up these fields, and of elementary particles.

This second revolution in astronomy has for its most general point of departure the relativistic model of the Metagalaxy and seeks to relate it to gravity and also to various kinds of fields. As to the means of observation, the point of departure for the new revolution in astronomy consists, first, in observation from Earth orbiters and spaceships and, secondly, in the supplementing, or even substitution, of the human eye with astrophysical instruments receiving electromagnetic waves in the optical and other ranges, as well as fluxes of various particles other than those of electromagnetic radiation.

Earth orbiters and spaceships have, to date, permitted observation of the radiation of astronomical objects free from the interference of the Earth's atmospheric envelope. A highly plausible prognostication for the year 2000 promises observation from the lunar surface and from either the surface or the near orbits of the planets Mercury, Venus and Mars. The capability to deliver astronomical and astrophysical instruments to the terrestrial planets is the starting point for the prognostication of the development of astronomy in the year 2000. Nuclear fuel is not yet used in space rocketry, but it is a factor in the prognostication for the year 2000. In that sense, astronautics, which today features an independent power basis (for which reason the word "space" is legitimate in the oft-repeated phrase "atomic and space age"), will subsequently be subject to the resonance effect of nuclear power and depend on its progress.

* A. A. Friedman, The World As Space and Time, Moscow, 1965, p. 5 (in Russian).

There is a second and a much more subtle relation between the outlooks for astronomy and for nuclear physics. The relation I have in mind involves nuclear reactions used to explain the findings of astrophysical observations.

In modern astrophysics, almost every generalisation, fresh concept or major observation is not merely a hypothetical statement about the structure of the Universe, but also, as a result, a hypothetical proposition in respect of a development in astrophysics and astronomy which will either confirm and provide specific proof for the hypothesis involved, or reject it. Accordingly, an enumeration of problems and hypotheses in modern astrophysics and astronomy represents a sort of schematic prognostication of the development of those sciences.

This prognostication follows from actual observations, actually advanced concepts and formulated problems. At the same time, any prognostication in this field inevitably presupposes new observations and revolutionary findings. This inevitability is the most authentic component of our prognostication, even though it is not specifically decipherable. For the fact is that we have just entered the epoch of extra-terrestrial observation and of investigations into non-optical spectra. New observations will inevitably raise new problems and alter the course of development of astrophysics and astronomy.

All this is yet another proof that a scientific prognostication is generally a tangent to the curve of reality, a tangent indicating the direction of the curve, which may change in the next moment. This feature in no way detracts from the significance of prognostications in theory or in practice: in modern science, more than ever before, a hypothesis is an essential condition for forecasting authentic positive knowledge. The present stage in the physics of elementary particles is one of thinking about questions to be addressed to Nature by the use of the new generation of accelerators. The present stage of astrophysics requires thinking about questions to be addressed to Nature by means of telescopes and astrophysical receivers of radiation carried on orbiters, installed on the lunar surface and, later, on the planets of the terrestrial group. These questions of the theory of elementary particles and of astrophysics have many points of coincidence. Both, however, are formulated as physical and astrophysical hypotheses which are also prognostications of scientific development that allow of more than one interpretation. On the practical side, these hypotheses and prognostications contribute to the greater intellectual potential of science with a consequent acceleration of the progress of civilisation---a development which is unquantifiable yet indubitable.

An important factor for the greater intellectual potential of science is provided by an inevitable appeal to the general cosmological hypotheses in dealing with astronomical and astrophysical problems. Of the many cardinal problems in respect of the structure and evolution of the Universe in general, I propose to consider: (1) heterogeneity of the Universe, (2) finity and infinity of the Universe, (3) expansion of the Universe, (4) its state prior to expansion, (5) symmetry or dissymmetry of the Universe in terms of the equal or unequal proportion of particles and anti-particles.

A cursory look at the celestial dome will bring evidence of the non-uniform distribution of mass. Matter contained in the stars has a very different density than that of interstellar space. The stars are clustered in galaxies, where the average density is naturally greater than in intergalactic space. The Sun belongs to the Galaxy, which comprises one hundred milliard stars. This is surrounded by space with no stars in it, beyond which are other galaxies spaced at one to five million light years. As we go to larger units of space we find groups of tens or hundreds of galaxies, but we have not yet discovered any larger structural units. Accordingly, it may be supposed that the Universe, taken on the scale within the capability of the telescope, is homogeneous: as we go to increasingly larger scales we find the same density of matter whatever the point the stars are observed from. For the sphere with a radius of about three milliard light years, which contains hundreds of millions of galaxies, the average density of matter is close to 10^^-30^^ g per cu cm. The matter contained within these limits may be regarded as a sort of homogeneous cosmic substrate regardless of any non-uniform occurrences all the way up to groups of galaxies. The distances between groups of galaxies are very small by comparison with the sphere embracing the part of the Universe that we know. It may be supposed that the Universe is homogeneous beyond that sphere, too, in areas which are not accessible to the telescope.
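These figures can be cross-checked by a back-of-the-envelope computation. The sketch below estimates how many galaxies a density of 10^^-30^^ g per cu cm implies for a sphere three milliard light years in radius; the average galactic mass of 10^^11^^ solar masses is an assumption for illustration, in line with the "one hundred milliard stars" cited above for our own Galaxy.

```python
import math

# Rough order-of-magnitude check of the figures above (CGS units).
LIGHT_YEAR_CM = 9.46e17   # one light year in centimetres
SOLAR_MASS_G = 2.0e33     # mass of the Sun in grammes
DENSITY_G_CM3 = 1e-30     # average density quoted in the text, g per cu cm

radius_cm = 3e9 * LIGHT_YEAR_CM                    # three milliard light years
volume_cm3 = 4.0 / 3.0 * math.pi * radius_cm ** 3  # volume of the sphere
total_mass_g = DENSITY_G_CM3 * volume_cm3          # mass inside the sphere

# Assumed average galaxy: one hundred milliard (1e11) solar masses.
galaxy_mass_g = 1e11 * SOLAR_MASS_G
n_galaxies = total_mass_g / galaxy_mass_g

print(f"implied number of galaxies: {n_galaxies:.1e}")
```

The result comes out at a few times 10^^8^^ galaxies, in agreement with the "hundreds of millions" stated above.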

There is a sort of "optical horizon" beyond which we cannot yet see because the objects there, to paint a simplistic picture, recede from us at the velocity of light and the red displacement becomes infinite, making these objects invisible to us. However, over much smaller distances, the postulate of a homogeneous Universe is one that lends itself to confirmation by observation. The known Universe whose homogeneity is confirmed by observation depends for its size, within the limits indicated, on the power of our telescopes, on our ability to have them installed outside of the Earth's atmosphere, and on the ability to receive the entire range of electromagnetic waves and cosmic radiations of all kinds---in other words, on progress in the new revolution in astronomy. The latter, as we shall immediately see, also determines answers to some other basic cosmological questions.
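The scale of that "optical horizon" can be sketched from Hubble's law, v = H₀d: the horizon lies roughly where the recession velocity formally reaches the velocity of light, i.e. at d ≈ c/H₀. The Hubble constant of 75 km/s per megaparsec used below is an assumed illustrative value of the order accepted at the time this was written.

```python
# Rough estimate of the distance to the "optical horizon", d = c / H0.
C_KM_S = 3.0e5    # velocity of light, km/s
H0 = 75.0         # Hubble constant, km/s per megaparsec (assumed value)
MPC_LY = 3.26e6   # light years per megaparsec

d_mpc = C_KM_S / H0    # horizon distance in megaparsecs
d_ly = d_mpc * MPC_LY  # the same distance in light years

print(f"horizon distance: {d_ly:.1e} light years")
```

This gives about 13 milliard light years, of the same order as the seven to 14 milliard years discussed later for the age of the expansion.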

Under the latter heading comes the problem of the finity or infinity of the Universe. At this point, we must go back to the brief remarks made on the relativity theory earlier and extend them somewhat. To Einstein, gravity is a change in metrics, an advance from the Euclidean to the non-Euclidean properties of time and space, a characteristic of curved time and space. As one moves through space, one comes across the local gravitational fields of the planets, stars and galaxies, i.e. instances of curved time and space which cause the world line of a body to become curved, just as on the two-dimensional surface of the Earth bumps, mounds, hills and mountains bend the trajectory of a body moving over the Earth's surface. Apart from such local curves, there is the general curvature of the planet's surface. By the same token, is the Universe, apart from local gravitational fields, characterised by a comparable general curvature? If time and space as a whole were characterised by such a curvature, the motion of a body leaving a particular point for the cosmos at a particular moment in time would terminate in the same point at the same moment; its world line would be closed in the same way as one who travels around the world without changing his direction arrives at the point of his initial departure. However, while a closed line in space is not in conflict with physical axioms, a closed world line and the arrival of a space traveller at the same point at the moment in time when he left it is a physical impossibility. Accordingly, Einstein came to the conclusion that time is not curved, whereas space is. A body moving freely in the Universe, despite any local fields affecting its direction of travel, will describe a closed line of a length depending on the space curvature and arrive at the point of its initial departure. This, however, will occur after milliards of years: there is no return to the same moment in time. This structure of a four-dimensional world, where the space dimensions are curved while time is not, recalls the surface of a cylinder, which is rectilinear in one dimension (parallel to the axis) and curved in the other, transverse dimension. The world of Einstein is accordingly called a cylindrical world.

That is a closed model of the Universe: it has a finite volume and the trajectory of a freely moving body therein cannot be infinitely long. This finite Universe, however, is not the finite Universe of Aristotle: it has no limiting boundary. Nor is it the model of an island of stars in a boundless ocean of empty space: this universe has no bounds, yet it is limited. A graphic illustration is provided by going from a four-dimensional world to a two-dimensional surface: the surface of a sphere has no bounds, yet it is limited in area, and it is impossible to draw a geodesic line of infinite length on it.

This idea of universal space, already familiar to us, is not the only one conceivable. In the above case, it is visualised as a spherical surface and the geometry of the world is the Riemann geometry: a line drawn through a point outside of another line will necessarily cross the latter, the sum of the angles of a triangle is greater than the sum of two right angles, etc. The geometry of the world, however, may well be different. With a saddle-shaped surface, one will see that the Lobachevskian geometry becomes applicable: any number of lines may be drawn through a point outside a given line which will not cross the latter; the sum of the angles of a triangle is less than the sum of two right angles; lines perpendicular to the same third line diverge. If universal space is a three-dimensional analogue of the latter type of surface, it is unlimited. It will also be unlimited if space is not curved, if its two-dimensional analogue is a plane.
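The contrast between the two geometries can be made tangible on a two-dimensional surface with Girard's theorem, by which the angle sum of a spherical triangle exceeds two right angles by the triangle's area divided by the square of the radius. The sketch below works through the "octant" triangle bounded by the equator and two meridians a quarter-circle apart; everything in it is standard geometry rather than anything taken from the text.

```python
import math

# Girard's theorem: angle sum of a spherical triangle = pi + area / R^2.
R = 1.0
octant_area = 4.0 * math.pi * R ** 2 / 8.0  # one eighth of the sphere's surface

angle_sum = math.pi + octant_area / R ** 2
print(f"angle sum: {math.degrees(angle_sum):.0f} degrees")  # 270, not 180
```

On a saddle-shaped surface the correction enters with the opposite sign, so the angle sum falls short of two right angles, as the Lobachevskian case above requires.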

If the world is characterised by some sort of space curve, this curve may be constant or vary with time. The latter supposition was the basis of the frequently referred to model of an expanding Universe first advanced by Friedman in 1922. The reader will remember that the subsequently discovered red displacement of the spectra of distant stars confirmed Friedman's model. The nature of the expansion of the Universe remains an open question: we cannot say, at this stage, whether this expansion is an irreversible process or whether the Universe pulsates and the expansion cycle will at some future time be followed by a compression cycle. The answer to this question will be determined by data on the average density of matter in the Universe and, generally, on fresh astronomical and astrophysical observations. It may be suggested that the question will be conclusively answered in the last third of this century, i.e. by the year 2000.
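In the relativistic models, which of the two outcomes obtains is governed by a critical density of the order of 3H₀²/8πG: above it, expansion eventually gives way to compression; below it, expansion continues indefinitely. The sketch below uses an assumed Hubble constant of 75 km/s per megaparsec, an illustrative value of the order accepted when this was written.

```python
import math

G_CGS = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
H0_KM_S_MPC = 75.0     # Hubble constant, km/s per megaparsec (assumed value)
KM_PER_MPC = 3.086e19  # kilometres in one megaparsec

h0_per_s = H0_KM_S_MPC / KM_PER_MPC                       # H0 in 1/s
rho_crit = 3.0 * h0_per_s ** 2 / (8.0 * math.pi * G_CGS)  # g per cu cm

print(f"critical density: {rho_crit:.1e} g per cu cm")
```

This comes out near 10^^-29^^ g per cu cm, an order of magnitude above the observed average of 10^^-30^^ cited earlier, which is precisely why the question turns on refining the data on the average density of matter.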

The course of progress of the new revolution in astronomy may provide an answer to another cardinal question of cosmology and cosmogony---the state of the Universe at the time it started to expand.

According to the modern conception of the rate of expansion of the Universe, the latter was a super-dense body some seven to 14 milliard years ago. What was its temperature at that time? In 1946, G. Gamow advanced the model of a hot Universe, the idea of a very high initial temperature. When the expansion process arrived at a medium density equal to that of the atomic nucleus, its temperature was about 10^^13^^ degrees: before that, at greater densities, it was even higher.
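The figure of 10^^13^^ degrees has a simple physical meaning: at that temperature the thermal energy per particle becomes comparable to the rest energy of a nucleon, so that collisions can create nucleon-antinucleon pairs. A sketch with standard constants (none taken from the text):

```python
# Compare the thermal energy kT at 10^13 degrees with the rest energy
# of a nucleon.
BOLTZMANN_ERG_K = 1.381e-16  # Boltzmann constant, erg per degree
ERG_PER_MEV = 1.602e-6       # ergs in one MeV
PROTON_REST_MEV = 938.3      # proton rest energy, MeV

kT_mev = BOLTZMANN_ERG_K * 1e13 / ERG_PER_MEV
print(f"kT at 10^13 degrees: {kT_mev:.0f} MeV "
      f"(proton rest energy: {PROTON_REST_MEV} MeV)")
```

The two energies are of the same order, about 900 MeV, which is why this stage of the expansion marks a natural threshold in the hot-Universe model.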

Data involved in the solution of particular questions related to the initial state of the Universe are closely associated with astrophysical observations and even with research of practical utility. In 1965, the Bell Laboratories, conducting a study of radio interference, discovered a heat radiation which comes to the Earth with the same intensity from all directions. From the nature of this "residual" radiation was deduced the presence of a definite temperature in intergalactic space, which was related to the initial temperature of the Universe as estimated in the "hot Universe" model. Much of the research associating this model with astrophysical data operates with transmutations of fundamental particles, particularly the annihilation of heavy particles and the conservation of neutrinos and some other particles.

The symmetry of the Universe is just as closely associated with the theory of elementary particles. The theory of elementary particles operates with matter---electrons, protons, neutrons, etc.---and antimatter---positrons, antiprotons, antineutrons, etc. Is the Universe symmetrical in the sense that matter and antimatter are present in it in equal proportions? Some theories deduce from the latest hypotheses the concept of a dissymmetrical Universe, the nonexistence of macroscopic concentrations of antimatter: the celestial bodies and galaxies making up the Universe consist of matter. There are other theories which assume the existence of antistars and entire antigalaxies. These include cosmogonic concepts which admit the possibility of an initial ambiplasma (from the Latin ambi---both) consisting of matter and antimatter. At some stage of the evolution of the Universe, matter and antimatter separated in powerful heterogeneous gravitational and magnetic fields without collision between particles and antiparticles. Subsequent evolution includes annihilatory processes that are considered in interpretations of powerful quasar radiation, among others.

A distinctive feature of modern cosmology is its integrated character: the problems of homogeneity, infinity, expansion, initial state and symmetry of the Universe can only be solved within a unified framework. For instance, the admission of an ambiplasma separating without collision of particles has to be tied in with the high initial density of the Universe, a concept stemming from the expansion theory. There are many such relations, which on the whole rule out the possibility of separate approaches to fundamental astrophysical problems. Nor can these problems be solved unless a more general theory of fundamental particles is evolved.

In modern astrophysics, the Einsteinian standard of "inner perfection" is a prerequisite now necessary as never before.

This requirement cannot be met within the framework of the classical ideal of scientific explanation. This ideal consists in reference to a particular scheme of interacting discrete bodies as the ultimate link in the analysis. The new, non-classical ideal of scientific explanation has eliminated the middle links of analysis: it has a close affinity to Spinoza's conception of Nature interacting with itself, and it introduces into science the concept of interacting fields and a self-regulating system of particles whose very existence, rather than their behaviour alone, is the result of the interaction. The new ideal of scientific explanation consists in the inclusion, in the analysis of the being of the Universe, of the existence of certain classes of elementary particles and of the space whose structure and evolution depend on the transmutations of particles and, in turn, determine the course of such transmutations.

A change in the ideal of scientific explanation has at all times been a turning point in the advancement of civilisation. What can the new, non-classical ideal of science give civilisation?

POST-ATOMIC CIVILISATION

In this essay, I would like to take advantage of an opportunity (a small one, yet significant for our prognostication for the year 2000) to give an outline of the post-atomic civilisation. In the paper referred to above on the physics of elementary particles, Bruno Pontecorvo says

that the question as to its practical effect is "almost illegitimate". It is this ``almost'' that I propose to take advantage of.

But first, a remark on the change of stages of civilisation in response to fundamental physical discoveries and generalisations. These stages are not partitioned by boundaries like Cuvier's cataclysms, which destroyed the main features of the preceding period to leave a clear field for each succeeding stage. The change in stages is rather comparable to the change of acts in a play separated by the stage direction "The same. Enter...". In the 21st century the uses of classical and atomic physics will not be minimised or even reduced in number. It may be suggested that they will experience the dynamic effect (increase in the pace of acceleration) of the physics of elementary particles.

Of course, this has always been the case. The universal application of classical electrodynamics and of the classical electronic theory---electrification of production---accelerated and diversified the uses not only of classical thermodynamics, but also of classical mechanics. Atomic power generation resulted in accelerated electrification. Applications of the physics of elementary particles will produce a "resonance effect" in classical and atomic physics.

The main emphasis of the resonance effect will, however, shift. In the atomic age, that emphasis is on power engineering, specifically the inclusion in power generation of extremely concentrated and virtually inexhaustible sources of energy. The time when that trend arrives at its end, producing a situation where expansion of the sources of power is no longer the most pressing problem of science and technology, may be regarded as the end of the atomic age. Whether that occurs in the early or mid-21st century will largely depend on the practicability of controlled thermonuclear processes.

From then on the main scientific and technological problem and the main emphasis of the "resonance effect" will be the concentration of maximum power in minimum time and space regions. Revolutionary opportunities for such concentration will be associated with the processes

250

PHILOSOPHY OF OPTIMISM

PART TWO. SCIENCE IN THE YEAR 2000

251

of annihilation of matter and antimatter. As exotic as they might seem today, these processes may become the scientific and technological point of departure for the post-atomic age, just as the processes of uranium nuclear fission, exotic in the 1930s, have been the point of departure of the atomic age.

By annihilating matter, the ultra-relativistic processes of particle transmutation would release all the rest energy corresponding to the entire rest mass of a given quantity of matter. According to the formula E = mc^^2^^, this energy is c^^2^^ = 9 • 10^^20^^ erg/g, or about a thousand times more than the amount of energy released by the complete fission of a gramme of uranium. To produce a gramme of antimatter capable of annihilation would require more energy than is released therefrom by annihilation. However, by isolating antiparticles, i.e. by separating them from particles, and by precluding their annihilation for a time, we would obtain a storage battery capable of accumulating 9 • 10^^20^^ erg of energy in a gramme of matter: this is, clearly, the ultimate, ideal cycle, the teleological ideal of storage battery specifications rather than the specifications proper. Imagination suggests some sort of a vacuum trap containing isolated antimatter which consists of "antiatoms"---antiprotons and antineutrons surrounded by positrons. A still more hypothetical super-capacity storage battery comes to mind. The reader will remember large-mass particles---either known today or still to be discovered, such as the quarks of Gell-Mann and Zweig or Markov's maximons---which fuse into particles of smaller mass, releasing enormous amounts of binding energy, i.e. producing a tremendous mass defect. If such particles existed they could be employed as storage batteries to hold the energy expended to produce them in a free state. By very intensive interactions and by conversion into particles of smaller mass they would release a part of this energy. This example, however, does no more than illustrate the variety of probable, or at least possible, ways of storing and subsequently releasing energy which, in principle, can approach the level of 9 • 10^^20^^ erg per gramme of matter. Regardless of the uncertainty of specific approaches, the prospect of such energy storage appears probable.
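The arithmetic behind these figures can be reproduced in a short sketch. The annihilation side follows directly from E = mc^^2^^; the fission side uses the standard value of roughly 200 MeV released per fissioned uranium nucleus, which is not taken from the text.

```python
C_CM_S = 3.0e10         # velocity of light, cm/s
AVOGADRO = 6.022e23     # atoms per mole
ERG_PER_MEV = 1.602e-6  # ergs in one MeV
FISSION_MEV = 200.0     # energy per fissioned uranium nucleus, MeV (typical)
U_ATOMIC_WEIGHT = 235.0

# Annihilation of one gramme of matter releases its whole rest energy.
annihilation_erg_per_g = 1.0 * C_CM_S ** 2

# Complete fission of one gramme of uranium-235, for comparison.
fission_erg_per_g = AVOGADRO / U_ATOMIC_WEIGHT * FISSION_MEV * ERG_PER_MEV

ratio = annihilation_erg_per_g / fission_erg_per_g
print(f"annihilation: {annihilation_erg_per_g:.1e} erg/g")
print(f"fission:      {fission_erg_per_g:.1e} erg/g")
print(f"ratio: about {ratio:.0f}")
```

With these values the advantage of annihilation over complete fission works out at roughly three orders of magnitude.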

There are two types of super-capacity storage batteries that can have a revolutionary effect on the nature of future civilisation. One is a macroscopic facility utilising tons, possibly thousands of tons, of antimatter. This facility may be employed in spaceships to convert some of them into very large observatories developing high velocities by annihilation of matter in their booster rockets, while their stored energy will be able to take them to the periphery of the solar system and, to some extent, beyond. We do not have in mind spacemen travelling to the stars or even automatic spaceships approaching the stars and their planets: such journeys are unlikely at least for the early and probably the mid-21st century. In the solar system, however, Man will probably explore a part of the surface---and possibly mineral deposits---of the Moon and the planets nearest to the Earth---Mercury, Venus and Mars. As a storage battery, antimatter used in space rockets, along with and after uranium-powered and possibly thermonuclear rockets, will make inter-planetary vehicles more powerful, thus speeding up the exploration and utilisation of the natural resources of the planets of the solar system.

The leap from the solar system into the stellar world will not come as the continuation of that process. The exploration of the stars of the Galaxy and of extragalactic objects will take place under new conditions, because nuclear and, subsequently, thermonuclear and annihilation-powered rockets will make it possible not only to deliver astronomical and astrophysical observatories to the Moon and the planets of the terrestrial group---Mercury, Venus and Mars---but also to orbit them around the outer planets---Jupiter, Saturn, Uranus and Neptune. These observatories will transmit back data on events which we cannot conceive of at this stage. This will be the third revolution in astronomy, the first dating from Galileo's Sidereus nuncius, and the second from the observation from the Earth and its man-made satellites of optical as well as radio, X-ray, neutrino and other radiations.

Super-capacity storage batteries will probably prove inadequate for inter-stellar travel: inter-stellar ships will utilise energy principles that are not yet clear. None the less, space flights to the periphery of the solar system---and possibly beyond it, though not over inter-stellar distances---will permit astronomical observations to be conducted under new conditions not to be found on the planets nearest to the Earth.

What may be called passive astronomical observations will be supplemented with new and active quests for extra-terrestrial civilisations, unavailable in the 20th century. The spaceships of the 21st century capable of reaching the periphery of the solar system will carry not only observation tools but also equipment for transmitting into space information on our planet and its inhabitants. If there is anyone in the Universe looking for us, a search on our part, intensive in range and volume of information, will maximise the "effective cross section" of converging civilisations.

Although maximised, it will still be very small since it will involve a historical, a human time scale rather than a cosmic one. Yet there is not a grain of messianism in this expectation of contact: it is one of the trends in scientific search, one that in principle has as good a lease on life as, say, the search for quarks. This is a small but not a zero chance of acquiring some very concentrated information whose amount and value are not open to prediction. If we are to invest in a search for such information it will suffice to have the knowledge that these probabilities---reception of our signal in inhabited worlds, reception of their signals here, a rational deciphering of such signals, the value of their information, etc.---are not nil. Man does the same in his quest for hypothetical particles whose existence is not ruled out by some sort of taboos and comes under some non-conflicting concept. But that is not all there is to it: the quest for extra-terrestrial civilisations makes sense even if the probability of receiving a response within a human life-span, not a cosmic time scale, were nil. For the traditional image of an old man planting trees ("They will bear fruit for others") is symbolic of a characteristic of civilisation and progress---an expansion in time and space of the subject in whose interest men work. This trend extends to increasingly more distant generations and increasingly larger numbers of thinking beings. There is no reason why this process should be restricted to the Earth, the solar system or even the galaxy, or to the time scale of the life of the terrestrial civilisation.

We do not intend to enter into a detailed discussion of the problem of extra-terrestrial civilisations. The keynote of prognostications for the year 2000 consists in the association of scientific and technological prospects with the choice of optimal economic plans, with the optimal structure of investment in the economy, culture and science, as well as in a certain plausibility of such prognostications. The atomic age covers the decades in which one cannot merely entertain the idea of possible shifts in technology and economy but must also draw comparisons between the potential benefits of particular shifts, identify the most probable shifts and the optimal economic structures to suit them. Within these limits the prognostication for the post-atomic civilisation has a single objective: to show that the atomic age is the cradle of a new stage of scientific and technological progress, a stage that cannot yet be seen today in terms of the choice of the economically optimal means of its realisation.

Accordingly, the concept of post-atomic civilisation includes shifts which are indefinable either in nature and trend, or chronologically. Not even the beginning of a post-atomic civilisation can be timed for the opening decades or the middle of the 21st century. All that can be surmised, dealing with half-centuries rather than mere decades, is that after controlled thermonuclear reactions have supplied mankind with a virtually unlimited source of power the central task will be one of miniaturisation of power sources: in that sense super-capacity storage batteries are destined to play the role of practical utility units for the physics of elementary particles, a role comparable to that of uranium fission for nuclear physics, or that of electromagnetic induction for classical electrodynamics.

This approach to the application of the physics of elementary particles to energy storage consists in developing highly compact instruments in which the energy of annihilation will be converted into electrical, thermal, mechanical or chemical power and will produce powerful electromagnetic fields, high voltages, temperatures, pressures, velocities of endothermic chemical reactions on millimetre and progressively smaller scales. This high-energy miniaturisation may achieve micrometric and submicrometric dimensions, with resultant revolutionary effects on manufacturing technologies. In its own day and time, the application of electric power made it possible to integrate the electric engine shaft with the tip of the worktool---drill, cutter, etc.---thus eliminating long trains of power transmission. In our day, i.e., in a broad sense, in the period under prognostication which includes the end of the current century, quantum electronics has made it possible to bring the energy of a variable electromagnetic field directly to the workpiece; in an increasingly greater number of operational steps this eliminates the need for an electric engine to convert electrical power into mechanical work or for an electric furnace to convert it into thermal power. Photons can be made to produce a direct effect even on the molecular level.

Power generation, however, is not yet open to miniaturisation. Power is concentrated in lasers, which must be supplemented with electric transmission systems to feed their primary sources of radiation. Where the energy of annihilation is used it is the source of power itself that is miniaturised: an instrument the size of a few cubic millimetres can be made to hold an amount of energy on the order of tens of thousands of kilowatt-hours without the need for feed cables or optical energy transmitters.
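The figure of tens of thousands of kilowatt-hours in a few cubic millimetres can be checked against E = mc². The following sketch is an editorial illustration, not a figure from the book: it assumes a charge of about one milligram of antimatter annihilating with an equal mass of ordinary matter.

```python
# Back-of-envelope check: annihilating m kilograms of antimatter with
# an equal mass of ordinary matter releases E = 2 * m * c^2.

C = 2.998e8          # speed of light, m/s
KWH = 3.6e6          # joules per kilowatt-hour

def annihilation_energy_kwh(antimatter_kg):
    """Energy released, in kWh, when antimatter_kg of antimatter
    annihilates with an equal mass of ordinary matter."""
    return 2 * antimatter_kg * C**2 / KWH

# One milligram of antimatter -- a speck that, with its containment,
# might plausibly occupy a few cubic millimetres (assumed figure):
energy = annihilation_energy_kwh(1e-6)
print(f"{energy:,.0f} kWh")   # on the order of tens of thousands of kWh
```

A milligram-scale charge indeed yields roughly fifty thousand kilowatt-hours, which is the order of magnitude claimed in the text.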

Two kinds of processes occur in Nature, which were discussed in the essay on information. The first category includes high-energy processes characterised by the absorption of large amounts of energy and relatively low increments in entropy and negentropy. Examples are the evaporation of water and rain or snowfall in the course of annual circulation, the accumulation of energy in chlorophyll, the release of energy by combustion, as well as all the other processes powered by the solar energy on a macroscopic scale. Classical energetics consists in the utilisation of just such processes. This category includes, with some reservations, the release of nuclear energy although that process is accompanied by a much greater change in entropy and is independent of the Sun.

The second category includes what may be termed high-entropy processes. These consist in appreciable changes in entropy and, consequently, negentropy in response to very small changes in the levels of transmitted energy. An example of the latter category, taken from G. Thomson, was given in the essay on information---the arrangement of a pack of playing cards, which requires less energy than the amount released by the combustion of a molecule of paraffin.
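Thomson's comparison can be given rough numbers. In this sketch the paraffin figures (the species C25H52, a molar mass of about 352 g/mol and a heat of combustion of about 46 MJ/kg) are assumed values supplied for illustration, not taken from the book.

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
N_A = 6.022e23        # Avogadro's number, 1/mol

# Thermodynamic cost of ordering a shuffled 52-card pack at room
# temperature: Delta S = k_B * ln(52!), minimum work ~ T * Delta S.
delta_s = K_B * math.lgamma(53)          # ln(52!) via the log-gamma function
work_cards = 300 * delta_s               # joules, at T = 300 K

# Combustion energy of a single paraffin molecule, taking C25H52
# (~352 g/mol) and ~46 MJ/kg as assumed figures:
paraffin_per_molecule = 46e6 * 0.352 / N_A   # joules

print(f"ordering the pack    : {work_cards:.1e} J")
print(f"one paraffin molecule: {paraffin_per_molecule:.1e} J")
```

Under these assumptions, ordering the pack costs a few times 10⁻¹⁹ J while burning a single paraffin molecule releases a few times 10⁻¹⁷ J, so the card arrangement is indeed far cheaper in energy while producing a large, organised change in entropy.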

In Nature, high-entropy processes belong to the DNA and RNA molecules and to the brain of the higher animals, the highest negentropy being found in the human brain. In technology, these belong to cybernetic devices and communications technology. High-entropy processes are those involved in the formation and transmission of information. In production as a whole, the high-entropy processes occurring in the human brain or in cybernetic devices imitating it control high-energy processes. Their function is that of a cargo dispatcher who writes the destination on, say, coal cars: the writing requires little energy, contains much information, and makes it possible to create high negentropy. The latter, however, is merely initiated by the writing: it is realised only by actual shipment operations performed according to the written instructions.

Let us suppose now that the fuel is not coal but some substance which contains as many calories in a cubic centimetre as an entire coal train. The energy required to carry these cubes to their destination will not be much greater than that which goes into the writing of a destination and the forwarding of invoices. From this, it will be clear that the miniaturisation of power transformation and transmission by means of super-capacity storage batteries would produce a change in the relation between information and power generation, between high-entropy and high-energy processes.
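The supposition of a fuel cube matching an entire coal train is not fanciful for annihilation fuel. A rough comparison, in which the train size, coal heating value and the cube's composition (half a gram each of matter and antimatter at unit density) are all assumed figures for illustration:

```python
C = 2.998e8                      # speed of light, m/s

# One cubic centimetre holding 0.5 g of antimatter plus 0.5 g of
# ordinary matter (density ~1 g/cm^3 -- an assumed figure);
# total annihilated mass is 1 g:
cube_energy = 1e-3 * C**2        # joules

# A coal train of 3,000 tonnes at ~29 MJ/kg (assumed typical figures):
train_energy = 3e6 * 29e6        # joules

print(f"annihilation cube: {cube_energy:.1e} J")
print(f"coal train       : {train_energy:.1e} J")
```

Both come out near 10¹⁴ J, so a single cubic centimetre of annihilation fuel would indeed carry the energy of a whole coal train.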

This is not to say that miniaturised power generation technology can be made automatic without any high-entropy processes to control it. The sorting out of invoices and the pasting of destination labels on power cargoes will probably become superfluous: it will be possible to deliver the cargoes themselves instead of the invoices. Translating negentropic information into practical effect becomes almost as easy as its reception. Yet, in general, the formulation of long chains of complex high-entropy information must be kept separate from high-energy processes. These chains permit a multitude of calculations to be made and the optimal scheme of high-energy processes to be selected before the latter actually occur. However, with super-capacity storage batteries the choice of the optimal scheme may include high-energy processes. Where the high-entropy, i.e. low-energy, modelling is too complex, a cybernetic device is capable of causing a high-energy process, evaluating its results and producing a certain optimal solution. Thus, the cybernetic device will incorporate experimental units. Generally, a cybernetic mechanism, being a high-entropy device with or without particular high-energy units, will be supplemented with a cybernetic mechanism with built-in super-capacity high-energy storage batteries.

The latter will probably be incorporated in muscle-imitating systems. The essay on molecular biology discussed power units comprising artificial polymers capable of motor reactions. Built into such an artificial muscle, a storage battery which requires virtually no recharging for decades or even hundreds of years would make such mechanisms independent of outside power supply. Miniaturised to millimetre and smaller dimensions, they may be designed to incorporate a complex system of independent muscles, each associated with a system of artificial receptors. A polymer-storage battery ``organism'' may be made to include hundreds or thousands of such muscles, and the complexity of its functions will be practically unlimited.

In medical and physiological studies, important effects may be achieved by building into living organisms storage batteries with the ability to function for tens of years and to produce a well-regulated, large-scope system of electrical, thermal and mechanical effects (artificial heart, lungs).

The list of possible applications of super-capacity storage batteries could be extended indefinitely---it is but a matter of imagination. Yet imagination has a modest role to play in this book: it is limited to the construction of arbitrary illustrations of those prognostications which follow logically from modern trends in science and make it possible to identify the eventual effect of the latter. The foregoing discussion of super-capacity storage batteries is no more than an illustration of the actual trend in the modern physics of elementary particles. The modern physics of elementary particles permits an increase in natural negentropy on the level of time and space units of the order of 10⁻¹³ cm and 10⁻²⁴ sec. These units, which may be several orders smaller, are probably the site not merely of continuous motion regulated by relativistic causality but rather of transmutations, i.e. changes in the existence rather than in the behaviour of particles of various classes. At that level, when antimatter forms, negentropy may increase in a manner more convenient for practical use.
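The two scales paired here are not independent: 10⁻²⁴ sec is roughly the time light needs to cross 10⁻¹³ cm, the characteristic size of a nucleon. A one-line check, assuming only the value of c:

```python
C = 2.998e10         # speed of light, cm/s

length = 1e-13       # characteristic hadronic length, cm
crossing_time = length / C   # seconds for light to cross it
print(f"{crossing_time:.1e} s")
```

The result is a few times 10⁻²⁴ sec, which is why the two units always appear together as the natural scales of elementary-particle processes.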

Yet it does not automatically follow that negentropy on the level of progressively smaller time and space regions implies an advance of civilisation to a higher stage. The terms "relativistic civilisation" and "atomic civilisation", just as the terms ``ultra-relativistic'' and ``post-atomic'' civilisation, suit this concept of negentropy and civilisation. How legitimate is this concept?

The concept of civilisation is inseparable from the concepts of growth, progress and of a specific characteristic of Man, which came on the scene at the same time as Man did and which becomes increasingly more important as the time of Man's appearance on the Earth grows more distant. The definition of civilisation depends on the definition of Man insofar as he differs from Nature.

That Man is different from Nature does not imply that he is outside Nature; it means, rather, that Man's existence is not merely governed by the general laws of Nature, mechanical, physical, chemical and biological, but that he also puts mechanical, physical, chemical and biological processes to work to achieve goals set by the thinking spirit. The liberation of Man from purely biological adaptation to his environment, an advance to a specifically human mode of adaptation, to a purposive subordination of the forces of natural environment is civilisation, which appears with Man, whose growth marks the course of Man's departure from the time of his appearance on the Earth, and whose state is, in every particular epoch, a measure of the interval separating that epoch from Man's genesis and civilisation. The growth of the inherently human element, of that aspect which is not to be found in Nature as opposed to Man, the ``humanisation'' of Man, his liberation from subordination to Nature, his ``de-bestialisation'', the increase of the ``human'' element in Man---all that is an integral definition of progress.

It follows from the above that an initial definition of civilisation must specify that variable whose growth is a measure of the progress of civilisation, and whose non-zero value is an indication of the emergence of the human species. That variable is the sum total of natural forces directed by Man's purposive activity, subordinated to Man's goals, and organised so as to translate into reality some sort of idea, an image pre-existing in Man's consciousness. Accordingly, civilisation is as old as labour, purposive activity and the use of tools. Man---separate from Nature and having subordinated the forces of Nature to achieve some purpose pre-existing in his consciousness---is a tool-making animal.*

Tools are mechanical means used to put to work for Man's purposes forces exceeding Man's physiological capability (lever), to produce their effect at a distance beyond the immediate reach of Man's hand (stick, throwing stone), or to exert a pressure beyond the ability of the human hand (sharpened stick, blade). Next comes the range of natural forces used to produce temperatures over and above those of the human organism (fire). Next, human interference in the spread of useful plants (plant cultivation), in the conditions of vegetation (land irrigation), and the use of the potential and kinetic power of water in the balance of purposively employed power (water wheels), and so on.

Clearly, one of the integral indices of civilisation is the sum of the forces of Nature purposively organised by Man, or rather the ratio between such sum and Man's own strength which is a part of natural forces. This index is proportional to the productivity of labour, the principal economic index of civilisation. Another indication of progress is the pace of advance to a more expedient organisation of natural forces, the continued liberation of Man from subservience to the blind forces of Nature, from the power of biological selection, the continued humanisation and de-bestialisation of Man. The underlying force of this process is labour which replaces biological adaptation to Nature by Man's adaptation of Nature to serve predetermined objectives. Later, Man begins to alter the nature of operational steps: in other words, he introduces deliberate changes designed to achieve previously determined goals into spontaneously developed technologies. The objective of the inherently human interference at this stage is not merely the result of a particular operational step but also its specifications, not merely "What?" but also "How?" Man considers a range of manufacturing processes, compares the alternatives and arrives at certain general concepts.

These are concepts of natural science. Meeting manufacturing specifications may stem from tradition: a search for new specifications stems from knowledge of the inner causal mechanism of events. A production technology, which assures continued progress, is an applied natural science. Technological progress is made possible by the application of the knowledge of natural science and by the search for structures and processes characterised by the highest measure of consonance not merely with technological standards but with physical schemes, e.g. a search for maximum efficiency.

With time, such quests produce a systematic and unbroken search for scientific truths that have no immediate applied value. That is a very important stage in the process of humanisation: Man liberates himself from the power of the immediate requirements of economic production. The key factor becomes a systematic ``non-profit-oriented'' scientific research. ``Systematic'' means that the sequence of research steps is determined by the inner logic of science, that, ideally, it does not depend on random or outside impulses.

Penetration into progressively less profit-oriented (i.e. unrelated to any immediate effect on the level of economic production and its dynamics, its progress) fields of science is associated with an expansion of the hierarchy of ordered systems known to Man, with a study of the structure and states of macroscopic bodies, molecules, atoms, nuclei and finally subnuclear particles.

* Karl Marx, Capital, Vol. I, Moscow, 1972, p. 175.

The reader should be reminded here of negentropy and noozones, zones of purposively ordered being, discussed in Part One of the book. Science detects the order, the laws of being, discloses its harmony, its negentropy which permits expedient use of natural entropic processes. It finds, and creates, temperature gradients in the random motion of molecules, and uses these gradients to turn such random motion into a well-ordered unified motion of large ensembles of molecules, into the motion of macroscopic bodies. That is conversion of heat into mechanical work. Creating temperature gradients, i.e., increased negentropy, requires consumption of energy in usable form. For instance, an increase in negentropy due to the higher temperature of steam or gas in a heat engine (an increase in the temperature gradient between the boiler or cylinder and the condenser) eliminates the gradient between the energy present in the fuel and the energy of thermal motion in the surrounding medium.

Technology utilises the negentropy present in Nature and increases negentropy in purposively constructed man-made systems. Negentropy in such systems means precisely a purposive reorganisation of natural forces. Advances to progressively smaller structures producing increasingly greater negentropy are key landmarks in the progress of science and technology. Both the civilisations of the past and modern civilisation have wielded as their scientific and technological potential large amounts of accumulated power, large amounts of negentropy. Whatever the natural sources of power---water level gradient (hydraulic power generation), energy concentrated in fuel (heat engineering), or nuclei with a lower specific binding energy than in other nuclei (atomic power engineering)---their utilisation has always consisted in producing gradients in temperature, gravitational or electric potentials over relatively large spatial regions. Compared with a post-atomic civilisation, both the past and modern times are periods characterised by low spatial concentration of accumulated power.
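The dependence of heat engines on temperature gradients described above has a classical quantitative form, the Carnot bound: the fraction of heat convertible into work between reservoirs at absolute temperatures T_hot and T_cold is at most 1 − T_cold/T_hot. A minimal sketch (the 600 K / 300 K figures are illustrative, not from the book):

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum fraction of heat convertible into mechanical work
    between reservoirs at absolute temperatures t_hot and t_cold."""
    return 1 - t_cold / t_hot

# A boiler at 600 K exhausting to surroundings at 300 K:
print(f"{carnot_efficiency(600, 300):.0%}")   # 50%
# With no gradient at all, no work can be extracted:
print(f"{carnot_efficiency(300, 300):.0%}")   # 0%
```

The second line makes the text's point exactly: where the gradient vanishes, the random thermal motion yields no ordered macroscopic work at all.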

Precisely this concentration is the physico-technological key to the progress of civilisation and the increasingly more expedient organisation of natural forces. If labour is the basis of civilisation, increase in the productivity of labour is the key to its advancement. Labour is also, and in the first place, a purposive activity, a purposive ordering of Nature. This ordering process consists in the utilisation of natural negentropy to achieve increased negentropy under a preconceived plan---the feature that distinguishes the worst architect from the best bee---and a concentration of accumulated usable energy.

Whether it occurred in the forests of the coal age or milliards of years earlier at the time when elements formed with differing specific nucleon binding energies, the natural increase in negentropy is the source of the forces of Nature which are increasingly, as Man penetrates into progressively smaller spatial regions, converted into usable energy in Man-ordered purposive systems. Starting with quantum electronics, we are involved with microscopically ordered systems in which there is a coordinated movement of electrons to lower orbits within the atom. Yet the energy levels involved here are still low. In that sense, systems of antinucleons and positrons and other systems of antimatter are a revolutionary stage in the concentration of usable energy, a new stage in man-made negentropy in the microcosm. At this level, the purposive organisation of natural forces extends to interactions responsible for particle transmutation.

The nature of these interactions is part of what are today the most fundamental problems of science. The fact that these problems are formulated at all adds to the intellectual potential of science and changes the style of physical thinking.

The concept of "style of physical thinking" was first suggested in the early 1950s by Pauli and Born, who used the term to describe the relatively stable characteristics of physical theories, which determine or at least restrict probable prognostications in respect of the future development of physics.* The most revolutionary changes in the style of physical thinking will probably coincide with changes in scientific thinking in general and, more significantly, with certain essential changes in civilisation.

* M. Born, "The Conceptual Situation in Physics and the Prospects of Its Future Development", The Proceedings of the Physical Society, 1953, Vol. 66, Part 6, Section A, p. 501.

What are the specific features of the modern style of physical thinking?

The first of these features is an integral approach to particular problems, a revision of the basic picture of the world to arrive at the solution to the most urgent practical problems. It has been mentioned earlier that modern physics is faced with the very practical problem of eliminating the physically meaningless infinite values of mass and charge arising in any consideration of the interaction of a particle with vacuum. Elimination of these infinite values, however---a physically meaningful elimination, too, not merely a mathematical formula---requires solutions to such basic problems as the unification of the theory of relativity and of quantum mechanics into a single theory of elementary particles.

The integral characteristic of the modern style of scientific thinking goes parallel with the search for definitive results. The ancients had a broad integral concept of the world and made some inspired guesses in natural philosophy that embraced all of the Universe. Yet, those were mere guesses; they were neither definitive nor, for that matter, meant to be so.

Starting with the latter half of the 17th century, scientific conclusions become increasingly more exact and experimentally verifiable---at least that became the scientific goal.

The modern scientific style is a very special synthesis of the breadth of ancient times on the one hand and of classical definitiveness and experimentally verifiable authenticity on the other: science is aspiring for the universal harmony of being, yet its tools are experimental and its findings are embodied in strictly quantitative terms which are sometimes inseparable from the statement of uncertainty, from ascribing precise and authentic values to probabilities of events rather than to actual events. It may be that the modern scientist, like the scientist of the end of this century, will not improve on Aristotle in subtle thinking or the ability to embrace in thought the entire Universe: ancient thought will always remain the scientific ideal in that sense. Yet, the modern scientist finds solutions to the very basic problems of space and the microcosm by observation and experiment, i.e. essentially definitive solutions.

Despite the fact that these specific characteristics of modern physical thinking are positively formulated and that the way in which our experimental potential will permit them to be realised is relatively authentically known, very little is yet known about the specific forms into which they will be translated, and about the practical results of that development. The anticipation of super-capacity storage batteries exemplifies actual tendencies, yet one cannot vouch that the early half of the 21st century will be described as the subnuclear age or the age of subnuclear storage batteries with as much right as the latter half of this century has been called the atomic age. We know that there are other names competing with the appellation "atomic age"---the age of cybernetics, the age of polymers, etc. Even if the name "subnuclear age" is warranted, there will probably be others to compete with it, and perhaps successfully, too. Thus, exploration of the planets and the first authentic information on extra-terrestrial civilisations could give a very radical new turn to the civilisation of the 21st century.

Let us return to the effect of fundamental science on the characteristics of civilisation. The humanisation of life---the main indication of the progress of civilisation discussed above---is not restricted to Man's liberation from the power of biological, both pre-human and extra-human, laws. Humanisation includes Man's liberation from the stranglehold of uncontrolled social forces, "a leap from the kingdom of necessity to the kingdom of freedom". This freedom implies, in the first place, ability to transform Nature, manufacturing technologies, science and economy; it implies transformation of the world and has a dynamic aspect. Man acquires an increasingly greater measure of freedom not only by the elimination or neutralisation of outside forces threatening his stationary condition. Man acquires an increasingly greater measure of freedom positively and actively: he puts natural forces to work for his own benefit, he organises them to achieve a pre-formulated ideal. This ideal increasingly consists not merely in maintaining a certain level of what Man obtains from Nature, but also in a certain rate of acceleration of the growth of that level and in the growth of that rate. Man's freedom as understood by Spinoza and discussed in Part One of the book, in the essay "Labour and Freedom", is nowadays concretely expressed as the possibility of an accelerated growth of noozones---the spheres of purposively ordered natural processes.

It is precisely this dynamic aspect of Man's liberation from natural forces and their subordination to Man that permits a direct link between humanisation as the mainstream of progress and basic science, the foundation of the highest dynamism in production and in civilisation generally.

Let us now take a look at Man as an integral organism characterised by a complex system of physiological processes. How does that system benefit from fundamental science?

The reader will remember the prognostication of controllable physiological processes on the cell level, i.e. the ability to influence both the statistical ensemble of cells and a specific cell or a small, statistically unrepresentative number of cells. The highly concentrated laser beam enables the physiologist or the doctor to work on specific targets rather than on whole areas. Along with photonic fluxes, relativistic particles of non-zero mass can be put to work: because the life-span of a particle varies with its velocity, a particular energy may be imparted to a particle to produce a life-span that would permit it to reach a predetermined point in the living tissue and decay beyond it.
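The velocity-dependent life-span invoked here is relativistic time dilation: a particle with proper lifetime τ₀ travels on average L = γβcτ₀ in the laboratory before decaying, so its range is tunable through its energy. A sketch with assumed charged-pion values (rest energy ≈ 139.6 MeV, τ₀ ≈ 2.6 × 10⁻⁸ s); this is only an illustration of the formula, not a description of actual medical beams:

```python
import math

C = 2.998e8           # speed of light, m/s

def decay_length(rest_energy_mev, total_energy_mev, proper_lifetime_s):
    """Mean distance in metres travelled before decay by a particle
    of given rest energy accelerated to the given total energy."""
    gamma = total_energy_mev / rest_energy_mev   # time-dilation factor
    beta = math.sqrt(1 - 1 / gamma**2)           # velocity as fraction of c
    return gamma * beta * C * proper_lifetime_s

# A charged pion at two different beam energies:
for e in (200.0, 1000.0):
    print(f"{e:6.0f} MeV -> {decay_length(139.6, e, 2.6e-8):6.1f} m")
```

Raising the energy stretches the mean decay length severalfold, which is precisely the handle the text describes: choosing the energy chooses how deep the particle penetrates before it decays.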

The statistical character of laws remains valid for such processes: what is determined is not an actual event---a photon or some other particle hitting a cell or cells---but rather the probability of such event. The latter becomes a reality subject to a large enough number of attempts. The statistics in this case, however, concerns particles rather than the cells: the flux of particles contains a large, statistically representative number of particles, which is the guarantee that a particular cell will be eventually hit.
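The "guarantee" in this paragraph is a statement about large numbers: if each particle independently has a tiny probability p of striking the target cell, then among N particles the probability of at least one hit is 1 − (1 − p)^N, which approaches certainty as N grows. A small sketch with an invented per-particle probability:

```python
def hit_probability(p_single, n_particles):
    """Probability that at least one of n_particles strikes the target,
    each hitting independently with probability p_single."""
    return 1 - (1 - p_single) ** n_particles

# Suppose a single particle has one chance in a million of striking a
# given cell (an illustrative figure); the flux decides everything:
for n in (10**5, 10**6, 10**7):
    print(f"N = {n:>8}: P(at least one hit) = {hit_probability(1e-6, n):.3f}")
```

At N = 10⁷ the probability is essentially 1: the statistics lies in the flux of particles, not in the ensemble of cells, just as the text says.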

Radiation genetics discussed above operates in much the same way. In this case, the statistical ensemble of particles acts on a single organism rather than on a statistical ensemble of organisms in which the desired traits are to be selected over several generations. The eradication of hereditary diseases in the remaining decades of this century is contingent upon medical genetics.

Mankind has still another hope which has a chance to become a practicable prognostication---eradication of cancer within the next few decades. It is conceivable that the discovery of highly effective cancer treatments will come from radiation therapy.

Cancer eradication may prove to be the result of some more general discoveries which will reveal the entire mechanism controlling histological processes in the organism. Control of these processes will put an end to the naturally uncontrollable development of tissues resulting in malignant tumours.

Another task facing medicine and physiology is the discovery of the mechanism regulating metabolism, and control of the latter. A solution of this problem, even if it does not eradicate sclerosis and related diseases, will at least essentially slow down and reduce them. Here another question logically arises from the foregoing: is there a limit to the step-by-step elimination of the causes of death? This is the problem of immortality. The word "arise" may not be the most suitable here: men have thought of immortality from time immemorial, since the inception of civilisation, since the human race appeared on Earth. The kind of immortality we have in mind is not the local sensation of the immortality of reason, of the content of Man's consciousness, the subject discussed in the essay "Optimism and Immortality" in Part One of the book. What we have in mind is a physiological phenomenon, the immortality of a particular organism, infinite ontogeny. This question is coming onto the scene in a new and quite tangible form. The causes of organic ageing and death have not yet been settled conclusively, nor have any absolute limits to immortality been discovered, i.e. processes which do not inherently lend themselves to termination by the combined effect of chemicals and physical processes on the
level of tissues, cells and molecules. Clearly, the road from the absence of such limits to a theory of ageing and death, and subsequently to the elimination of the latter, is long and possibly purely imaginary. This problem most likely belongs to the 21st century. There is nothing today to refute the idea that the generation of men and women who will live in the year 2000 may be the last or the last but one mortal generation. The decades remaining before that year, however, will probably bring up many facts that will conflict with that idea. What is certain is that a systematic search for the mechanism of ageing will produce many incidental discoveries which will assure a significantly longer life-span.

Let us now turn from the biological aspect of the effect of science on human life to the social problem. Marx associated the objective trends toward socialism with the development of production forces. Lenin saw electrification as the scientific and technological basis of economic and social progress in a classless society. Prognostications of the practical implementation of non-classical science follow in the wake of these ideas. A consideration of the scientific and technological trends of the atomic age shows that the practical application of non-classical science requires a restructured society. For the late 20th century, the principal effect of that development will be a continued accelerated growth of the productivity of labour. This process is realised through a virtually unbroken series of optimisation programmes in national economic balances and structure. Prognostication would be meaningless without optimisation or a planned economy: prognosticating is not prophesying, but the statement of one of a set of alternatives for future development, to be compared with other alternatives in order to arrive at an optimal economic effect. Optimisation requires a practical opportunity for a purposive restructuring of the entire national economy in the light of continuously incoming fresh information on new scientific and technological trends, needs and opportunities. The modern revolution in science and technology produces a maximal effect where the anarchy of capitalist production is eliminated. Modern production translates into practical terms ever new trends which, in turn, on the scale of selected industries and the entire national economy, stem from a virtually unbroken evolution of basic technologies and energetics. Modern science and its applications are implemented in actual production at maximum acceleration in a classless society.

Clearly, scientific progress per se cannot provide the motive force of social restructuring. Social relations, however, remain subject to the development of production forces, and increasingly so as a growing amount of applied science is put to work for production purposes, followed by scientific research proper, which lays down the teleological ideals for applied science, and finally by fundamental research.

Reference may be made at this point to the electrification programme put forward by V. I. Lenin in 1920. That programme was a synthesis of the scientific conception of social development, a conception that held the development of production forces to be the pledge of the inevitable victory of a harmonious social system, with an analysis of scientific and technological progress based on classical science. The modern plan for the restructuring of economic production rests on the same conception of social development and on an analysis of trends in non-classical science.

To conclude, I wish to say a few words on the question that inspired Thomas More in his picture of the happy future in Utopia. It is a very simple question: will men be happy in the future which we can foresee? Man becomes accustomed to conditions which produced a sensation of happiness at the time of their occurrence. The Weber-Fechner law, according to which sensation increases as the logarithm of the stimulus, means that, given a stationary set of factors perceived by Man as producers of happiness, the sensation of happiness will disappear. That sensation is like the electric field which, in the absence of electric charges, exists only while the magnetic field is changing, with the difference that a reduction in the factors does not induce happiness. Man can be happy where what makes him so grows, and grows with an acceleration, too.
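The logarithmic relation invoked here can be written out explicitly. In its standard textbook form (the symbols below are the conventional ones, not the author's):

```latex
S = k \ln \frac{I}{I_0}
```

where S is the intensity of sensation, I the stimulus, I_0 the threshold stimulus and k a constant. If the stimulus I is held stationary, S is constant and its increment vanishes; for the sensation to keep growing at a steady rate, I must itself grow exponentially, which is precisely the point: happiness requires accelerating growth of its sources.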

In a harmonious society the application of non-classical science, which guarantees Man's progressively greater power over, and growing knowledge of, Nature, creates a historically unprecedented situation: everything which makes Man happy, from the simplest joys to the loftiest, such as creativity and cognition of the world, increases with an acceleration, producing a fresh and undiminishing sense of happiness.

This is precisely the answer to the question "What for?", a question identified with the integral process underlying the prognostication for the year 2000. This process was discussed in connection with the economic effect of progress. The answer to the question "What for?", insofar as it relates to a particular technical and economic act, is: to maximise the fundamental economic index. What does Man want to maximise it for? What is the aim of a maximal level, rate and acceleration of the fulfilment of needs, of the achievement of production capabilities, of a maximal breadth and precision of scientific concepts, of all cultural values and of the entire civilisation?

The answer to that question is simple, yet it integrates the infinite complexity of scientific, technological, social and cultural progress. Man must be happy. "That is Hegel, book wisdom and the meaning of all philosophy!"*

PART THREE ECONOMIC CONCEPTION OF OPTIMISM

INTEGRAL GOALS OF SCIENCE

Now we pass from forecasting to planning, from stating the immanent trends of science to active intervention in the process of accumulation, particularisation and generalisation of credible concepts of the world. The essay "De Rerum Natura", devoted to scientific ideals and the most fundamental problems of contemporary science, dealt with the expedient choice of specific conceptions possessing greater credibility and ensuring the speediest attainment of objective truth. It also examined the immanent aims the researcher sets himself in constructing a scientific theory and in choosing from the possible theories the one that is closest to truth. The criteria of such a choice stem from the conception of a world integrated by an objective and at the same time heterogeneous ratio, subject to different laws that cannot be comprehended purely logically, but require for their comprehension ever new comparisons of logical analysis with experimental data. The impact of the scientific ideal on the choice of the specific scientific concept essentially expresses only one thing, viz., the connection between the specific theory and the most general principles underlying the ideal of scientific explanation peculiar to the given epoch, that which Einstein termed the inner perfection of a theory. Aspiration for the scientific ideal, however, also includes the criterion of external confirmation: general principles are related to empirically verified conceptions.

Now we propose to deal with something else: not with the questions of why a certain experiment is conducted, why a certain concept is evolved, or why science moves along certain specified paths that join to form a common route. These are answered by the purely epistemological criteria pointed out above. Now we face another question: why, for what purpose, does science move along its chosen route? A somewhat analogous question with regard to economic development, its speed and acceleration, was posed at the end of the essay devoted to the post-atomic civilisation. However, here, in discussing science and the process of cognition as a whole, we can no longer restrict ourselves to a reference to Man's nature, to his aspiration for happiness, to the necessity not only of enhancing his power over Nature and his knowledge about it, but of accelerating the process of such enhancement. The question "What is the use of science?", which can be termed the "problem of the expediency of science", includes a quantitative aspect: why does society allot a certain portion of its material and intellectual resources to scientific research, and precisely what portion of these resources should society spend on scientific research?

* Heinrich Heine, "Doctrine", Werke und Briefe, Band 1, Aufbau-Verlag, Berlin und Weimar, 1972, p. 319.

As soon as such a question arises, as soon as the concept of a structure of resources, of the ratio between the different investments made by society, takes shape, science appears as part of general purposive social activity; it enters into the balance of social labour, and the determination of the social goal of science becomes an economic problem, a problem of the integral economic effect of science.

We will approach this problem by first making several remarks on the effect of modern science on labour: on the subject of labour---Man himself, on the quality of labour and its content, and on Nature as the object of labour and the totality of material processes that labour arranges in an expedient manner.

Considering science as a component of Man's purposive activity related to other components within a common structure, we do not add to the definition of science a pragmatic aspect. We discuss science only from one aspect, that of economy. The definitions of science as a reflection of the world, as a form of social consciousness and as purposive activity are not definitions of parts, but aspects of an indivisible whole, each aspect being distinguished only in static approximation, as it were, whereas in dynamics, in motion, in the history of science, they are merged and, generally speaking, cannot be separated.

In static approximation the aim of the productive activity of man is consumption in a rather simplified form: there exist certain requirements of Man, a certain structure of demands, and the structure of production must correspond to that of consumption. The demands themselves, in terms of their physiological, psychological, technological etc. content, determine the purposive concrete labour, while the structure of consumption determines the distribution of homogeneous labour among industries. Since science as a purposive activity is labour, it enters into this distribution and in this sense its goal is consumption: in the structure of labour expenditures and the distribution of homogeneous labour, the demand for knowledge, for information, becomes, in the final analysis, commensurable with the demands for food, clothes, electric energy, fuel, raw materials, machines, etc. But as soon as static approximation becomes inadequate, epistemological features interfere with the economics of science, and the very concept of consumption requires a certain generalisation. Science is a reflection of Nature which is infinitely complex and dynamic in its very essence, it is a process by its initial epistemological definition. It is a form of social consciousness, and is impossible without basic norms or attitudes developed by consciousness, but in its development science modifies these norms and attitudes. Science is essentially dynamic, and non-classical science renders this dynamism radical and obvious and brings out its relationship with the radical dynamism of economics. In economic terms, the goal of science is consumption, but if economics is viewed in terms of science with its dynamics taken into account, consumption in the traditional sense, as a fixed structure of social labour, gives way to dynamic consumption.

It is wrong to suggest that in dynamic production, consumption ceases to be a goal and a condition of production. Consumption does remain a goal and a condition of production and reproduction, but it essentially changes its character and structure.

Dynamic consumption, i.e., consumption whose structure does not so much consolidate the initial structure of production as modify, push forward and transform it, clearly manifests components corresponding to those of the fundamental economic index: for the maintenance of the existing level of labour productivity certain needs are to be satisfied; for its growth, others, more complex ones; for its acceleration, still more complex demands, including the disinterested scientific and cultural interests already indicated above, which become such a human need that its satisfaction proves a very great "profit" to the dynamics of production, and the greatest advantage to the acceleration of its indices.

The relative importance of information grows in dynamic consumption. The consumption of information has a rather specific peculiarity. François Perroux once said that, unlike consumer goods, ideas do not disappear in being perceived but are preserved. Moreover, they arouse a "resonance effect": they never disappear, but are confirmed and developed in new ideas. Science possesses this self-accelerating, non-linear ability, and it is this function that produces the most powerful dynamic effect.

Is it possible to combine the components of consumption within a single concept to achieve an integral goal of man's scientific activity, a goal related to the main definitions of science though not covered by them, that links these definitions with something more general embracing not only science, but the evolution of civilisation as well?

The concept of goal, as has been indicated, separates Man from Nature minus Man and from "Man minus Man" (i.e. Man whose existence is fully subordinated to the laws of Nature, without the freedom of choice that is the prerequisite of a goal, plan, labour). "Man minus Man" is Man whose labour is dehumanised. The entire development of civilisation is a consistent extension of Man's expedient activity, his liberation from necessity, from the power of the elemental forces of Nature that know no purpose and of the elemental laws of society. Such a liberation is a feature of civilisation, a measure of the distance covered by Man since he separated himself from Nature, since human civilisation appeared on the Earth.

In Part Two of the book it was pointed out that the productivity of social labour and its derivatives is a natural measure of purposively grouped forces and objects of Nature. Labour consists in such purposive grouping. In science regarded as a reflection of Nature, Man emerges primarily as Homo sentiens, possessing historically developing means for the sensory comprehension of the world. In science regarded as a form of social consciousness, Man emerges as Homo sapiens, possessing developing logical methods for the comprehension of the world. In science regarded as purposive activity, Man emerges as Homo construens, as creative man who changes the natural grouping of the forces of Nature, realises his goals, selects in advance predictable results of objective processes and, accordingly, the initial conditions of these processes.

Is it possible to consider that the general, integral goal of science is the consistent extension of Man's purposive activity, his liberation from the power of elemental forces, the transition from quasi-purposive processes in Nature to purposive ones? Such a goal means subordination to Man of the elemental natural forces, as well as of the elemental blind forces of society, i.e. a "leap from the kingdom of necessity to the kingdom of freedom". This transition is based on the liberation of the very essence of labour, conscious, free and creative activity, from the antagonistic social structure alienating it.

Which concrete goals of science follow from the general integral goal?

They are determined by the modern stage in overcoming the elemental and blind laws of social existence. They are further determined by the successes of natural science and the application of non-classical science, by the penetration of man-controlled processes into the subnuclear regions on the one hand and outer-space regions on the other. The contemporary integral goals of science, which can be realised in national economic planning and which include and are essentially based on the planning of science, are related to Man himself, to his labour and to natural resources. To the degree that "Man himself" can be separated from labour, the goal of science consists in lengthening life, eliminating diseases, and increasing consumption. With respect to the quality of labour, the goal of science consists in continuously transferring the chief content of labour into increasingly more dynamic functions: from the maintenance of established processes to the regulation of alternating loads and regimes, then to the radical change of technological processes, and further, to the change of increasingly more fundamental principles implemented in technology and design.

With respect to natural resources, the struggle for their rational utilisation and the protection of Nature from exhaustion and pollution are the beginnings of a rather general and far-reaching tendency. The ensemble of natural objects under Man's control includes, as has already been said, the spectrum from the subnuclear world to the entire lithosphere, hydrosphere and atmosphere of the Earth and even beyond, i.e. such consciously designed purposive processes as space flights and the propagation of radio signals aimed at extremely remote targets. Simultaneously, the temporal scale of controlled processes is increasing. The scale of the effect that Man's labour exercises on further industrial dynamics predetermines changes not only on a planetary spatial scale but also changes embracing decades and even centuries. That is why labour must now be accompanied by a kind of planetary, age-long calculation. Through his labour, Man initiates and controls planetary and age-long processes of nature. Herein is also embodied the liberation of Man and his labour. In essence, this process is an integral and unified goal of science.

Its concrete modifications are no longer grouped in selected disciplines, which is also an indication of, and a way to, increasing the potential expediency on the Earth. The established disciplines arrange knowledge to reflect the different aspects of Nature, irrespective of the purposive arrangement of its processes, and thus reflect "Nature minus Man". Modern interdisciplinary scientific undertakings, contrariwise, make science a purposive activity. The increase in consumption is an almost indivisible complex of physical-energetic, pedological, geological, biological, molecular-biological and chemical problems. Lengthening life involves physics, chemistry, biology, etc. The transformation of the quality of labour is based first and foremost on cybernetics, i.e. mathematics and physics, but the realisation of the possibilities of cybernetics concerns all subjects. The rational utilisation and protection of Nature is not only a tangle of geographical, geological and biological problems, but also of such problems as the transformation of thorium into uranium in atomic power engineering.

In the sciences of Nature, the goals of science figure as goals, whereas in social sciences goals are determined by consequences, and in the border region---the history and theory of science---by consequences and impulses. In terms of consequences, of causally determined events, and not of goals, natural science is a monologue of Nature, social sciences are a monologue of Man, and the history of natural sciences and technology is a dialogue between Man and Nature.

However, let us revert to the goals of science, to the integral goal pointed out above, to the transformation of labour, its subject, its content, its object. The subject of labour is Man. Under modern conditions, Man primarily has the right to count on maximum longevity and maximum duration of capacity for work. The concluding essay of Part One was devoted to this problem. Here we shall only add a short remark to what has been said.

It may be suggested that in the field of theoretical medicine and, accordingly, in clinical practice, non-classical science is now on the eve of a powerful chain reaction of effective application, similar to that which occurred in energetics in the forties and fifties, and in cybernetics in the sixties. This is not so much linked with the application of relativistic and quantum effects in medicine (though, for instance, relativistic particle irradiation may prove a very important direction) as with the general rise in the intellectual potential, in the methods of mathematical analysis and in experimental possibilities. This rise, in the final analysis brought about by non-classical science, embraces classical fields and problems as well. The objective prognostication of a possible new great rise in modern theoretical medicine, and the integral goal of science, give rise to a certain structural shift: concerted efforts directed toward an essential decrease in, and effective treatment of, cardiovascular, oncological, virus and hereditary diseases.


Man's interests---the chief component of the integral goal of science---are also specified in the concerted effort aimed at solving the food problem: production of synthetic foodstuffs and, as the main channel, a decisive increase in agricultural production efficiency.

Realisation of the second component of the integral goal of science, the changed character of labour, the increased proportion of creative, reconstructing functions in the content of labour, is based on two objective forecasts, one of which is mainly related to theoretical thought, the other to application. What is meant here is the mathematisation of science and, as far as application is concerned, the prospect of rapid and extensive utilisation of computers, which already make it possible to create and reproduce increasingly complicated, practically applicable processes as the structure of these processes acquires a quantitative character. In modern technology (lasers, nuclear and radiation processes), power engineering (atomic energy), medicine and economics, the new methods are, in their nature and scale, accompanied by quantitative-numerical operations, sometimes coinciding with these operations. For this reason theoretical and applied mathematics, which are practically inseparable now because the application of mathematics requires a modification of its foundations, are becoming the initiating force and condition for the transformation of the quality of labour. This synthesis of science coinciding with its application has very few historical analogies; it is virtually unprecedented.

The third component of this goal, the rational arrangement of actual and potential natural resources, is now based on the further evolution of atomic energy, on fast neutron reactors, which necessitates the utilisation of widespread raw materials and renewable fuel in power engineering. But this is only one component of the physical-energetic, chemical-technological, geological, geographical and biological research creating scientific foundations for the protection and transformation of Man's environment.

The connection between the modern goals of science and the transition to non-classical foundations of technology can be seen from any aspect: energetics, technology, the character of labour, the system of utilisation of natural resources.

Let us take, for instance, the technological aspect. The radical liberation of Man from subordination to the laws of Nature, the decisive re-arrangement of objects of the Universe and forces of Nature, pursues the technological aim of producing everything from everything, i.e. obtaining any substances with certain predetermined properties from any initial substances. This task is connected both with energetics, since great power is needed for the regrouping of particles on the nuclear level, and with the system of utilisation of natural resources ("from everything" means: "from substances easily accessible, concentrated in rich deposits not to be depleted for many years to come, and not disturbing the ecological balance when extracted and processed").

``Everything from everything" is not only a regrouping of molecules and atoms---the initial substance may not have the necessary ones. This regrouping involves quantum objects in the subnuclear world---particles in which a combination of corpuscular and wave properties is essential, i.e., a synthesis of the hierarchy of discrete particles and radiation spectrum. Science cannot start towards its modern goals without changing its fundamental basis or including quantum relationships in its initial principles.

A planned scientific and technological revolution is a process of purposive transformation of the subjective and objective components of production, directed towards the optimal prognostication, the one corresponding in the highest degree to the integral goals of the scientific and technological revolution. It follows from the above that the modern scientific and technological revolution must inevitably be an implementation of non-classical science.

The goals of science listed above, which realise its integral goal, demonstrate a very interesting theoretical and practical peculiarity of modern science directly related to the dynamics of its structure. In the national economy, non-classical principles not only call new branches to life, but give an impetus to old branches, causing a resonance that passes from one branch to another with almost no attenuation. In a like manner, new non-classical principles in science not only pose new problems, calling to life new directions of research and new disciplines, but also bring about a reinterpretation of old classical problems.


In terms of the integral goal of science nuclear processes necessitate the study and solution of seemingly remote problems that are quite classical in their nature.

It follows that the results of non-classical science do not lead to a radical reduction of work in the branches where they are not implemented directly. The atomic age has called to life many new branches of science, but it does not eliminate or even diminish the importance of any of the old ones. Non-classical principles not only preserve but increase the volume of research in the traditional branches. The search for an explanation of such a relationship may lead to the correspondence principle: since non-classical conceptions turn into classical ones as the corresponding processes assume scales particularly important for practical life, the evolution of non-classical concepts is inseparable from classical approximation. For this reason an increase in the intellectual potential, and everything connected with it, pertains to all branches of science, everywhere stimulating extensive and profound research.

The general exponential expansion of investment in science apparently must slow down at a certain stage. We wish to recall the humorous illustrations of unattenuated geometrical progression in production cited in the essay on information. When speaking about science, these illustrations could be supplemented by predictions about the number of scientists exceeding the number of people on the Earth, or about the publications of scientific journals covering the surface of the planet with a thick and ever increasing layer. Effective methods will be found for storing and transmitting information which will bring about a further acceleration of scientific progress at lower expense. But for the present this fundamental change can be brought about only provided investments in science are expanded, within conceivable bounds, naturally. As we know, there exists a proportionality between the investments in the different layers of science, with the volume of each layer subject to the general architectural project, including the optimal distribution of the entire volume. A similar proportionality determines the volume of science as a whole. Science as a purposive activity is a part of labour, which is the purposive influence exerted by Man on Nature and on himself. The architectonics of this influence defines both the goals of science and the optimal volume of investments in science, which is a part of a rational distribution of resources, of a rational structure of society's labour efforts.
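The arithmetic behind such runaway-growth jokes is easy to reproduce. A minimal sketch, with purely illustrative figures that are not the author's (a million scientists in 1970, doubling every fifteen years, against a population of 3.7 thousand million growing at two per cent a year):

```python
def crossover_year(start_year=1970, scientists=1e6, doubling_years=15,
                   population=3.7e9, pop_growth=0.02):
    """First year in which the extrapolated number of scientists would
    overtake the extrapolated world population (illustrative figures)."""
    year = start_year
    while scientists < population:
        scientists *= 2 ** (1 / doubling_years)  # doubling every 15 years
        population *= 1 + pop_growth             # compound 2% annual growth
        year += 1
    return year
```

With these assumed rates the crossover lands in 2282, three centuries out; the point of the joke is that the exponential expansion must slow down long before any such absurdity is reached.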

The concepts of structure and optimal volume are fundamental concepts of the theory of planning. They are essential for a switch from the theory of forecasting to the theory of planning, and for lending the conception of optimism its modern and, in particular, its metric meaning: for considering optimism a measure of the correlation between the "is" and the "ought to be", between statement of fact and statement of goal, between forecast and plan.

The metric meaning of optimism will be dealt with in a special essay in this book, "Econometry of Optimism"; here we shall limit ourselves to a single remark. Optimism has a corresponding metrical equivalent, the measure of the correlation of forecast and plan, i.e. a certain greatest value (or the smallest reciprocal value), analogous, for example, to the action integral in mechanics. It characterises, in general terms, a certain system consisting of measurable elements, the optimal structure of the system, the optimal values of the elements giving the greatest integral. As has already been said in Part One of the book, beginning with Galileo optimism became dynamic: Man's hopes embraced not only and not so much the levels as the derivatives, the speeds of change, whereas the ideal of immobile, stable existence ceased to be an optimistic prospect. Accordingly, we are considering dynamic structures in which the optimal features of the elements and their resulting maximum total are related not only to the levels but to the time derivatives as well.
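A toy numerical rendering may make such a "greatest value" concrete. In the sketch below (every name and weight is an assumption of this illustration, not the author's econometry), a trajectory of some welfare index is scored not only by its levels but also by its first and second differences, the discrete analogue of speeds and accelerations of change:

```python
def optimism_score(levels, a=1.0, b=1.0):
    """Score a trajectory by its levels plus its first and second
    differences: a discrete echo of valuing not only states but
    speeds of change and accelerations."""
    growth = [y1 - y0 for y0, y1 in zip(levels, levels[1:])]   # first differences
    accel = [g1 - g0 for g0, g1 in zip(growth, growth[1:])]    # second differences
    return sum(levels) + a * sum(growth) + b * sum(accel)
```

A flat path [2, 2, 2] scores 6, a rising one [1, 2, 3] scores 8, and an accelerating one [1, 2, 4] scores 11: levels, derivatives and accelerations all count, as the passage demands.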

In Nature (in Nature minus Man!) the optimal size and optimal change of elements may be defined statistically, post factum. The correspondence between the environment and the size of a plant population is established as a result of the destruction of the greater part of the latter. In mechanics, the existence and motion of a system is guaranteed ante factum by relationships limiting the number of degrees of freedom. It is a peculiar feature of Man to define ante factum his contribution to the world's evolution through a previously conceived notion of the result of the changes he introduces into the processes of nature. Such a conception appears as the goal, and Man's activity becomes purposive activity, labour. This notion is based on observation and generalisation of natural phenomena, that is, on scientific investigation of nature. Should Man's activity be subject not to a goal but to an analogue of mechanical relationships, should the optimal direction and limits of his activity correspond only to the post factum static or dynamic balance of social production, should macroscopic processes achieve an integral result through a statistical neglect of individual destinies, then labour would be bereft of its most important, genuinely human content: it would be alienated labour.

We have approached the concept that played an essential role in the origin of Marx's economic conception, but did not enter into its final formulation. It seems that the destiny of this concept permits a deeper penetration into its meaning.

The disappearance of ``alienation'' was not a departure from the problem but its solution in the sense of ``sublation'', i.e. modification and preservation in a radically modified form, in the sense of a higher turn of the spiral of knowledge. That which was considered to be a purely philosophical problem proved to be an economic one as well. The economic problem has not forfeited its philosophic character. In economics, in production, in labour, the fundamental problems of philosophy are solved, including, first and foremost, the problem of the authenticity of knowledge and, in connection with it, the problem of Man's existence. Hegel's path from political economy (his criticism of Adam Smith and his other early interests in political economy) to philosophy was a refusal to resolve concrete problems concretely. Marx's path was a concrete solution of seemingly abstract problems, which were, in essence, a tangle of those damned questions that had for centuries been torturing the minds and the conscience of mankind. Beginning with his Phenomenology of Mind, Hegel departs from social problems, from the economic contradictions that had so much interested him somewhat earlier. He departs from them into an ivory tower, a tower of pure thinking, to the developing absolute spirit which is embodied in immobile nature.

Kierkegaard, who considered Hegel's path illusory, saw no real way of reconciling Man with Nature. He proclaimed their fatal irreconcilability. There is nothing in Nature akin to Man: Nature is immortal and infinite, while Man is mortal and limited in space. Kierkegaard saw nothing in 19th century science but absolute laws alien and hostile to mortal Man possessing mere local existence, for they ruled out the individual autonomy of Nature's finite elements. This pessimistic evaluation is sometimes repeated nowadays. But its rehashing differs from the really tragic, lonely moan of the Danish thinker mainly in its coquettish and exceedingly complacent display of affected pessimism (how can we help recalling Hegel's Phenomenology of Mind, in which he distinguishes the "sick conscience" from "modern Weltschmerz whose representatives too much cherish their bad fortune, parading it, to be really unhappy"). Kierkegaard's contemporary followers also differ from him in not seeing Nature's proximity to Man at a time when non-classical science most clearly demonstrates this proximity.

Marx's concept of alienation is opposed to Hegel's way as well as to Kierkegaard's way, or rather his rejection of a way. The very evolution of Marx's concept of alienation was, in a sense, the reverse of Hegel's evolution. Marx discovered the real basis of the pessimistic idea about the mortality and solitude of individual reason in the face of infinite Nature, an idea that runs through the whole history of social thinking and social psychology. The objectification of Man, the transformation of Nature, humanises the latter. In his Economic and Philosophic Manuscripts of 1844, Marx says that industry humanises Nature, revealing the real relation of Man and Nature, the natural human essence:

"Industry is the actual, historical relation of nature, and therefore of natural science, to man. If, therefore, industry is conceived as the exoteric revelation of man's essential powers, we also gain an understanding of the human essence of nature or the natural essence of man."*

* Karl Marx, Economic and Philosophic Manuscripts of 1844, Moscow, 1974, p. 97.


This very profound conception, closely associated logically with Marx's subsequent economic ideas about the humanisation of Nature and the understanding of the natural essence of Man, constantly comes to mind in the analysis of the historical evolution of industry and natural science. This evolution really shatters the illusion of an immobile and infinite Nature alien to finite and mortal Man. Man finds exteriorisation in labour, stepping beyond the framework of purely local existence. He arranges the elemental forces of Nature, increasing the level of its negentropy, so that Nature appears before Man as a totality of human essences, of objects of human (genuinely human!) rationalising activity; and this activity itself, i.e. genuine human activity, Man's ``essential'' powers, as Marx says, reveals its relation to Nature and proves to be the ``natural essence of man''.

The illusion of Nature being alien to Man is based on quite real alienation of labour. If labour is alienated, if it is subject to antagonistic hierarchy, as is the case in a class society, then Man does not realise through labour his objectifying function, his conscious arrangement of the forces of Nature or his search for the ratio of the world. In other words, labour is divorced from science.

The conception of alienation so thoroughly analysed in the Economic and Philosophic Manuscripts of 1844 and generally in Marx's works of the 1840s, subsequently, as has already been said, disappeared from his works. It disappeared because it was embodied in the system of direct economic categories of the Capital. But the conception of alienation reveals even now the logic of the Capital, its relation to Marx's real philosophic interests and the importance of economic categories and definitions of labour for the evolution of optimism.

In particular, Marx's concept of alienation brings out the relation between the concepts of labour, optimism and freedom.

Whatever definition freedom may be given, it must preserve Spinoza's conception of the revelation of the real essence as opposed to external impulses. The revelation of the real essence is peculiar to natura naturans, the dependence on external impulses---to modi. Labour is a revelation of functions inherent in Man; in this sense labour is a realisation of freedom, but labour stems from forces and regularities of Nature that are independent of labour. In this sense labour belongs to the realm of modi, to the realm of necessity. The evolution of labour is an ever greater revelation of the internal, immanent traits of Man. The main stage of this evolution is the transition from alienated labour in antagonistic production to free associated labour. In the third volume of the Capital Marx writes: "Freedom in this field can only consist in socialised man, the associated producers, rationally regulating their interchange with Nature, bringing it under their common control, instead of being ruled by it as by the blind forces of Nature....

``But it nonetheless still remains a realm of necessity. Beyond it begins that development of human energy which is an end in itself, the true realm of freedom, which, however, can blossom forth only with this realm of necessity as its basis."*

These lines contain the quintessence of the economic conception of optimism. The initial concept is the interchange with Nature, the influence of Nature on Man, a totality of flows of matter and energy evoked by the hands and, in the final analysis, by the brain of Man, which, in their turn, brought about Man's own evolution. These flows involve conscious goals, purposive activity, labour. But until the totality of interchange with Nature is regulated by socialised Man, i.e. by the associated producers, until the very goals of separate production acts, governed by elemental and blind social laws, are united, labour as a whole, production as a whole, interchange with Nature as a whole, will not become purposive activity, but will be subordinated to the blind forces of Nature, controlling Man instead of being ruled by him.

Then a "leap takes place from the realm of necessity to the realm of freedom". Not only single production acts, but the entire production is controlled by the collective will of the producers. Necessity continues to reign. It is no longer necessity in the form of blind social forces, but rather necessity, regularity, processes of interchange between Man and Nature, bearing an objectively regular character. Due to this necessity and on its basis "a true realm of freedom" emerges. The development of human powers becomes an end in itself, in the sense that Man's happiness, his longevity, the development of his intellect, his emotions, his morality, the transformation of his labour, the concentration of his intellectual powers on more radical changes in production, on ever more adequate cognition and transformation of the world---this genuine development of human powers---are no longer a means toward achieving some end; on the contrary, everything is used towards attaining this integral goal of Man.

* Karl Marx, Capital, Vol. III, Moscow, 1971, p. 820.

SCIENCE AND ECONOMIC DYNAMICS

In Discours sur les Arts et Sciences Jean Jacques Rousseau recalls a legend that came from Egypt to Greece, which says: "Sciences were created by a god hostile to Man's peace of mind". Naturally, neither the Egyptians, nor the Greeks, nor Rousseau suspected how deeply and fully science would become associated with the disturbance of Man's peace of mind predicted by the legend. Non-classical, quantum-relativist science disturbs it in a most violent manner. This science itself is imbued with anxiety, lacking the Victorian belief in immutable scientific axioms. Its application evokes not only direct and alarming anxiety about the destiny of the world, but also that "sacred anxiety", incompatible with stagnation and statics, which is associated with forecasts of a rapid and accelerating growth of the material and spiritual powers of humanity.

Now we propose to deal, in a somewhat axiomatic manner, with the positive gift of the restless deity. Dynamism is so characteristic of modern production, and the powers transforming the economic structure are so great and effective, that it is no longer possible to consider the dynamic regularities as empirical amendments to primary static or quasi-static regularities. On the contrary, the dynamic regularities, derived from the fundamental law, reveal its more profound, dynamic nature and the approximate character of its static interpretation.

In this case we are dealing with conceptions of great generality that are capable of assuming new forms in the transition to new economic phenomena. This ability is now being realised with regard to production that possesses a higher degree of dynamism than in the past. Accordingly, the relation of the definitions of value to the concept of economic dynamics and the revolutionising economic effect of science are becoming more obvious.

It follows from Marx's views of the law of value in the capitalist mode of production, of the underlying basis of the law which is inherent in any economic formation, of the significance of quantitative balances for socialist production, and from the entire dialectics of the Capital, that the law of value is not reduced to a mere balance between a particular consumption structure and a production structure that is established as a result of labour migration. This inference, as will be seen later, is of paramount importance for the theory of economic dynamics. The simplest and most general explanation of the law of value, offered by Marx in the well-known letter to L. Kugelmann (that the necessity of the distribution of social labour in definite proportions cannot possibly be done away with by a particular form of social production*), is only the beginning. The question that comes up next, raised in the same letter to L. Kugelmann, is why the form in which this proportional distribution of labour asserts itself is precisely the value of these products. Finally, there is the question of the conditions of transition to other ways in which a proportional distribution of labour can be realised and of other proportions characteristic of a different production dynamics.

A reference to the basic need for a proportional distribution of labour, independent of a particular form of social production, takes the discussion over into the field of defining production in general, with which Marx deals in the Introduction to A Contribution to the Critique of Political Economy.** "Production in general" is an abstraction, but a sensible abstraction insofar as it actually emphasises and defines the common aspects of production and thus avoids commonplace repetition, since it presupposes eventual transition to concrete concepts predicated on a particular stage in the development of production. Abstract definitions as such provide no key to the understanding of the specific aspects of a historical stage of production that exists in actual fact.

* Karl Marx and Frederick Engels, Selected Works, in three volumes, Vol. 2, Moscow, 1973, pp. 418-19.

** See Karl Marx, A Contribution to the Critique of Political Economy, Moscow, 1971, pp. 189-92.

The historical destinies of value and its modifications cannot be defined by the concept of "production in general". Indeed, a historical stage of production is primarily determined by the nature of its productive forces, and abstract definitions assume cognitive value only when they include such concrete definitions. What is the nature of such cognitive value? To start with concrete concepts would mean to be faced with a chaotic picture of the entire whole, and, in the words of Marx, "through closer definitions one would arrive analytically at increasingly simple concepts; from imaginary concrete terms one would move to more and more tenuous abstractions until one reached the most simple definitions".*

It is here, Marx says, that the way back begins which leads no longer to a chaotic picture of the entire whole, but to a "rich totality of numerous definitions and relations". This is a genuinely scientific method, in which the concrete presents a synthesis of a multitude of abstract definitions. "It appears therefore in reasoning as a summing-up, a result, and not as the starting point, although it is the real point of origin, and thus also the point of origin of perception and imagination. The first procedure attenuates meaningful images to abstract definitions, the second leads from abstract definitions by way of reasoning to the reproduction of the concrete situation."**

It is not hard to see that the criterion of "inner perfection" and the axiomatic method, which proved to be so fruitful in all realms of knowledge, constitute an advance from the abstract to the concrete outlined in the most general and exact form in A Contribution to the Critique of Political Economy. Contrariwise, an analysis restricted to the method of deducing the abstract from the chaotic, dismembered, disordered concrete, and the view that the analysis is complete with these concepts, opens the way to arbitrary, absolute definitions and general determinations introduced artificially, on an ad hoc basis, in order to interpret separate aspects of the concrete world divorced from all the others.

Value as a category, far from being reduced to a simple general definition, is a "rich totality with numerous definitions and relations", possessing a wealth of concrete definitions. These concrete definitions make it possible to assess the significance of value in its relatively elementary forms. Marx wrote: "The anatomy of man is a key to the anatomy of the ape. On the other hand, rudiments of more advanced forms in the lower species of animals can only be understood when the more advanced forms are already known. Bourgeois economy thus provides a key to the economy of antiquity, etc."*

If the law of value is considered in the light of modern economic dynamics, the link between this law and the revolutionising effect of science becomes more obvious even in antiquity, in the transition from traditional and invariable relations between industrial operations to changing relations, though not to such an extent as in our time.

According to Marx, who took a different view from that of Ricardo, the law of value may be modified and, moreover, is inevitably modified, because it is part of a sociological conception considering economic categories in their change, dependent on the growth of productive forces. In Marx's theory, value is social labour with a certain proportional distribution among the branches of social production.

In Marx's economic theory abstract labour is an abstraction in the dialectical sense, an abstraction which is the highest form of concreteness, possessing a wealth of definitions, mediations and relationships. Abstract labour does not abolish, but rather unifies Man's useful activity, enclosing in it a certain structure dependent on the development of the productive forces of society and expressing the social character of production.

* Karl Marx, A Contribution to the Critique of Political Economy, Moscow, 1971, pp. 205-06.

** Ibid., p. 206.

* Karl Marx, A Contribution to the Critique of Political Economy, p. 211.

The view of society as the subject of production, of the social character of labour underlying the phenomenon of value, made it possible to link this phenomenon with different structures of production. Marx wrote to L. Kugelmann that society needs a distribution of labour in definite proportions. Why is it necessary? And what are the concrete proportions, what structure satisfies a given necessity? The structure can indeed be of various types: demands for reproduction on a quasi-stationary technical basis, with variations of labour productivity in different branches, and demands for reproduction based on the accelerating general integral growth of this index, are different requirements satisfied by different proportions.

Marx studied the dependence of proportions on the character of reproduction, pointing out the proportions which correspond to simple and extended reproduction. Reproduction on an extended scale considered in the second volume of the Capital takes place on a quasi-stationary integral level of the productivity of social labour which changes comparatively slowly. Such an integral level corresponds to a detailed structure of production: a different optimal structure is necessary for the maximum magnitude, for the maximum rate of growth and for the accelerated growth of labour productivity.

In considering the production studied by Marx, it will be seen that shifts are continually taking place in the labour productivity of the different branches. They evoke changes in the value of commodities and, accordingly, migrations of labour from one branch to another. The law of value restores the balance broken by the change in labour productivity in separate branches. The growth of labour productivity means a reduction in concrete labour per unit of the given product. Accordingly, on the strength of the twofold character of labour, the share of the equated, homogeneous, abstract labour necessary for the given branch is reduced. This happens owing to the reduction of the value of a commodity unit caused by the reduced measure of abstract labour materialised in a unit of the commodity. Value does not depend on the labour put into the given branch; on the contrary, labour itself is defined through value. Thus, the law of value appears to be a law of balance which is restored through local changes of labour productivity in the different branches.

A typical picture in the 19th century was a rise in labour productivity in a given branch against the background of an unchanged general level. The analysis of the balance-restoring mechanism proceeds from the model of a quasi-stationary general level of labour productivity that serves as the background for the rise in the given branch. This is quasi-stationary production which, we will state in anticipation, requires modifying in the period of electrification, and still more so in the atomic age.

The capitalist mode of commodity production is characterised by the absence of immediate regulation. Concrete labour itself, labour in its natural form, does not serve here as an object of social distribution; it becomes social labour by losing its concrete form in the process of transformation into abstract labour. The market anarchy rules out a conscious preliminary estimate, leaving it within the limits of separate enterprises connected with each other by blind, statistical laws. In production that has neither a market nor elemental laws ignoring individual destinies, just as thermodynamic laws ignore the motion of selected molecules, the producer's labour enters in its natural form into social labour and becomes the object of immediate regulation. Such, says Marx, is the labour on a feudal estate and in the patriarchal industries of a peasant family.

In the patriarchal community and on the feudal estate the immediate regulation of individual work means that useful concrete labour is social on the strength of its very usefulness. The effect of labour is known in advance; it is its goal, and that goal---the effect known in advance---connects the labour of the individual with the labour of other people. There is no fundamental uncertainty of the effect such as results from mediately socialised labour in its materialised form, nor is there a fundamental uncertainty of the purposive aspect of labour. "It was the distinct labour of the individual in its original form, the particular features of his labour and not its universal aspect that formed the social ties at that time."*

In the patriarchal community and on the feudal estate the producer knew in advance where the product of his labour would go, what its purpose was, how his labour was connected with the effect known in advance and with the labour of other people---all this was regulated by instructions and mainly by tradition. The social division of labour, coinciding with the technical one in the patriarchal family and on the feudal estate, was essentially conditioned by the traditional character of labour, by sequences and quantitative correlations repeated over many years and by a production structure that hardly ever changed within one generation. Here the unequivocal regulation of production, sanctified by tradition and authority, of the type "so many head of cattle must be driven in at such an hour, the milking must be finished at such and such an hour, the churning of the butter will begin at noon, horses for its transportation will be brought after lunch, etc.", which never gave way to uncertainty or to a materialised relationship of different operations, came close to the image of a rigid system with an explicitly unequivocal relationship of elements. Labour here is alienated not by a statistical law ignoring individual destinies, but by authoritarian tradition providing for all the details to the exclusion of free choice.

This traditional and consequently structurally unequivocal production, with its explicit and direct relationship of producers, was replaced by a mode of production characterised by a fundamentally indeterminate effect of each isolated labour act related to others through exchange. This indeterminacy governed individual destinies but was to a degree eliminated in production as a whole by the statistical law of great numbers, which ignores the plans, intentions, will and destiny of the separate participants in social production.

The capitalist mode of production, with its indeterminate immediate effect linking one element of dismembered labour with another, is replaced by a socialist planned mode of production which eliminates such indeterminacy. Does this mean that the indeterminate effect disappears entirely and that society returns to an unequivocal rigid relationship between labour acts?

* Karl Marx, A Contribution to the Critique of Political Economy, p. 33.

No, it does not. Such an unequivocal rigid relationship excluding any indeterminacy was based on the traditional character of production, on an insignificant tempo of technological and structural shifts in production. Only tradition guarantees such a relationship and, correspondingly, the possibility of determining the effect of each production act with any desired degree of precision. We shall consider this question in greater detail.

The regulation of production may be direct and completely rigid at one and the same time, if labour acts are connected with one another as links of an established technological process. Can all social production be built on such a pattern? Can separate works making up an industry be converted into a gigantic assembly whose parts will be connected by the unequivocal result of each process, a result which appears to be the initial point of the following process?

It can be done only on one condition. Production must include only established processes, so that the result of each process can be exactly determined in advance, and not a single process can be experimental, with its result unknown in advance. The only research that should accompany such production is check measurements of temperatures, tensions, pressures, composition of raw materials, quality of output, i.e. research which ensures not the change of technology but its invariability. Such production will obviously have an invariable structure and, while retaining the same proportions, it will grow only through an increase in capacity and in the number of shops and works.

Production in the patriarchal community or feudal estate was a primitive prototype of stationary production, with time measurements of the type "while the dew is still on the ground", and production parameters of the type "up to the waist", etc. It is a real prototype of a fictitious picture. For after the industrial revolution the only possible type of production is that which has become applied natural science, which ensures at least a sporadic and local increase in labour productivity and which cannot be stationary in technology and structure.

It can be shown that indeterminate results of separate connecting links of social labour, i.e. indeterminate technological results and structural shifts in production, are inevitable in such production.

As has already been said, for an economy with a continuously changing structure, developing not only quantitatively but structurally as well, two indices are essential: P, the level of labour productivity reached by production (corresponding to a certain production structure), and P', the change of this level, its speed, the derivative of P with respect to time (also corresponding to a certain structure, to certain proportions of the distributed social labour).
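In the notation just introduced, the second index is simply the time derivative of the first; the exponential special case below is an illustration of how the two indices are related, not a claim made in the text:

```latex
P' \;=\; \frac{dP}{dt}\,; \qquad \text{if, say, } P(t) = P_0\, e^{gt}, \text{ then } P' = g\,P .
```

An economy optimised for the level P and one optimised for the rate P' (or for the acceleration of growth) require, as the text argues, different structures of the distribution of social labour.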

A most essential distinguishing feature is that the result of production is not only energy, metal, machine tools, fabrics, etc., but also information about new technological processes, new energy carriers, new alloys, new designs for machine tools, new types of fabrics, and generally information about new parameters. Obtaining such information involves experiment, i.e. an act whose result is probable but not known in advance with certainty. Production now includes an experimental component. This includes not only design offices and laboratories, not only physical-technical or chemical-technological experiments (answering the questions: what will be the efficiency with another use of fuel, what will be the hardness, electric conductivity and other properties of the alloy given another admixture, and the like), but also technological-economic experiment (what will be the cost of the operation with the new technology) and economic experiment proper (information about the structural shifts associated with the new technology, new suppliers and consumers, and the market capacity), and the total information about the cost price and economic effect. The economic experiment is not undertaken at the given works. The consumers, whose exact number is almost never known in advance, determine what they may gain by the new technology that will lower the cost price of the article they need, replace the article by another one or change its properties, size and methods of application.

There is another side to the question. Production that not only follows the traditional practice confirmed by authority and observes the parameters indicated by tradition, but also seeks a new formula and new parameters, must have stimuli which include the automatically operating mechanism for a real assessment of changes in their quantitative form.

The initial forms of value and market relations were bound up with economic dynamics in the sense that the development of the social division of labour went beyond the framework of the isolated patriarchal community and feudal estate and also beyond the framework of traditional, customary relationships regulated by tradition itself and by the patriarchal or feudal power based on it. But this form of social division of labour remains fairly traditional. A developed market economy is characterised by dynamics, indeterminate production relations, and the necessity of regulating production through value.

In a developed market economy, the economic dynamics violating traditional economic relationships is conditioned by the transformation of industry into applied natural science and by the economic effect of science. So long as technical progress rests on purely empirical sources, it does not break with tradition but slowly transforms it, without introducing new, fundamentally unexpected technological methods and relationships that can be taken account of only post factum. The 18th century technological revolution brought in its wake the wide use of machines which were no longer related in design to the artisan's tool. There disappeared the gigantic hammers that were enlarged replicas of the usual hammers. Designs were based on new formulas of theoretical mechanics. Technological progress, the search for new expedient forms, for answers to the question "How is it to be done in order to...?", became inseparable from natural scientific statements, from answers to the question "How does it happen that...?". But such statements are always broader than the practical tasks that prompted them. That is why technological progress achieved through the application of science possesses a peculiarity that is very important for economic categories: new designs and schemes move from one field to another. We will discuss this effect in comparative detail later on.


Until the middle of the 20th century scientific research as a motive force of technical progress was not subject to systematic economic analysis. It was believed to be like air, as it were, which we breathe without thinking about its value. The development of science was not associated with investments in science, for, not being commensurable with the investments in the main branches of production, they did not enter into the structure of the national economy; the economics of science, as will be seen, could appear only in the 1950s.

Science, nevertheless, was the driving force of technological progress, and without reference to its development and application it is impossible to explain the evolution of economic categories. We shall try to show that the modified law of value, first of all the production cost regulating the structure of the national economy, was related to the transformation of the structure, to its dynamic character and, in the final analysis, to the transformation of industry into applied natural science.

The law of value is based on the fundamental necessity of proportional distribution of labour; the regulating role of production costs is founded on proportional distribution of funds. Beginning with the industrial revolution, the increase of funds has been relatively rapid but uneven, the relation of the constant to the variable capital growing at different rates in different branches. The distribution of funds among the branches must correspond to the differences in the organic capital structure, to the ratio of the constant to the variable capital in each branch. This is a most general necessity of each type of production having a different organic structure in the different branches.

The migration of capital caused by the differences in the organic capital structure is in many respects distinguishable from the migration of labour in simple commodity production. We shall touch upon a few of these aspects.

Let us take simple commodity production, stationary in scale and structure, at a moment of balance: market prices correspond to values, everywhere supply corresponds to demand, migration of labour is absent. Such a balance is inevitably upset by purely statistical fluctuations and deviations, often accidental in the sense that they are not subject to macroscopic regularities. These fluctuations give rise to deviations of market prices from values and, correspondingly, to migrations of labour, which in their turn restore the balance. This is the mechanism that conserves the static distribution of labour in simple commodity production. It explains how the balance between production and consumption is maintained in the absence of a direct relationship among the commodity producers, how the balance is maintained in the chaotic play of individual wills.
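The negative-feedback loop described here, in which random fluctuations push market prices away from values and the resulting migration of labour pulls the allocation back toward the structure of demand, can be sketched as a toy numerical model. The linear migration rule, the price response and all the figures below are illustrative assumptions, not the author's:

```python
import random

# Two branches of simple commodity production with a fixed structure
# of consumption; labour starts out of balance with demand.
VALUE = 10.0              # labour value of one unit of output in both branches
demand = [50.0, 50.0]     # constant structure of consumption
labour = [60.0, 40.0]     # labour allocation, initially out of balance

def step(rate=0.5, noise=0.5):
    """One round: random supply fluctuations deflect prices from value;
    labour then migrates toward the branch whose price exceeds value."""
    prices = []
    for i in range(2):
        supply = labour[i] + random.uniform(-noise, noise)  # accidental fluctuation
        prices.append(VALUE * demand[i] / supply)           # price rises on shortage
    flow = rate * (prices[0] - prices[1])  # labour flows to the dearer branch
    labour[0] += flow
    labour[1] -= flow

random.seed(0)
for _ in range(500):
    step()

# The allocation settles near the structure of demand, prices hover around value.
print(labour[0], labour[1])
```

No central coordination appears anywhere in the loop: each deviation of price from value generates its own correction, which is precisely the "chaotic play of individual wills" maintaining the balance.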

The origin and change of the given structure itself is not essential here: simple commodity production, as well as consumption, changes its structure only slowly, so that within a single generation the structure remains static.

Now let us take the deviation of market prices from values and the migration of capital caused by differences in the organic structure of capital or, to use a more general definition, by differences in the capital-per-worker ratio. These deviations and migrations are no longer accidental statistical fluctuations, bearing as they do a distinctly regular, dynamic character; besides, in the case of such constant ``macroscopic'' deviations there is no statistically averaged play of individual wills. The macroscopically regular differences in organic structure are essential in the transition from one branch of industry to another. To use thermodynamic analogies (admissible wherever statistical and non-statistical processes meet), migrations caused by differences in the organic structure of capital are reminiscent not of the motion of selected molecules, but of constantly restored macroscopic gradients which are no longer averaged out but change the average values themselves. The play of individual wills in the capitalist mode of production fades out with statistical averaging, which leads to prices coinciding not with values but with production costs.

Such balance is doubtless based on the same foundation as the balance of simple commodity production. But the balance here cannot be presented as purely static. What determines the differences in organic structure or, assuming this common foundation, in the capital-per-worker ratio? Why did the capital-per-worker ratio become higher in the middle of the 18th century in the textile industry than in other industries, why did a similar process take place in the metalworking industry at the end of the 18th and early 19th centuries, and in urban economy at the end of the 19th century? All of this depended on the application of science: in the mid-18th century the construction of looms depended on the application of mechanics, at the end of the 18th and early 19th century the construction of universal steam engines capable of driving rolling mills and metalworking machines depended on the application of thermodynamics, and at the end of the 19th century the production and transmission of energy from central power plants to lighting installations and transport depended on the application of electrodynamics. Numerous examples could be cited to illustrate the fact that differences in the capital-per-worker ratio depend on the transformation of production into applied natural science. At the same time these examples would show the relation of value categories (containing the eventual complications and modifications, i.e., "a wealth of definitions") to the dynamics of production, to the transition from traditional methods to scientifically substantiated and therefore evolving methods.

It might even be possible to show the connection between the modifications of value and the immediate impact brought to bear upon production by the ever more general and fundamental principles of science. The 18th-century industrial revolution was, in the final analysis, based on Newtonian mechanics, though not directly, for its immediate moving force was the emergence and development of applied mechanics rather than classical theoretical mechanics. In the 17th and 18th centuries applied mechanics differed from theoretical mechanics in its relatively restricted applications, which were of a concrete nature and could not be transferred to other fields without thorough modification. Hence the comparatively stable distinctions in the organic structure of capital that appeared during the industrial revolution. Applied science brought about practical results that met with substantial friction when transferred to other spheres. Similar comparatively stable distinctions in the organic structure of capital create the difference between value and production cost.

However, classical electrodynamics and the entire physics of the 19th century had a different economic effect. Consider such a fundamental discovery as electromagnetic induction. A change in the magnetic field creates an electric field and an electric current; a variable electric field brings about a magnetic field. This discovery, generalised in Maxwell's theory, became the basis of a new picture of the world, of the universal field conception, of a new relationship between Democritus' ``being''---atoms, and his ``non-being''---space. But at the same time electromagnetic induction was the direct underlying principle of practically applicable constructions, differing in their mode of application but identical in their physical essence. As early as the 1830s Faraday constructed prototypes of the generator, the electric motor and the transformer, which stimulated, first, the appearance of electric power stations, secondly, the first current receivers, a new industrial power apparatus, and thirdly, distribution networks. Thus the impulse quickly moved from energetics to all branches using the electric drive. The generality of an idea immediately realised in production is proportional to the mobility of this idea, its fluidity, its ability to break barriers between industries. This ability in its turn leads to fairly wide concentrations of industries being created, with a parallel increase in the organic structure of capital. Arkwright's water frame long upset the even growth of the organic structure of capital, whereas electromagnetic induction gave an impetus to such growth in numerous branches---in power engineering and in all branches which, by using electric motors, began to mechanise production operations and later to automate them. This could not but affect the mechanism of regulating production through value.

Now let us examine the impact of non-classical science on the organic structure of capital, on the relative capital-per-worker ratio in industries. As an example we shall consider the idea of using fast neutrons in atomic reactors, which was discussed in the essay on the atom. The transition to fast breeders is possible once a sufficient quantity of plutonium has been accumulated with the help of slow reactors. Thus a paradoxical situation specific to non-classical science arises: a slow reactor prepares its own obsolescence, and this is an essential goal of its work. Such self-superseding assets change the picture of the capital-per-worker ratio. The value of asset obsolescence, along with the asset value itself, underlies the production cost. Obsolescence is one of the fundamental economic categories of modern production.

The dependence of the evolution of economic categories on the development of science does not imply a spontaneous progress of science. The resources allocated to science and its experimental apparatus depend on production, which primarily determines the degree and tempo of the practical implementation of scientific discoveries and findings. But the internal logic of science must also be taken into account, i.e. the fact that electrodynamic processes could be detected only after the electric current was obtained, that the special theory of relativity could be created only after the electromagnetic theory of light was developed, etc. The development of science is neither a spontaneous process nor a passive reflection of material production.

Let us return to the capital-per-worker ratio characterising "production in general". Any production (excepting the most primitive, simple commodity production with minimal assets) gives rise to differences between the relationship of living labour and funds in one branch and the analogous relationship in another branch. Reduction to "production in general" can be applied, mutatis mutandis, to the differences in the capital-per-worker ratio. In terms of value, such reduction was a statement of the different quantities of labour directed to different branches, and of the constant, elementary necessity that these quantitative differences should correspond to the structure of requirements. Now it is a question of objective differences in the capital-per-worker ratio, of the necessity of redistributing funds in view of these differences. Reduction to "production in general" does not explain per se the historical forms of value, nor does a similar reduction explain the historical forms of production cost. And as in the case of value, the transition to the abstract category of production cost requires another very important and properly scientific operation, the transition from the abstract to the concrete, a search for "a rich totality with numerous definitions and relations", a search for the historical modifications of production cost.

Such a search is prompted by the changed productive force of labour, which has become more dynamic. The dynamics explaining the differences in the organic structure of industries are those major scientific and technological shifts that change the capital-per-worker ratio in a given branch without generally influencing the capital-per-worker ratio in the other branches. The integral capital-per-worker ratio is the sum of such dynamic events, a linear function of the shifts in the different branches. The integral capital-per-worker ratio proves to be a real category, since the average rate of profit depends on it, while the actual realisation of this rate, the actual averaging of profit, results from the migration of capital to branches that have retained a lower organic structure. These migrations of capital---an irrational form of asset migration---are conditioned by the above-mentioned lower organic structure of assets in branches outside the range of the given scientific and technological shift that brought about a higher organic structure in a certain branch. In other words, the average rate of profit is related to the linear dependence of the integral capital-per-worker ratio on the changed ratio in selected branches.
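The claim that the integral capital-per-worker ratio is a linear (weighted) function of the branch ratios, and that the average rate of profit depends on it, can be illustrated with a toy computation. All the figures, the branch names and the uniform rate of surplus value are illustrative assumptions, not the author's data:

```python
# Three branches; c stands for funds (constant capital), v for living
# labour (variable capital). A shift in one branch changes the integral
# ratio, and with it the average rate of profit.
branches = {
    "textiles":  {"c": 80.0, "v": 20.0},
    "metal":     {"c": 60.0, "v": 40.0},
    "transport": {"c": 40.0, "v": 60.0},
}

def integral_ratio(br):
    """Integral ratio = total funds over total living labour: a weighted
    (hence linear) combination of the individual branch ratios c/v."""
    return sum(b["c"] for b in br.values()) / sum(b["v"] for b in br.values())

def average_profit_rate(br, surplus_rate=1.0):
    """Average rate of profit = total surplus / total capital advanced."""
    surplus = sum(surplus_rate * b["v"] for b in br.values())
    capital = sum(b["c"] + b["v"] for b in br.values())
    return surplus / capital

print(integral_ratio(branches), average_profit_rate(branches))   # 1.5 0.4

# A scientific-technological shift raises the organic structure in one
# branch: the integral ratio rises and the average rate of profit falls.
branches["textiles"]["c"] = 180.0
print(integral_ratio(branches), average_profit_rate(branches))
```

Note that the shift in a single branch moves both integral magnitudes even though nothing changed in the other branches, which is the sense in which the integral ratio is "a linear function of shifts in the different branches".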

Let us assume that the scientific and technological shift under consideration is not a transition from one construction (applied within the limits of a certain branch or branches) to another, or from one technological method to another, but a transition from one ideal physical or chemical scheme to another. In other words, the transformation is not one of technology, but of scientific schemes to be applied in a great number of branches rather than in one branch. Science is distinguishable from its applied uses in that its content consists of objective statements independent of their application and for this reason applicable (already applied and eventually applicable) in a number of branches irrespective of their different technological tasks. What will be the structural effect? How will the production structure change in the event of such a shift, that is, under conditions of scientific progress in its true sense, influencing a number of branches and possibly all the main branches? We shall now discuss this question, the main question of the economics of science, and simultaneously we shall touch upon a particular question: will a similarly changed physical or chemical scheme bring about asset migrations analogous to those occurring in purely technological shifts, when changes in designs and processes are restricted to a certain concrete branch?

INTER-BRANCH INFORMATION

Value and its modification, production cost, guarantee the equilibrium of the economy, the correspondence between the established structure of consumption and the distribution of labour (value) and assets (production cost), in a certain static approximation. The distribution of labour continuously changes and value stabilises it, approximating it to the structure of consumption. Macroscopic changes occur in the capital-per-worker ratio of the industries---the average rate of profit and production cost bring the distribution of funds to a relative balance corresponding to the given organic structure of capital in the different branches.

Can such a balanced structure of labour and funds be called an optimal structure? And, given the proximity of the terms, can one consider faith in the realisability of a balanced structure to be an optimistic forecast?

In Part One of the book the term ``optimistic'' was applied to statements about spontaneous processes of Nature. But this word was used in quotes or with the prefix ``quasi'', for under consideration were quasi-optimistic processes increasing negentropy and counteracting thermal death. But in social processes involving people with their conscious goals and wills, the application of ``optimism'' (even in quotes) to the evaluation of spontaneous processes in society becomes a complex problem. Optimism here cannot be a symbolic description, nor can it be used in an extended metaphorical sense. It has a direct and exact equivalent---correspondence between prognosis and the conscious goal. For classical economic liberalism, whose aim was balanced social production, value and production cost could, without reservations, be the basis of optimistic forecasting. It was assumed that the immanent economic laws realise this goal in the best possible way with the least interference on the part of the state, with maximum freedom of trade, absence of import duties, etc. The fact that equilibrium came only after destructive crises was ignored. The optimism of the Physiocrats and Adam Smith was not so much an optimistic statement and optimistic forecast as an optimistic illusion.

Besides, it was a static optimism, and one could not expect more from laws that determined the distribution both of labour directed to satisfying current demands and of funds permitting their satisfaction. Are there laws that include labour and funds in a general distribution ensuring a change in the structure of consumption and in all the integral indices of production?

Labour entering into such general distribution (becoming homogeneous in this sense) creates dynamic value. As concrete labour, it aims not at producing things and services for consumption, but at obtaining information on new things and services not yet created and different from those already created. The content of such labour is not the realisation of the goal but the change of the goal and, further, the rate of that change and its acceleration.

Science provides production not only with means to satisfy already formulated goals, but also with a stimulus for their change, because, as has already been said, science answers not only the questions asked of it, but unasked questions too. The answers contain not only information of ``know-how'' and "know where", but likewise dynamic information of "know how to change", applicable not only in the given region but in others as well.

The very essence of science is manifested in the fact that scientific discoveries affect not only the branch of production which stimulated the need for a new physical or chemical scheme. Let us imagine that technical progress in a certain branch has entered what can be termed an "asymptotic zone", in which purely constructive or technological improvements of the physical scheme do not lead to an essential change in the techno-economic indices, the latter asymptotically approaching a certain constant level. In this case a new physical scheme is sought, which requires large investments although the search may prove fruitless. The additional expenses may be recouped by increasing the price of the commodities produced, or by lowering the price of the new product associated with the new fundamental scheme. We shall presently deal with such operations, which are of importance for "production in general". In the meantime we shall exemplify the most essential aspect of the matter. Let us recall the antecedents of the discovery of lasers discussed in the essay "Quantum Electronics" in Part Two of this book. As we pointed out there, design improvements in radio engineering had not yielded a noticeable effect over a long period of time and did not make it possible to obtain the very narrow frequency intervals needed to prevent radio stations from interfering with one another in simultaneous operation. The search for new methods of generating monochromatic coherent oscillations led to a new physical scheme, new in the sense that it utilised previously unknown processes of electron transitions between levels in the radiating atom. What was involved was induced radiation, whose source, as Einstein once proved, lies in the quantum model of the atom, a model whose nature had not yet been thoroughly studied. The 1950s saw the appearance of a detailed scheme of induced radiation and the creation of the laser. The laser changed the state of affairs not only, and even not so much, in radio engineering as in the energetics and technology of all basic industries. The laser effect may be predicted within a scientific, scientific-technological and economic forecast embracing production as a whole. Any economic calculation ignoring such an integral effect of the discovery will generally be incorrect.

Why is it, then, that transition to a new physical scheme (as distinct from transition to a new technical scheme implementing the invariable old physical scheme) brings about an integral effect? Why do we believe such an effect to be a reflection of the fundamental properties of science?

As we indicated above, the answers of science are always broader than the questions posed to it. They are the broader, the greater their "inner perfection" and the more fundamental the general principles suggesting the answer in a natural manner with the least number of additional specific suppositions. We are able to predict, with a certain degree of probability, the effect of such an answer on branches of production other than the branch where the question arose. This is a kind of migration of information from the region for whose benefit the question was posed to another region. Such migration signifies a generalisation of specific information and, at the same time, a transition from post factum information based on already obtained experimental data to ante factum information, to probabilistic, prognostic information.

The latter comprises economic information on the probable impact the new physical scheme might have on the structure of production. To illustrate migrations of information we will adduce a historical example---the Plan for the Electrification of Russia and subsequent electrification planning.

The initial idea of the Plan was to unify the power stations and centres of power consumption within a single national power grid. It was the technical implementation of classical electrodynamics: an electric current produced in generators, its voltage stepped up by transformers, is transmitted over considerable distances to drive electric motors in the centres of consumption. This initial physical idea then acquired the technically tangible contours of a unified power grid, transmission lines, substations and distribution networks, causing technological repercussions in the shape of transformed technology, widely developed electrolysis and the application of high-capacity electrical equipment, in particular in electrometallurgy. The reconstruction of power engineering further led to wide use of the electric drive, of the joint drive with a common motor shaft, and of automation based on electric motors.

This resonance effect of the reconstruction of power engineering consisted in forecasts, statements about probable changes in technology and in the power apparatus. That was migration of information, in the course of which it acquired new content, being transformed from information about a unified system of power supply into information about the possible or probable new applications of electricity. In economic terms, the essential datum for this information was the reduced cost of the kilowatt-hour in unified power grids using cheap fuel and hydropower.

The reduced kilowatt-hour cost reflected the increased labour productivity in power engineering and the decreased specific labour input. In a quasi-static structure of production with a given volume of power consumption, such a decrease would have brought about a reduction of labour input and a migration of labour from power engineering to other branches. From the point of view of averaging the rate of profit and production cost, i.e. considering the capital-per-worker ratio, the reduced cost of a kilowatt-hour signifies an increased organic structure in energetics, a migration of funds to other branches and lower value scales in energetics (with the physical scales retained or increased)---absolute in simple reproduction and relative in extended reproduction with a static or quasi-static structure. But electrification was accompanied not only by an absolute but also by a relative growth of power engineering. This is to be explained by the flow of information about possible new technological processes, new constructions, new kinds of raw materials, new economic conditions in all branches of production, and in particular by information about the possible electrification of industrial technology. I recall that in planning the Dnieper hydroelectric power station, the one-dam project and the capacity of the station, very great for that time, were motivated in the project of the Dnieper Complex, an amalgamation of enterprises with high electric power consumption, the prospect of developing power-intensive processes being derived from the cheap power to be received from the one-dam variant.

Value played an essential role, though it was mostly a rated index embodying ante factum, prognostic information. In working out the Plan for the Electrification of Russia and later on, changes in values were valid even though they were virtual. When the prospect of applying high-tension current lowered the rated value of power transmission, permitting the utilisation of distant sources of cheap fuel and energy, one did not wait for the completion of power plants, transmission networks and distribution units. In anticipation of changes in the cost of energy, the construction of power-intensive enterprises started with the introduction of the electric drive.

Planned production is based on anticipatory information accumulated and circulated prior to the actual migrations of labour and funds. However, a plan providing for radical shifts in the production structure should take into account the labour input necessary to acquire information which brings about labour changes creating dynamic value. These inputs are now commensurable with the main components of the general balance of labour, their effect being the reconstruction of this balance and transition to a new production structure. The balance of labour now includes a sacrifice to the god referred to by Rousseau.

Now we begin to understand the economic component of the disturbance of Man's peace of mind wrought by the legendary creator of science. This time the disturbance is profound. The first attempt consisted in the transition from traditional to a somewhat more dynamic technology, but the quasi-static structure persisted in the face of the differences in the labour productivity of individual producers. These violations caused prices to deviate from values and labour to migrate, thus restoring the balance.

The second attempt was more serious. Now technological progress raised the organic capital structure in certain branches. Those were not averaged deviations but macroscopic deviations of averaged magnitudes. The balance was restored by the migration of labour and funds from branches with a high organic structure, by averaging profits and by bringing prices in line with production costs.

In both cases the mechanism restoring the quasi-static balance was determined by the state of production at the given moment; the processes approximating production to balance are functions of the state of production. If the state of production at the given moment is known---albeit not with the accuracy of Laplace's Supreme Reason, which knew the coordinates and velocities of all the particles of the Universe---then its subsequent state can be determined.

The third attempt is directed at disturbing Man's peace of mind. This time the hostile deity creates non-classical science, which influences production as a whole. The effect of science consists not in isolated violations of the quasi-static structure, or violations affecting selected branches, but in its total transformation. The restless god of derivatives prevents people from restoring the old balance. Information derived from the existing structure, information about this structure and its violations, classical value and production cost, and generally information about that which is, no longer determine the further development of production. It is determined by that which will be. Information about that which will be, prognostic information, is built upon a certain set of scientific and scientific-technological discoveries predictable with a certain degree of probability, and upon their probable economic effect.

Now we are taking leave of the legendary creator of science. Science is created by Man himself. A peculiar feature of our time is that Man invests in science an essential part of his labour commensurable with the basic investments. The result of labour, scientific and technical information, contains distributable and consequently homogeneous labour crystallised in it.

Let us recall the distribution of funds in an attempt to find a dynamic modification of production cost. The modern transformation of the production structure based on non-classical science, with its dynamics characteristic of the atomic age, requires a migration of funds and living labour determined not by that which is, by the present result, but by that which will be, i.e. by prognostic information. The atomic age knows no automatically working mechanism to restore the quasi-static structure, for the simple reason that the quasi-static structure itself no longer exists. Funds and labour now migrate in the direction opposite to the migration of those funds and labour that level out the rate of profit, bringing it to an average and bringing prices to production costs. The proportion of energetics in the national economic structure grows, despite the reduced cost of a kilowatt-hour in atomic stations as compared with electrification based on classical resources, because the flow of information from energetics to technology, the resonant effect of atomic energy, the application of non-classical schemes, general and mobile by their nature, reorganises technology on the basis of quantum electronics and similar innovations, bringing about an increase in the electricity consumption of a number of production branches. Additional capital and labour investment cannot be obtained by the branches initiating scientific and technological breakthroughs as a result of the establishment of an average profit rate, for they run directly counter to such a result.

Let us consider three forms of regulating production:

(1) elimination of individual statistic fluctuations disturbing the quasi-static balance (value as direct regulator),

(2) elimination of macroscopic violations within separate branches caused by the uneven changes of the organic structure of production (production cost as regulator), and

(3) dynamic regulation. Let us see what the results of a local deviation of price from value are in these three cases.

In the first case the accidental fluctuation that permitted an isolated producer to sell his goods for some time at a price higher than their value is generally levelled out by competition. If price is designated by P and value by V, the process may be expressed by the formula (P > V) -> (P = V). In the second case the balance is broken at P = V: in branches with a relatively high organic structure the price, while coinciding with value, brings about an ebb of capital and labour until the price rises to the level of production cost. Designating the latter by PC, we obtain (P = V) -> (P > V; P = PC). In the third case, in the branches that initiated the shift the organic structure may rise, but the structure of production corresponding to the resonances of this breakthrough (for example, the technological resonance of atomic power engineering) may change in such a way that funds and labour will migrate to the initiating branch even though value and production cost work in the opposite direction. This, however, is not a particular violation of the general law. It is a fundamental regularity of production that applies non-classical science and implements ever new engineering projects of one and the same ideal scheme and, moreover, ever new ideal physical and chemical schemes. The economic nature of the additional funds received by the initiating branch is obvious. It corresponds to the redistribution of labour (distributable, homogeneous, value-creating), to the apportioning of the labour necessary for the production of information passing over to other branches. The additional funds are a dynamic value. But these additional funds (unlike the migration averaging profit in branches with a different organic structure) find their way to the given branch not necessarily through higher prices. They may be the result of preserving low prices over a long period (P < PC), which causes an expansion of the market; they may be the consequence of special subsidies; and only in certain particular cases can they be the consequence of a rise in prices (P > PC).
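The second of these mechanisms, the averaging of profit, can be illustrated numerically; the figures and the uniform rate of surplus value below are illustrative assumptions. In a branch with a high organic structure the production cost exceeds value, in a branch with a low one it falls short of it, while the totals coincide:

```python
# Two branches with equal capital advanced but different organic structure.
# Surplus is produced by living labour (v) alone; competition equalises
# the rate of profit, so each branch sells at production cost = capital
# advanced plus the AVERAGE profit, not at its own value c + v + s.
SURPLUS_RATE = 1.0                       # s/v, assumed uniform across branches
branches = [
    {"c": 80.0, "v": 20.0},              # high organic structure (c/v = 4)
    {"c": 40.0, "v": 60.0},              # low organic structure (c/v = 2/3)
]

total_surplus = sum(SURPLUS_RATE * b["v"] for b in branches)
total_capital = sum(b["c"] + b["v"] for b in branches)
avg_rate = total_surplus / total_capital            # average rate of profit

for b in branches:
    capital = b["c"] + b["v"]
    b["value"] = capital + SURPLUS_RATE * b["v"]    # c + v + s
    b["production_cost"] = capital * (1 + avg_rate) # capital + average profit
    print(b["value"], b["production_cost"])
```

The high-organic-structure branch sells above its value and the low one below it, which matches the second case above, while the sum of prices still equals the sum of values.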

Irrational and antagonistic forms of dynamic value are dumping (P < PC), monopolist and oligopolist prices (P > PC) and various forms of state subsidies to monopolies, just as the migration of capital during crises, accompanied by the destruction of productive forces, is an irrational form of production cost, and the anarchy of production is an irrational form of value. The task of the investigator in all these cases is not only to interpret the definitions of "production in general", but to deduce from the development of the productive forces the inevitable liquidation of these irrational forms and the transition to rational ones.

Dynamic value means that the structure of society's labour efforts includes labour aimed at the transformation of this structure. This is another component of the integral goal of science already referred to. Science aims at the transformation of labour, the life of its subject, at transformation of the content of labour, approximating it to the creative solution of increasingly profound and general questions, at transformation of the object of labour, the rationalisation of the ecological environment and the natural resources of production. Now we see that in addition to the subject, content and object of labour, the structure of labour is being transformed too. All these transformations make labour itself more transforming, dynamic and world-changing. The transformation of the structure of labour consists in the fact that the basic constituents of labour structure include labour that creates reconstructing information, the basis for the impact of fundamental research on production overstepping inter-branch boundaries.

FORECASTS OF UNDERSTANDING AND FORECASTS OF REASON

Part One of the book dealt with a very old distinction between understanding and reason: understanding comprehends the regularity, the order in the world; reason is capable of seeing and anticipating the need for transition to a new order. This transition, a function of reason, is also inseparable from the function of understanding: in order to establish a new system, a new law, it is necessary to have some notion of law, regularity, repetition, identity, symmetry, of that which Hegel termed the "peaceful" aspect of knowledge; in other words, there is a need for the functions of understanding.

Such a conception of understanding and reason could be formulated after German classical philosophy, after Hegel, and after classical science had clearly demonstrated the transitions from certain laws, certain sets regulating the world, to other laws, to other regulating sets. This notion acquired a more concrete character when Engels generalised the classical transitions in the teaching about the forms of motion and their hierarchy.

Classical science of the 17th century already showed that reason passes from one intellectual ordering of the world to another. Herein lay the genesis of classical science. In the first half of the 17th century, in Galileo's Dialogue and then in Descartes, the world was regulated by a scheme of inertial movements, uniform movements, i.e. sets of identical instantaneous speeds, forming the ratio of the world. Then in the Galilean Discourses and still more in the Newtonian Principia the reason of science riveted its attention upon the differences in speed, upon accelerations, and sets of non-identical speeds became the regulating scheme of the Universe, the scheme of the world, its ratio being made up of accelerated movements. This, however, was a transition to a new "peaceful" scheme, to a new identity. In the picture of the world drawn in Galileo's Discourses and Newton's Principia, the ratio of the world was made up of uniformly accelerated movements. The harmony of the Universe in Aristotle corresponded to the invariability of the positions of bodies occupying "natural places". In Galileo's Dialogue and in the concept of inertia per se, harmony corresponded to the identity and invariability of velocities---the first derivatives of positions with respect to time. In the Discourses and Principia harmony corresponded to the identity and invariability of accelerations, the second derivatives of positions with respect to time. These identities are constructed by understanding, with reason forcing the transition from one intellectual identity to another.

In anticipation we shall point out that something similar is to be observed in economic forecasts: understanding builds series of identical, invariable characteristics, whereas reason passes over to series of more dynamic characteristics, to time derivatives of a higher order, from levels to speeds, from speeds to accelerations, and possibly further.
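The passage from levels to speeds to accelerations can be made concrete with a short sketch: given a numerical series, successive differencing finds the order of the time derivative at which the series becomes invariant. The function name and the sample data are illustrative, not taken from the text:

```python
def invariant_order(series, tol=1e-9):
    """Return the order of differencing at which a series becomes constant:
    0 for constant levels, 1 for constant speeds (uniform motion),
    2 for constant accelerations, and so on."""
    order = 0
    while len(series) > 1:
        if max(series) - min(series) <= tol:
            return order
        # Take first differences: the "speeds" of the current series.
        series = [b - a for a, b in zip(series, series[1:])]
        order += 1
    return order

# Uniform motion: identical speeds (first differences are constant).
print(invariant_order([0, 2, 4, 6, 8]))     # 1
# Uniformly accelerated motion: identical accelerations.
print(invariant_order([0, 1, 4, 9, 16]))    # 2
```

In the book's terms, understanding fixes the series at whatever order it is constant; reason is the move to examine the next, higher-order series.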

Going back to the problems of value and production cost, we see that the law of value, like any other law, is created by understanding or rather by reason that does not overstep here the limits of intellectual thinking. In this sense the notion of production cost as a balance regulator is a conception of understanding. The function of reason overstepping the limits of intellectual thinking is a transition from the concept of value to the concept of production cost.

In the 19th century, reason was no longer limited to transitions to ordered series of higher-order derivatives of the coordinates of bodies, embracing as it did forms of motion other than mechanical movement. In each case, the substance of the law itself governing the phenomena and the nature of the subordination are of a specific character. Individual molecules submit to the laws of mechanics regulating their motions with absolute obedience, their behaviour precisely corresponding to the laws. On the contrary, the behaviour of large ensembles of molecules submits to its own thermodynamic laws only in the sense of probable prescribed behaviour: this probability is great where the ensembles of molecules are sufficiently large. As their size decreases, thermodynamic regulations may be violated, and in the case of one, two or three molecules they become meaningless.

Non-classical physics made the behaviour of microparticles indeterminate. The very laws of mechanics and the magnitudes that they regulate---positions, velocities, energies---become within some limits inexact, and a certain violation of the macroscopic law may become the beginning of a new ordered process. Here reason does not confine itself to sporadic transference of understanding to another stage, e.g. from positions to velocities, from velocities to accelerations, etc. Reason constantly watches out for a chance to transform the intellectual peaceful laws of understanding into paradoxical new regularities, permitting a peaceful transition from the nth to the (n+1)st phenomenon and prediction of its (n+1)st result if the nth result is given.

Such a relationship between understanding and reason is peculiar not only to non-classical science, but also to the scientific, scientific and technological, and economic forecasts related to it.

As has already been indicated, the breakthroughs involving new physical and chemical schemes not only lead to acceleration in one branch of industry, but also accelerate the growth of labour productivity in other branches, and this acceleration cannot be predicted by extrapolating the past tempo. The effect of one branch on another gives the former the right to the title of a "leading branch", this "leadership" sometimes having the effect of fundamentally new developments in the branch "being led". In other words, reason continuously intervenes in the transition from one established intellectual line of development, predictable by extrapolation, to a different line. It is a matter of the continuous intervention of reason, of a constant possibility of such a transition.

Such is the effect of non-classical science, and the highest form of "leadership". There exists a kind of hierarchy of industries in terms of their proximity to non-classical science and the character of "leadership". Generally speaking, different tempos have to be envisaged for different industries. The very concept of leading industries, involving a general dynamic re-interpretation of economic categories, was advanced long ago. A given industry was awarded the title of leading industry on the strength of its techno-economic peculiarities: not only does it direct its production to other industries in the form of raw materials, fuel, tools, etc. in order to ensure the established technological processes, it also urges the reconstruction of other industries and the introduction of new technological processes. Such a role was played by electric power engineering, and correspondingly the title of "leading industry" was awarded to energetics. Economic statics knows no such conception, for in stable production the importance of the relevant industries is determined only by the relative volumes of the product.

The concept of "leading industry", which is dynamic by its nature, now acquires, as has already been said, a somewhat different meaning. The leading role varies depending on the decisiveness and depth of the shifts in reconstruction to which the given industry "leads" the others. In addition to the direct product, the mechanism of "leadership" includes not only economic information (for instance, the cost of a kilowatt-hour of energy), but also technical information proper (new designs and technological methods) and scientific and technical information (fundamentally new physical and chemical schemes). Thus, the most dynamic impact on the other industries is made by the leading industry through the flow of inter-branch information emanating from it.

Let us begin with industries which influence the rates of development of other industries through information concerning new economic parameters of production, its scale of value and conditions of delivery. At the present time all industries possess such a function. New economic information comes from each industry to influence the rates of development in others. Thus, a system of non-linear mutual "leading", mutual impact has been formed. But among these industries there are certain ones, first among equals, which accelerate the development not only of their immediate consumers but of a much wider periphery, as in the case of energy in its generalised form, the production of electric power. In this generalised form energy is received not by a narrow circle of consumers (as is the case with other power carriers), but directly by all or almost all industries. For this reason the effect of electric power production differs from the dynamic effect of other industries. Other industries promote the immediate development of the nearest groups of consumers; then (with a considerably smaller effect) they influence, in a mediated way, the rates of more distant groups, to be followed (with a still lesser effect) by the impact on the rates of a third group, the effect of the dynamic impact fading out rather rapidly. Electricity also possesses a similar multi-stage effect, but here the effect fades out slowly. Besides, electricity produces an immediate dynamic effect on consumers even if their technological processes do not require a large consumption of electric power, i.e. even on industries that are not electric-power-intensive.

For the next concentric group of industries the concept of "leading industry" has a somewhat different meaning; it is also dynamic, but the dynamics are of a higher order. The leading branch having this enhanced "leading role" influences the other branches not only through economic information on changes in the cost of raw materials, equipment, etc. (particular "leadership") or energy (overall "leadership"), but also through technological information, i.e. by transferring new technical schemes which serve in other branches as a source of new designs or new technological recipes on a classical basis.

That concentric group of leading industries includes, for instance, the production of automatic equipment. The impact here is not reduced to the formula: a certain component of production cost in the branch "being led" will drop, permitting a corresponding expansion of market capacity and of production in this region. The impact here may, for example, approach the formula: "The radical change in the level of automation resulting from the installation of new equipment will make it possible to proceed to certain changes in technology, to realise certain new, more perfect schemes."

The third concentric circle is a source of new technological schemes on a non-classical basis. The leaders here are atomic power and electronics in the broad sense, including the production of computers and laser-type devices. Here not only engineering solutions but fundamental physical schemes are rapidly changing (fast neutron reactors, semiconductor apparatus, and fundamentally new types of lasers). Influence is brought to bear upon all branches which implement the new fundamental schemes in concrete forms.

The fourth concentric circle involves the building of experimental centres for fundamental research which, in the final analysis, exerts the greatest dynamic influence on production.

It is easy to show that these concentric groups must have different speeds of expansion. If energetics not only ensures the expansion of production among consumers, but also stimulates the transition to a new technology consuming more electric power, it must develop more rapidly than the volume of production in the consuming branches. If a non-classical branch (electronics, for instance) reconstructs power engineering itself, then it must ensure not only the rapid development of power engineering, but in addition the growth of its concomitant "capacity" and the increasing requirements for new schemes. The expansion of fundamental research not only affects growing production (whose growth rate is fastest in atomic energetics and electronics); the effect itself also becomes more powerful.

Thus the uneven development of production, the different growth rates of a number of industries, ceases to be a kind of fluctuation that is levelled when price approximates value. It also ceases to be a macroscopic process embracing a whole industry and resulting in a deviation of prices from production costs with a subsequent averaging of profit. In dynamic production, the faster the development of labour productivity in a given branch, the faster the growth of its scale and scope in a number of industries. This is to be explained by the fact that the branch in which labour productivity grows the fastest sends to other branches a flow of restructuring information: techno-economic information on cheaper energy, technological information on new designs that become in other branches a source of accelerated designing ideas, scientific and technical information on new scientific schemes, new cycles, which become target canons for technological creation owing to their generality. Value also remains here a rated quantity to be taken into account in planning, since it includes dynamic value, the labour materialised in this information. Now the title of "leading industry" becomes transitory, but as a rule it is mostly awarded to the branches most closely connected with non-classical science.

How is it possible to switch over from these relationships, which at best are expressed by inequalities, to equations permitting us to calculate the volume of investments, the volume of production and the inter-branch proportions for the coming decades? Is it possible to obtain for each initial condition quantitative forecasts of the production structure over a long span, determining the economic dynamics of production, say, up to the year 2000?

First of all it should be emphasised that the equation determining the dynamics in the quantitative forecast must be based on a certain extrapolation, on the presumption of the constancy, invariability, conservation of a certain magnitude, of the existence of an invariant with respect to a shift in time. This does not mean, of course, that the levels and structures will be retained, or even that the first derivatives---the speeds of their change---will be retained. It follows from the above that the prime hypothesis is that the acceleration of economic indices may be considered constant for the time being.
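Under the hypothesis that the acceleration is the conserved magnitude, a minimal forecasting sketch takes the following form; the index values are invented, and the point is only the form of the extrapolation:

```python
def forecast_constant_acceleration(series, horizon):
    """Extrapolate a series on the presumption that its second difference
    (the 'acceleration') is the conserved, shift-invariant magnitude."""
    speed = series[-1] - series[-2]            # latest first difference
    accel = speed - (series[-2] - series[-3])  # latest second difference
    out, level = [], series[-1]
    for _ in range(horizon):
        speed += accel   # acceleration held constant
        level += speed
        out.append(level)
    return out

# A hypothetical index whose acceleration is a constant +1 per period:
history = [100, 102, 105, 109, 114]   # speeds 2, 3, 4, 5; acceleration 1
print(forecast_constant_acceleration(history, 3))   # [120, 127, 135]
```

Holding the level constant would be a zeroth-order forecast, the speed a first-order one; the sketch carries the invariance one derivative higher, as the text proposes.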

Up to the present moment the forecast has not gone beyond the limits of understanding, proceeding as it does from a certain law expressed in an equation which is covariant with respect to the shift in time. We still have to determine the coefficients of speed and acceleration for the volume of the industries being planned, for the proportions of different industries, and for the techno-economic indices. Which methods of prognostication will permit us to obtain these coefficients?

We are not considering a plan yet, but a forecast. In other words, our analysis does not include the definition of the goal set by a reorganisation and change in the economy. Later, when a certain forecast has been made for each initial condition, primarily for each nearest structure of investments in the economy, culture and science, and when a "world line" of the economy has been drawn, the task will consist in contrasting these "world lines" to ascertain which of them comes closer to the optimal line, the line that answers the goal of the reconstruction of the economy. Thus the degree to which the prognosis really approaches the goal, the measure of optimism characterising the prognosis, will be determined. The forecast variants will be examined later on; for now we propose to define the criterion for choosing the methods of prognostication.

At the present stage, when we deal neither with a plan nor with forecast variants, such a criterion is the objective determinacy of the prognostic dynamics, its agreement with objective laws. This, in general, is the relationship between purposive activity and objective laws. When initial conditions are chosen in accordance with the goal, these objective laws make it possible to impart an objective, real character to the purposive activity, to plan its results, counting on the attainment of the goal, which is the definition of optimism.

The transition from prognostication to planning is effected by comparing the integrals of the speed and acceleration of labour productivity within the period covered by the prognosis. The function of these variables (and labour productivity itself), the fundamental economic index, is a quantitative equivalent of the integral goal of science, i.e. the transformation of labour, its subject, its content, and the natural conditions to be transformed by labour. High productivity of labour ensures a rise in consumption, a considerable improvement of living conditions, increased life expectancy, and has a direct bearing on the destiny of the subject of labour, Man himself. Transformation of labour content depends on the growth rate of its productivity: effective machines to some extent guarantee a certain level of labour productivity, but the rate of change of this level depends not on effective machines but on the transition to more effective ones, on the tempo of such a transition, on the restructuring, creative component of labour. The second derivative---acceleration---is ensured by the merger of labour and science, by a change in the production not only of designs, but of fundamental schemes as well.

High productivity of labour at a given moment, as well as its derivatives, can be achieved through the reckless depletion of unrenewable resources. In his report "In Place of G.N.P.", devoted to the discrepancy between quantitative value aspects and "physical" welfare, Shigeto Tsuru cites an example of the mass reclamation of coastal bay areas along almost the entire coast of the Japanese archipelago to create new factory sites.* Reclamation is damaging national parks and is crippling in some places fishing industries unique to the region. Said Tsuru: "The reclamation which is going on in the Inland Sea is as if to spread the kitchen-wing of one's house into the beautiful garden without making provision for sewerage facilities." Numerous examples could be cited not of excesses but of a general and serious tendency. Tsuru says that environmental amenities are not susceptible to quantification, but their dynamics are: dynamics involving the rapid depletion of natural resources (including industrial resources, in terms of a general classification) manifest in a long-term prognosis a slow-down of real welfare growth and a drop in the rate or acceleration of labour productivity growth, if these variables or their functions are integrated over a long period of time.

It follows, therefore, that the rational utilisation of resources is guaranteed by a plan based on variation methods, on the choice of the optimal initial structure of production giving the greatest integral of labour productivity and its derivatives in terms of the prognostic "world line" of economics.

* Documents of the Fifth Soviet-Japanese Symposium of Economists, Institute of World Economics and International Relations, USSR Academy of Sciences, Moscow, 1972, p. 91.

Let us revert from planning and choosing an optimal forecast to prognostication itself. What are the objective processes underlying economic prognostication?

Marx's sociological conception attributes this role to the productive forces of society in the broadest sense, including science and its most general and fundamental sections. Non-classical science and contemporary technology have demonstrated most clearly the reverse impact of social development on the productive forces and the latter's prius in this interaction.

The idea of electrification embodied in the Plan for the Electrification of Russia (1920) was a synthesis of this conception and a generalisation of what classical science, and classical electrodynamics in particular, had achieved by the beginning of this century. It was at the same time a typical synthesis of objective premises, the investigation of objective processes and Man's goals, a synthesis of the "is" and the "ought to be", necessity and freedom, knowledge and activity.

The Plan for the Electrification of Russia was based on an objective forecast: with the further technological implementation of classical electrodynamics, electrical engineering should permit the transmission of great power over hundreds of kilometres. The Plan was based precisely on a forecast, on a possibility not yet achieved but possessing a high probability. In the first draft plan, the networks of the different stations did not link up; there were none of the overlapping circles that subsequently appeared on the map of the Electrification of Russia. At that time the possibility of such distant transmission of power was, or may have seemed, problematic for the foreseeable future. On Lenin's instruction the Plan for the Electrification of Russia included overlapping networks---electric power grids (which was the basic idea of the plan!)---in anticipation of further progress in electric power transmission that appeared inevitable.

Why was it inevitable, and on what was the conviction of the possibility of more powerful transmission based? This conviction followed from the fundamental scheme already in existence at that time: generator---step-up transformer---transmission lines---step-down transformer---motor. This scheme followed directly from classical electrodynamics; it was a combination of the processes of electromagnetic induction and the generation of a magnetic field by an alternating electric current, which underlie electrodynamics and the corresponding Maxwell equations.

The given physical scheme was an ideal scheme for designers striving to come closer to the theoretical correlations in designing real, practically applicable generators, transformers, transmission lines and electric motors. The existence of such an ideal scheme, already known and, what is very important, not subject to any modification in the foreseeable future, permitted a high-probability forecast that a certain trend in technological progress would not change in the course of decades.

There was a similar relationship between the classical, established and constant ideal cycles and the technical forecasts in branches of production other than electric power development.

Then the Commission for the Electrification of Russia moved from forecasting to planning. If power stations and networks of a certain capacity and extent were built, a unified power grid would be created; power supply, and consequently labour productivity in industry and agriculture, would increase; freight traffic volume would grow manifold with the coming of electrified main lines; the proportion of power-intensive processes in industrial technology would rise, as would the importance of easily accessible sources of power and raw materials in the balance of resources. These targets of the plan predetermined the programme of investments to be made in the building of new stations, networks, railway lines, mines and plants. The fulfilment of such a programme created the initial conditions for the further unequivocal and determined evolution of production.

In economic terms, such an evolution based on the approximation of technology to the constant ideal physical schemes led to a continuous increase in labour productivity and to the realisation of the inequality P'>0 considered above.


Constant ideal physical schemes resulted in the immutability of the trend of technological progress, which warranted the use of extrapolation as the main method of prognostication. Those were forecasts of understanding; reason, the transformation of ideal schemes, participated in them, but its interference, which had led to the Maxwell equations, was a thing of the past, whereas the new interference, promising, for instance, power to be obtained directly, without electromagnetic induction, was an equivocal and distant prospect.

Thus, the method of prognostication adopted by the Commission for the Electrification of Russia consisted in determining the expedient physical schemes and the ways of their technical implementation, as well as the technical and economic results of new designs and the changed structure of production following from these results.

This method must obviously become the basic method of economic prognostication in the atomic age, forecasting the most probable dynamics of production for, say, the next thirty years. But there is a radical difference here. Now there exist no such purposive, ideal physical schemes for which stability and invariability could be confidently predicted within a span of decades. In non-classical science reason changes the covariant intellectual lines, the lines of extrapolation, practically continuously and not only at critical moments separated from each other by big intervals; even if a certain physical scheme is retained for a long time as a purposive target for technical ingenuity, all the same the spectre of a new scheme looms over it like a memento mori; it may cease to be a spectre and become embodied, claiming the role of purposive target of technology and signifying an altogether new dynamics of technical and economic indices and inter-sectoral economic proportions.

Such an implementation has already occurred in the scheme of atomic fission under the impact of fast neutrons; similar interferences of the restructuring reason in the forecasts of understanding may soon happen in quantum electronics, to be followed (true, not very soon, but it is hard to say how far off) by the scheme of power transmission under conditions of superconductivity at normal temperatures and the scheme of controlled thermonuclear reaction.

These are revolutionary turns. G. Thomson says that applied science leads to reform, pure science---to revolution. This holds true for the change in applied, practically applicable cycles based on invariable scientific schemes ("applied science") and for the change of these schemes themselves ("pure science"). On the whole, changes are now occurring practically continuously, since each turn is accompanied by the process of assimilating the new principle, with the turns themselves occurring in different fields. As an integral effect, they bring about a continuous, accelerated growth of the productivity of labour, the realisation of the formula P''>0. But how can the coefficient of acceleration be defined, how can something invariable, constant, intellectual be found in the activity of reason, which is reducible to change, to non-identity?

There is something relatively invariable here: the fundamental correlations of the theory of relativity and quantum mechanics. Basically they can claim the long-standing role of initial principles of science, being in a way purposive targets for new physical schemes: both breeders and thermonuclear reactions are stages in coming closer to the ideal relationship E=mc². The change of such fundamental ideals would bring about a non-zero third time derivative of labour productivity: P'''>0. But the formula E=mc², like the other basic foundations of modern science, does not determine the rate of the rise in the scope of liberated energy in nuclear reactions or other transitions from one physical scheme to another.

There is another rather essential difficulty here. While the Plan for the Electrification of Russia was being worked out, more or less intuitive prognoses of the future were used, the method which was later termed "Delphic". The Commission for the Electrification of Russia called upon its members and other major specialists to give quantitative and at times qualitative estimates of the forthcoming development of transport, metallurgy, fuel extraction, etc. In the mid-twenties (in making up the estimates for 1925-1926, so far as I can remember) this method was termed the "method of expert estimates". It was also applied later, in particular in the early thirties, during the drafting of the General Plan, not only for making forecasts but also for the selection of the optimal variant, i.e. for the planning itself. To cite an episode characteristic of that time: in 1930 I. G. Alexandrov dictated to the author of these lines a title list of electric power stations for the next 10 to 15 years, a list which coincides with the list of the most profitable stations among those constructed in the 1930s. I. G. Alexandrov had an intuitive or semi-intuitive confidence, already spoken of, in the considerable development of electric-power-intensive enterprises, enabling him to draw up the project for a colossal industrial complex in the city of Zaporozhye and the Dnieper Hydropower Station in a one-dam variant.

At the present time, however, such intuitive estimates can play but a minor role. The changed physical schemes possess, as has already been pointed out, a greater generality than technical innovations and consequently a greater ability to penetrate other branches, becoming the content of inter-branch information. Intuition is mainly based on experience and knowledge in a particular industry, and does not cover the information penetrating into this industry concerning new ideal physical schemes applicable in several industries. If the reconstruction of production as a whole were a linear function of scientific and technological progress, a general forecast could be obtained by summing up expert estimates for the various industries. But the reconstruction of production is not a linear function: progress in the various branches is not only technical but also scientific-technical, based on new scientific data applicable in a great number of industries. Information about these facts migrates from branch to branch, and the interaction of scientific and technical shifts cannot be determined with the help of intuition pertaining to one branch only.

How then is the interference of reason in the chain of intellectual extrapolations to be taken account of; how are radical shifts, no longer caused by new constructions but by new fundamental schemes, by new targets of technical creation, to be forecast?

There is only one way out. In working out economic plans we compare forecasts corresponding to different initial conditions, and then we choose the optimal one; but optimisation does not end with choosing the optimal variant, with drafting a plan. Obviously optimisation must be multistage and practically continuous. In chess championships, when a game is postponed, the player's seconds play out variants of the further game for each possible answering move. In science Man poses questions to Nature, and it answers them sooner or later. The answer cannot always be predicted; it may reject the given question as meaningless. But it will come, just like the answering move of the opponent in a resumed chess game. It is only necessary that there be ready forecasts, predictions of the technical, techno-economic and structural shifts which Nature's answer, a new physical scheme implemented by experiment, will call forth. These forecasts must reveal the impact of the new physical scheme on the level, speed and acceleration of the growth of labour productivity. Then it will be easy to determine the initial conditions essential for the maximum realisation of the new physical scheme, for the maximum increment of the fundamental index Q = f(P, P′, P″).
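The dependence of the fundamental index on the level of labour productivity P and its first two time derivatives can be shown in a small numerical sketch. The productivity series, the finite-difference scheme and the particular form of f are all invented for illustration; the book leaves f unspecified.

```python
# Hypothetical illustration of Q = f(P, P', P''): estimate the level,
# speed and acceleration of labour productivity from an annual series,
# then combine them into a single index. All numbers are invented.

def derivatives(series):
    """Finite-difference estimates of first and second derivatives."""
    first = [b - a for a, b in zip(series, series[1:])]
    second = [b - a for a, b in zip(first, first[1:])]
    return first, second

productivity = [100.0, 104.0, 109.0, 115.5, 123.5]  # invented data

p_prime, p_double_prime = derivatives(productivity)

def q(p, dp, ddp):
    # One conceivable f: reward the current level, its growth rate and
    # the acceleration of that growth (the weights are arbitrary here).
    return p + 10.0 * dp + 100.0 * ddp

print(q(productivity[-1], p_prime[-1], p_double_prime[-1]))
```

Comparing the value of such an index across forecasts corresponding to different initial conditions is one way of making the multistage choice of the optimal variant concrete.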

ECONOMETRY OF OPTIMISM

Non-classical science demonstrated with extreme vividness the rationalist nature of knowledge, which became the triumph of "inner perfection", of logical analysis linking each experimental discovery with the most general principles. But these general principles are not only logical constructions. They also generalise observations and, what is most important, experiments, the active regrouping of Nature's processes. Possessing "external confirmation", these principles are general pictures of the Universe, its reflections, not only of its static condition but of its evolution as well. Such rationalism is four-dimensional, including retrospection and prognostication. Since Man's knowledge is associated with his active, purposive activity, the prognosis is related to his goals, and with sufficient correlation it becomes an optimistic prognosis.


Rationalism was always related to mathematics, and 17th century mathematics proved to be the trend of rationalist thought which, joining experimental natural science, became classical science, a synthesis of experimentally based rationalism and of experiment revealing the ratio of the world.

17th century science no longer restricted itself to a purely logical opposition of the different objects and events, as was the case, for example, in the Aristotelian theory of motion which distinguished only "the natural place" of the body, where it rests, and its being outside the "natural place" from where the body seeks to move away. Now motion became a continuous process to be considered from point to point and from instant to instant. Kepler wrote: "Where Aristotle sees a direct opposition between two things without intermediate links, I consider geometry philosophically, finding an opposition filled with intermediate objects, and therefore where Aristotle has one term `another' I have two terms: `more' and `less' ".*

Indeed, as distinct from peripatetic science, contemporary science generally uses continuous sets in which between any two objects there are intermediate ones, the difference between two objects being expressed primarily by distance: spatial distance for spatial positions, and ``distances'' in more complex sets.

Modern science groups objects, events, phenomena in ordered sets where one phenomenon or object regularly follows another and the differences between them are, as it were, numbered. A single-valued relation is further established between the sets: an element of one set corresponds to an element of another set (to the state of a moving particle, its velocity, acceleration, etc.). The following concepts are introduced: that of abstract n-dimensional space, in which an event or object is defined by n coordinates; that of distance, always defined by a positive magnitude characterising two objects; and that of metrics, permitting the distance between objects, points of n-dimensional space, to be defined in one way or another by the difference of their coordinates. Thus the assumption that the ratio of the world is made up of continuous motions found adequate expression in the mathematics of variables, in the analysis of infinitesimal quantities, in analytical geometry, in differential geometry and, in particular, in the metric concept, the measuring of distances by the given coordinates.
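The metric concept just described reduces to a few lines of code. This is a minimal sketch; the three-dimensional points standing in for "states" are invented.

```python
# An object is a point of n-dimensional space; the distance between
# two objects is computed from the differences of their coordinates.
import math

def euclidean_distance(a, b):
    """Distance between two points of n-dimensional space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

state_1 = (0.0, 0.0, 0.0)   # e.g. the state of one object (invented)
state_2 = (1.0, 2.0, 2.0)

print(euclidean_distance(state_1, state_2))  # 3.0
```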

Mathematics of variables was an adequate expression of the picture of the world that emerged in the 17th and 18th centuries in which the generally continuous motions of bodies under the influence of inertia or force were an all-embracing explanation of the world order. An echo of this tendency was Kant's formula: in every science there is as much science as there is mathematics. This formula seems close to the contemporary role of mathematics, but, in fact, the modern significance of mathematics for science (and not only for science) finds few analogies to its significance for the mechanical natural science of the 17th and 18th centuries.

In the 19th century the scheme of the continuous motions of bodies preserved its role as the most simple, initial and, in this sense, fundamental scheme of the Universe. But the complex laws of the higher forms of motion no longer permitted the explanation of Nature's processes to be reduced to this simple scheme. Accordingly, mathematics could not claim an essential contribution to the explanation of chemical processes, still less of biology and the social sciences, which in no way deprived these sciences of their scientific character.

At the present time non-classical science has radically changed the position of mathematics as well as its content. Mathematics is no longer an abstract scheme of the simplest mechanical laws of the Universe. Neither is it an abstraction in the old, pre-Hegelian sense. The highest abstraction in mathematics most clearly becomes the highest concreteness. In modern non-classical mechanics, in the relativistic and quantum conception of the motion of a material point, mathematics no longer finds a simple physical equivalent. The motion and existence of a material point turned out to be a most complex problem

* Johannis Kepleri astronomi opera omnia, Vol. 1, Frankfort-on-the-Main, 1858, p. 423.


concerning the structure and existence of space. We begin to look on mathematics as an abstract reflection of the highest concreteness, in which the structure of space is inseparable from the infinitely complex structure of being. Accordingly, mathematics penetrates all parts of this complex structure, regions which once seemed inaccessible to mathematics because of their complexity. Our age witnesses the emergence of a new synthesis of science, and not only of science but of all human practical activity, based on applied mathematics.

Thus it appears that there are no grounds for an a priori or conventionalist explanation of the genesis of mathematics. In its most general, most fundamental principles, mathematics contains something non-a priori, capable of modification depending on experiment, and very distant from the image of eternal truths of science. In the transformation of classical rationalism into classical science, mathematical concepts, having become the most general concepts of reason, assumed the status of ontological truths. Russell's famous definition to the effect that "mathematics is a science that does not know what it speaks about, and does not know if what it says is true" (a logical independence that allowed mathematics to grow into a powerful tool of modern science) has now become somewhat archaic: mathematics, including its most general and fundamental branches, speaks of the world and says something that can be confirmed, refuted or modified by experimental knowledge of being.

Hence the role of physical intuition in mathematics. A. N. Kolmogorov stated that a modern mathematician himself masters the physical essence of a problem, trying to find for it an adequate mathematical language.** This tendency is becoming more and more apparent and general; it is characteristic not only of mathematical physics, but also of fundamental problems. "At the present time," says P. S. Alexandrov, "signs point to a new turn of the eternal question of the interrelation between theory and practice in mathematical thought: new fields have arisen in mathematics in which it is impossible to draw a hard-and-fast line between the mathematical and the physical aspects."*

A peculiar feature of non-classical science is that this ``secularisation'' of logical premises, the elimination of "a hard-and-fast line between the mathematical and the physical aspects", the fusing of logical deduction with physical intuition and physical experiment, have no a priori limits, embracing even the foundations of mathematics, which change under the influence of intuitive guesses about possible applications or under the influence of the experiment of application. The word ``application'' itself changes its meaning. The application of mathematics radically transforms the fundamental principles of science and the style of scientific thinking, thus transforming civilisation.

In the essay on the prospects of cybernetics we said that the transforming impact of applied mathematics on civilisation includes a transformation of the character of labour: mathematised science and computer-oriented management will make it possible to change the content of labour, to increase its creative, reorganising potential. Thus mathematisation lies in the mainstream of science leading to the realisation of the integral goal of science, i.e., the transformation of the subject, content and object of labour. In this part of the book, the integral goal was said to include a further task, the transformation of the structure of labour, its directions, its distribution into branches, which is a purely economic task. What is the significance of applied mathematics for the attainment of this goal?

Obviously econometry comes into play here. The introduction of metric concepts, methods of measurement and mathematical analogies into economic thought will hardly change the form of economic analysis without transforming its content and conclusions. Of course, a knife and fork do not change and in any event will never replace a beefsteak, and it is better to have the beefsteak without a knife and fork than to make do with the latter objects for dinner. But the fable of the crane and the fox shows that sometimes the meal becomes inaccessible without forms corresponding to its content. This, generally speaking, follows from Goethe's poem addressed to Albrecht von Haller ("Nature does not consist of a shell and a nucleus"). In the case of econometry, the content of the mathematical form stems from the following considerations, referring, as a matter of fact, only to one side of the matter, the relationship between the econometric content of prognostication and non-classical science.

* Uchoniye zapiski MGU (Scientific Papers of Moscow University), No. 91, 1947, p. 27.

** Uspekhi matematicheskikh nauk, No. 3, Issue 6, 1951.

The unstable character of this science, as we are now aware, makes the immediate source of economic changes not only the application of physical schemes, their technical embodiment in constructions, but the very schemes themselves. Owing to their relatively general character, the physical schemes simultaneously transform many branches of production or freely migrate from one branch to another. That is why the prognoses of each industrial branch or even of a big enterprise must contain information about production as a whole.

Since the non-linear nature of non-classical economic prognostication rules out the possibility of obtaining a general prognosis by summarising specific ones, it is essential that each particular prognosis contain information about production as a whole. This information primarily concerns the structure of production and its dynamics as a predicted result of each major discovery that is to change the technical and economic indices and proportions of industries. But structural information is metric information. The forecasts must include data about the proportions of the different branches, measurable in principle, and the plans must include the absolute volume of investment in the branches and absolute indices of their effect. For this reason, the realisation of the predicted economic shifts is conditioned by the possibility of their econometric expression. Accordingly, forecasts for the development of econometry itself (and of mathematics as a whole, since new algorithms not yet obtained will have to be used) are a necessary condition or part of optimistic economic forecasts.

These optimistic forecasts are motivated by the goal set by society, which renders the activity of society purposive. These forecasts possess certain coefficients of correlation with the goal, permitting the choice of the optimal forecast, the one that comes closest to the goal. In this case the goal itself must possess some metric coefficient. In its turn optimism, faith in the attainment of the goal, possesses a metric coefficient, a probability of realising the goal.

Here we wish to digress somewhat from the metric problems of prognostication. Is it possible in general to express emotion in quantitative indices (and optimism, whatever epistemological, scientific-prognostic, economic and econometric sense might be ascribed to it, remains an emotion)? Is this question not related to Salieri's wish that music be reduced to algebra, when even then it would not be music that is reduced, but that which is expressed by music and cannot be expressed otherwise, for instance, in words?

Saint-Exupéry put a very interesting remark into the mouth of the Little Prince. To the child the interests of adults seem strange: adults are interested in quantitative definitions, they must know how old someone is and how much he earns, but they are indifferent to which games he likes to play. Science, and not only science, requires some approximation to ``childish'' interests. Einstein acknowledged this in the case of science, and outside science it is expressed by the Evangelists, who put the formula into the mouth of their hero: ". . .except ye become as little children. . .". But children, as Alice in Wonderland has shown, do not actually dislike counting: only, counting for them must be paradoxical. Precisely such a transition from traditional mathematical correlations to paradoxical ones was realised in the theory which Einstein considered the outcome of ``childish'' interests. He said that he had arrived at the theory of relativity because he had retained a childish interest in fundamental problems until he was old enough to do something about solving them.

The theory of relativity is based on the transition from traditional Euclidean correlations to paradoxical non-Euclidean ones regarded as a physical transition, on changed metrics, on the non-Euclidean character of the metrics of space identified with the gravitational field. Such a transition was not the dissolution of music in algebra; it was rather the transformation of algebra into music, analogous, in a somewhat metaphorical sense naturally, to Kepler's ``music of the spheres''.

The emotional content of optimism is inseparable from its metrical expression. Faith in the forthcoming realisation of a goal is impossible without quantitative calculation and, since the structure of production is involved, without metrics.

But we are facing here the following difficulty. Metrics, embracing all methods of defining distances by differences of coordinates, can probably be easily introduced for events that can be presented as points in a certain abstract n-dimensional space.

We wish to remind the reader: the essay on ``Know-How'' and "Know Where" introduced the n-dimensional space of economic structures and the (n+1)-dimensional space of the dynamics of these structures. If, for instance, 50 branches (n = 50) are involved, then the point corresponding to the given structure is a point of the 50-dimensional space of structures defined by 50 coordinates, each of which measures, say, investments in or production of one of the branches. The transition from one structure to another is measured by the vector connecting two such points. The structural changes evoked by scientific and technical discoveries are the basic economic effects that must be measured in order to find out which dynamics of the production structure is optimal for the goal to be achieved, ensuring, on the whole, the greatest labour productivity and its derivatives, the speed and acceleration of the growth of its level. Such vectors make up the curve of the structure dynamics (no longer in 50-dimensional or, in general, n-dimensional space, but in (n+1)-dimensional space: in addition to the n structural coordinates we introduce time, an (n+1)st dimension). Such a curve, the world line of the structure, must yield the greatest fundamental index Q = f(P, P′, P″).
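The space of economic structures can be sketched in a few lines of code. The branch names and investment figures below are invented for illustration; a structure is a point, and a transition between structures is the vector joining two such points.

```python
# A point of the n-dimensional space of structures: coordinates are,
# say, investments in each branch (all figures invented).
structure_1970 = {"power": 10.0, "chemistry": 6.0, "electronics": 2.0}
structure_1975 = {"power": 14.0, "chemistry": 7.0, "electronics": 5.0}

def transition_vector(a, b):
    """Vector connecting two points of the space of structures."""
    return {branch: b[branch] - a[branch] for branch in a}

shift = transition_vector(structure_1970, structure_1975)
print(shift)  # {'power': 4.0, 'chemistry': 1.0, 'electronics': 3.0}

# Adding time as the (n+1)st coordinate turns a sequence of such
# points into the "world line" of the structure.
world_line = [(1970, structure_1970), (1975, structure_1975)]
```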

The further direction of this curve can be predicted if the curvature of the world line is supposed to remain invariable. And even if it changes, and different correlations appear between the speeds of selected branches and different dynamic balances, it is possible to determine the resulting curvature of the world line and predict the further evolution of the structure. But such a possibility is valid only when the changed rates in selected branches are caused by technical discoveries bringing about the accelerated expansion of certain branches. We called prognostication of such an expansion "prognoses of understanding". And what about prognoses of reason? These more radical prognoses change the very dependence of economic dynamics on coordinate increments and on the changed structure, altering the formula that connects each infinitely small increment of the vector in (n+1)-dimensional space with the infinitely small increments of the coordinates. Such changed metrics can be presented not as the curvature of the world line in (n+1)-dimensional space, but as the curvature of the space itself.

Certain explanations will be given here by way of physical and economic analogies. Since the words "physical and economic analogies" have been written, some comments should be made about their acceptability.

Such analogies existed in classical political economy. 18th century economic theory, sometimes in a concealed form but more often openly, introduced conceptions of classical physics: force, balance, impulse. Now such semantic convergence involves non-classical conceptions, the indeterminacy of variables and others, including the curvature of space.

Let us imagine the motion of a particle in a space with violated Euclidean metrics, or rather in a non-Euclidean, curved space. We wish to examine the impact of space curvature on the motion of the particle, and also to consider the dependence of this motion on certain given fields, for instance, the motion of an electrically charged particle in the vicinity of a strong charge. For this we take the total derivative defining the change in the motion of the particle at the given point (for example, the acceleration of the particle), and subtract from it the part showing how the position of the particle changed under the influence of space curvature. This difference, called the covariant derivative, is a measure of the changed motion that does not depend on the curvature of space.
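In the standard notation of differential geometry (not used in the original, and added here only to make the construction concrete), the covariant derivative corrects the ordinary partial derivative of a vector field by a term, built from the connection coefficients, that accounts for the behaviour of the coordinate frame itself:

```latex
% Covariant derivative of a vector field V^{\nu}:
% \partial_{\mu} is the ordinary derivative; the term with the
% Christoffel symbols \Gamma^{\nu}_{\mu\lambda} carries the effect
% of the curvature of the coordinates.
\nabla_{\mu} V^{\nu} = \partial_{\mu} V^{\nu}
    + \Gamma^{\nu}_{\mu\lambda}\, V^{\lambda}
```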

The general theory of relativity identifies the gravitational field with the curvature of space (of four-dimensional space-time). The change in the direction of a vector caused by gravitation is not a change within space, but a change together with space. This is possible because the gravitational


field acts uniformly on all bodies, as though it is not the behaviour of bodies in space that changes but the properties of space itself. The gravitational field goes, so to speak, beyond the limits of that which occurs in a given space. The covariant derivative can then be used, for the given space, to analyse the covariant relationship between impacts on a body and its behaviour.

Similarly, considering the effect of the most radical scientific and technical discoveries not as a change within space, but as a change together with space, a change of space itself, we can examine the changes of the economic structure that do not violate the covariant dynamic balances (consequently permitting a dynamic extrapolation).

As we have seen, the effect of such discoveries is universal; it has an immediate bearing upon other branches. This makes it possible to draw an analogy between the field of the most radical structural changes in production (the sources of this field being the non-classical branches) and the gravitational field. Let us revert to the n-dimensional space of economic structures referred to earlier. Motion in this space represents a transition from one structure to another. The acceleration of a certain branch may result in a changed direction of such motion. Such local accelerations are to be found in technological progress, in technical discoveries based on invariable physical schemes. The situation gets more complicated in production that has a non-classical scientific basis, where the sources of structural shifts are radical changes in the purposive, ideal physical schemes. In this case it is no longer possible to determine the resulting dynamics of the structure as easily as with specific accelerations in separate branches caused by technical discoveries. Scientific discoveries possess a great penetrating force, causing a resonance far beyond the branch where they first received a constructive or technological implementation, and the effect of such resonance, far from diminishing, may increase. At any rate, the resulting change of the structural dynamics here depends in a different way on what is happening in selected branches. The metrics are changing: a change occurs in the dependence of the resulting length of the vector Δs in (n+1)-dimensional space on the increments of its coordinates Δx₁, ..., Δxₙ. In other words, the metrics of (n+1)-dimensional space change, causing the space to curve.
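The idea that a change of metrics assigns a different length to the same coordinate increments can be sketched numerically. The "branches" and the metric weights below are invented; a diagonal metric is used only because it is the simplest case.

```python
# Length of a structure-shift vector under a diagonal metric:
# ds^2 = sum(g_i * dx_i^2). Changing the weights g_i (the metric)
# changes the length assigned to the very same increments.
import math

def length(increments, weights):
    """Length of a shift vector under a diagonal metric."""
    return math.sqrt(sum(g * dx * dx for g, dx in zip(weights, increments)))

shift = (1.0, 2.0, 2.0)          # increments dx_1, ..., dx_n (invented)

flat_metric = (1.0, 1.0, 1.0)    # "Euclidean" space of structures
curved_metric = (1.0, 4.0, 0.25) # metric altered by a radical discovery

print(length(shift, flat_metric))    # 3.0
print(length(shift, curved_metric))  # a different length, same increments
```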

Such a concept of the effect of non-classical science requires a number of new econometric constructions. A systematic and comparatively exact definition of economic equivalents of certain concepts of differential geometry, in particular the concept of connection, is necessary in order to link the effect of non-classical science with curved space structures, and the effect of technological discoveries on a relatively invariable scientific basis with covariant derivatives. So far we have limited ourselves to an analogy between the universal character of the gravitational field, the physical basis of its similarity to the curvature of the spatial-temporal continuum, and the universal character of the fundamental shifts in production connected with the migration of ideal physical schemes from branch to branch. Without touching upon the econometric constructions essential for the explanation and realisation of such an analogy, we shall note that it opens up the way for the application of prediction based on extrapolation under conditions of radically changed dynamic balances. We introduce into extrapolation coefficients of a different nature, corresponding to the more general and radical transformations of the production structure. They are by no means corrective coefficients. The latter correct the curve in a given space and the dynamics of the indices, leaving radical transformation out of account. Here we deal with fundamental coefficients characterising the curvature of the space of structures.

What is the relationship between the conception of the non-Euclidean space of economic structures and covariant differentiation, on the one hand, and the conception of dynamic value, on the other? How is the realisation of dynamic value expressed in the space of structures; how does the dynamic balance change under the influence of scientific and technological information?

By value we understand the general basis (corresponding to the concept of production in general), i.e. the distribution of labour. This is reflected in the space of structures. The realisation of value, the migration of labour, constitutes a shift in the space of structures. This shift can be associated with


the balance of production, static, quasi-static or dynamic, predicted in extrapolations. Such a shift can be considered as motion in a given n-dimensional space of structures (with a given curvature). The shift caused by the migration from one branch to another of radically restructuring information, which realises the dynamic value, can be regarded as the result of the curving of the n-dimensional space of structures.

In this connection we wish to remind the reader of certain points already covered in this part of the book. At the beginning of the book it was stated that the axiomatic method is necessary for the theory of economic dynamics. Accordingly, we gave most general definitions of production in general to show the necessity of a production structure corresponding to consumption, inherent in any formation. Then we passed over to the concrete, historically conditioned forms of this general feature of production, to the rich totality with its numerous definitions and relations. As the productive forces develop, violations of the balance assume at first the character of individual deviations regulated by the law of value, which brings production to a macroscopic balance; later they assume the character of macroscopic deviations of production costs from values and, finally, acquire the character of a radical and practically continuous change of the production structure. The regulation of the economy is no longer reduced to restoring the balance violated by individual or macroscopic differences in the organic structure of funds. The driving force imparting this total dynamics to production is the transition not only to new constructions and methods but to new ideal schemes. Information about such shifts affects, on the strength of its generality, a number of branches.

The sources of such an impact are the leading branches (leading in the sense of accelerating technological progress), the non-classical branches of production, primarily atomic power and quantum electronics.

Now we have reached a point where it is possible, although in a most preliminary way, to proceed from the axiomatically constructed system of categories to more concrete methods of prognostication. The scheme of concrete methods does not enter into axiomatic prognostication, but the most general, axiomatic definitions must make it possible to pass over to concrete prognostication, to econometric dynamic categories of the economy, to mathematical concepts and methods corresponding to the twofold combination of the radical transformations (guaranteeing P″ > 0) associated with scientific progress, and the less radical transformations associated with new engineering implementations of the ideal physical schemes themselves. The former affect all production; the latter, a specific branch. It may be suggested that covariant differentiation and the concept of curved non-Euclidean n-dimensional spaces as a whole can serve as the mathematical apparatus to reveal such a demarcation. In this case the following scheme of prognostication becomes explicit.

The initial method is dynamic extrapolation. We proceed from the demographic forecast, assuming that the invariant will be the second time derivative of labour productivity. This makes it possible to obtain diagrams of the growth of the national income and other total indices. In the ensuing extrapolation of the dynamic balances and the relationships between branches, the relations between the classical and non-classical concentric groups are taken into consideration. The extrapolated curves are then corrected with the help of corrective coefficients obtained from the optimisation of consumption. Further, additional corrective coefficients are introduced as a result of expert estimates. Another series of corrective coefficients is introduced to bring the forecast into agreement with the criterion of the growing profitability of production. Thus ends the first stage of prognostication, in which use is made of covariant derivatives.
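The first stage just described can be reduced to a minimal sketch: extrapolate labour productivity keeping its second time derivative invariant, then scale the result by a corrective coefficient. All numbers, including the coefficient, are invented; the book names the corrections but gives no values.

```python
# Dynamic extrapolation with an invariant second derivative P''.
def extrapolate(series, years):
    """Continue a series keeping its last second difference constant."""
    p = list(series)
    speed = p[-1] - p[-2]
    accel = (p[-1] - p[-2]) - (p[-2] - p[-3])  # invariant P''
    for _ in range(years):
        speed += accel
        p.append(p[-1] + speed)
    return p

productivity = [100.0, 106.0, 113.0]  # invented historical data
forecast = extrapolate(productivity, 3)

# A corrective coefficient (e.g. from the optimisation of consumption
# or expert estimates) scales the raw extrapolation.
corrected = [x * 0.98 for x in forecast[len(productivity):]]
print(forecast)  # [100.0, 106.0, 113.0, 121.0, 130.0, 140.0]
print(corrected)
```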

The initial step of the second stage is the scientific forecast made with the help of epistemological criteria. Science continuously examines the accumulated experimental material to determine the possible ways of theoretical thought imparting "inner perfection" to the material. Economic prognostication requires a summary of the scientific literature possessing a particularly concrete and well-grounded prognostic component. This summary is subject to individual and collective checking and must, in the final analysis, include certain hypothetical indications as to new physical,


chemical, bio-physical and bio-chemical schemes and cycles which can radically transform production and its structure. Now special investigation of a physical-economic and, in general, natural-scientific and economic genre may begin, with the researcher moving from hypothetical physical, etc., schemes to a more hypothetical technical realisation of these schemes, and on to still more hypothetical economic calculations of the impact this realisation makes on the structure, on investment, on the proportions of investment and on value. Such investigation results in coefficients of transformed calculations and techno-economic comparisons, as well as transformed structure curves and integral indices of production. The results of the first, ``covariant'' stage of prognostication are presented as curves in an (n+1)-dimensional space, and the coefficients that transform these curves are parallel both to the components of the fundamental metric tensor and to the tensor made up of its derivatives along the (n+1)-dimensional curve of the structure dynamics. The (n+1)-dimensional space corresponding to the economic forecast becomes non-Euclidean in those of its parts that come under the impact of scientific and technical events, such as the wide utilisation of breeders, the emergence of universal industrial lasers, or a new generation of control units for universal industrial application.

Conceptions of space curvature and the covariant derivative are peculiar to the econometry of optimism and to the entire optimistic philosophy of the atomic age. They make it possible to include in economic forecasts the effect of fundamental science, its unprecedented dynamism, radical shifts in power engineering, in technology, in the character of labour and in environmental conditions. This is, however, only an illustration and a specific example of a rather general tendency, of a rather general relationship between modern applied mathematics and an optimistic world outlook.

It has already been mentioned earlier that mathematisation of economic calculations introduces an element of credibility without which an optimistic mood cannot be converted into a scientific calculation and the latter cannot become a mood, i.e., an expression and condition of human happiness.

A trend most essential for the destinies of modern civilisation has acquired a pronounced character in modern mathematics. It is not a new trend; it has existed for a long time, but now it has become incomparably more apparent. This trend can be termed ``structuralism'', ``integralism'' or something else, but though revealing certain aspects and shades, these terms do not cover its essence. In order to solve a number of important, possibly the most important, physical as well as economic problems, the entire path and the entire ensemble of events connected with the given event are to be characterised, the analysis not being restricted to movement from one point to another, from one instant to another, from one local event to the next.

This tendency is encountered in the most different fields. In the early forties Feynman and Wheeler explained quantum mechanics by integrals over whole trajectories, not by references to a particle staying at a given spatial point at a given moment. But even earlier the mathematical apparatus of quantum mechanics had included ideas about transitions from one function to another and an evaluation not of local magnitudes, but of functions as a whole. A most general tendency was the analysis of structures, ascribing certain features not to separate individuals, but to whole structures.

Mathematics developed very powerful methods of variational calculus, making it possible to compare the world lines of particles and their integral features, and to find the optimal ones among them. The theory of integral equations, of operators, of the correspondence between functions was developed. In terms of mathematical conceptions, our epoch is most obviously characterised by the development of functional analysis, which unites from a single viewpoint different methods of the integral comprehension of being. The most characteristic physical idea is the relationship of the physical individual, of the elementary particle, to the Universe: the concept of the particle as a focus of interactions in the Universe, and at the same time the concept of the Universe, if not as a particle, then, at any rate, as an object with distinct integral features.

A similar integral or structural tendency in the evolution of econometry is obviously related to the philosophy of optimism. Something analogous to the bio-genetic law can be detected here: the ontogeny of econometry repeats the phylogeny of mathematics as a whole. Originally differential equations were in the foreground, then came integral equations, methods of variational calculus, tensorial and functional analysis.

Whether a curve is taken to show a certain process, defining with the help of variational calculus the maximum or minimum curve; or a structure is considered, characterising the correlation of elements particular to it; or a vector is given, possessing a certain direction and a certain combination of components---in all these cases mathematical thought is naturally associated with the optimal curve, with the optimal structure, with the optimal tendency. In econometry, in the mathematical investigation of purposive activity, the conception of the optimum is no longer associated with the quasi-purposive concepts of "purposive function", etc., but with the real goal, with that which distinguishes Man from Nature. It is here that the conception of the optimum is naturally related to the concept of optimism as a coefficient of the correlation between objective processes and the goal.
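The association of mathematical thought with an optimal curve can be given a small numerical sketch. Everything concrete below---the three candidate "world lines" and the functional that rewards growth while penalising abrupt change---is a hypothetical illustration, not a construction taken from the text:

```python
# Illustrative sketch: picking the "optimal" curve among candidate
# trajectories by comparing a discretised functional. The candidate
# curves and the functional itself are hypothetical examples.

def functional(curve, dt=0.1):
    """Approximate an integral criterion J[y]: reward the total rise of
    the curve, penalise abrupt change between neighbouring points."""
    total = 0.0
    for a, b in zip(curve, curve[1:]):
        growth = b - a                 # rise over one step
        effort = (b - a) ** 2 / dt     # penalty for abrupt change
        total += growth - 0.5 * effort
    return total

# Three hypothetical "world lines" of some index over ten steps.
curves = {
    "steady":   [1.0 + 0.1 * k for k in range(10)],
    "rushed":   [1.0 + 0.3 * k for k in range(10)],
    "stagnant": [1.0 for _ in range(10)],
}

# The optimal variant is the curve whose functional is maximal.
best = max(curves, key=lambda name: functional(curves[name]))
```

Under these assumed weights the "steady" trajectory wins: its rise outweighs the penalty for change, while the "rushed" curve is punished for abruptness and the "stagnant" one gains nothing.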

What is the relationship between the modern role of functional analysis in econometry and the philosophy of optimism?

It lies, first and foremost, in the transition from forecasting to planning, or rather, in the transformation of prognostication into an element of planning, in the comparison of different prognoses and the choice of the optimal one, in the corresponding transformation of variational problems into basic econometric tasks.

The criterion for choosing the optimal variant is its maximum optimism, the maximum coefficient of the correlation between the forecast and the integral goal of production and its transformation.

This is a dynamic goal. It does not consist in a specific local state of production. Approximation to this goal is reduced not to achieving a certain level of production and consumption, but to a certain speed and acceleration of this level, a certain dynamics, a certain world line of production.

Thus, a quantitative, metric comparison of the prognoses compares numerical magnitudes corresponding to the different curves and different functions. In other words, the functionals of the "world lines" are compared. The phonetic and semantic proximity of the concepts ``optimism'' and ``optimisation'' acquires a metric meaning: the former concept has a metric equivalent in the index of the correlation between forecast and goal inherent in each prognosis, the latter---in the maximum index, which is a fundamental economic index depending on the level, speed and acceleration of labour productivity.
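This "index of the correlation between forecast and goal" admits a toy numerical reading. The curves below, and the use of the ordinary Pearson correlation coefficient as the index, are illustrative assumptions only:

```python
# Toy illustration: ranking forecasts by a correlation index between
# each forecast curve and a goal trajectory. The curves and the choice
# of Pearson correlation as the index are assumptions for illustration.
from statistics import mean

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

goal = [1.0, 1.2, 1.5, 1.9, 2.4]      # desired accelerating dynamics
forecasts = {
    "A": [1.0, 1.1, 1.2, 1.3, 1.4],   # linear growth
    "B": [1.0, 1.2, 1.45, 1.8, 2.3],  # accelerating growth
    "C": [1.0, 1.0, 1.1, 1.1, 1.2],   # near-stagnation
}

# The optimal prognosis is the one with the maximum index.
best = max(forecasts, key=lambda k: correlation(forecasts[k], goal))
```

Forecast "B", which shares the goal's acceleration and not merely its direction, obtains the highest index.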

These remarks refer to the econometry of optimistic economic and social prognostication, because the radical changes in the dynamics of progress, following from the universal application of non-classical science, realise social ideals incompatible with the exploitation of man by man and with the elemental character of social laws. Free labour, its transformation into genuinely creative, reconstructing activity, precludes a class structure of society. A radical change in the structure of labour, in the structure of production, ensuring not only the highest possible level of labour productivity but its continuous acceleration, can only take place in planned production.

The modern synthesis of the two trends---the intensifying current from natural science to economy and the mobile, dynamic, flexible and fundamental principles of science---manifests itself most distinctly in the role played by mathematics in the scientific and technological revolution of the mid- and late 20th century. This role was already described at the beginning of the essay, and the further remarks about econometry are only a specific illustration of a more general tendency. With regard to natural science, mathematics seems, at first glance, to be a different source of scientific, technical and economic transformations. The role of mathematics expresses the mobility of the general, as well as logical and mathematical, principles of science peculiar to our century, which realise philosophical conceptions advanced much earlier. But, as has already been said, modern mathematics acquires ontological value in its most fundamental logical turns, becoming a science of being, lending physical content to its most general, mathematical conceptions.

In the light of this tendency, certain trends in contemporary scientific thought that seemed originally independent of and even contradictory to one another appear to be uniform, logically and historically linked and, possibly, identical. These trends include the general systems theory. The latter appeared in opposition to the explanation reduced to answering the question "What does it consist of?", an explanation which was sometimes termed ``elementarism'' or the "concept of elementarity". The search for elementary ``bricks'' that make up being, claiming the role of a universal scientific method, was rejected by the idea of system, entity, organicism, based on biological concepts. "The science of entity and organicism par excellence---biology---is called upon to play in our world-outlook such a role as it had never played before," wrote L. von Bertalanffy in 1932.*

But the general systems theory was not a repetition or a mere continuation of the philosophy of Entity that had been developing for centuries. Nor was it a defence line protecting the specific biological organicism and entity from the threat of mechanicism. Quite the reverse, it was the start line of an attack, claiming to embrace inorganic nature, the traditional territory of mechanicism. This was implied by the word ``general'' in the name of the new scientific trend. But following this path, general systems theory had to come closer to structural analysis or, as it is sometimes termed, the structural approach to the world, and to metrical categories.

The correlation of the entity and the individual, or the local, received a very general and promising treatment in mathematics. The idea of entity was an ontological presumption of functional analysis, which studies curves corresponding to certain kinds of functions; in other words, it considers integral objects, determining their functionals. The most vivid demonstration of the physical equivalents of functional analysis, imparting ontological value to the latter, are the ``prognostic'', by no means unequivocal, conceptions of elementary particles, their very existence as a result of interactions embracing great systems, the Metagalaxy included. Contemporary physics rejected the concept of elementarity not only in its mathematical methods, but in its ontological constructions themselves---this was discussed in the essay "De rerum natura". As for economic thought, Marx's theory rejected the concept of elementarity, social atomistics, modes of production à la Robinson Crusoe. The teaching about commodity fetishism and abstract labour most clearly shows the relationship between the social nature of economic categories and integral metric definitions. Abstract labour is a homogeneous, distributed and consequently quantitatively defined labour. Its quantitative determination expresses the distribution of the labour efforts of society---the structure of production. Value can as little be defined by the ``inner'', individual properties of a commodity as the mass, energy and charge of a particle can be defined by the particle's own nature while the macrocosm and the force fields are ignored. In economics the relationship between quantitative definitions and the systematic, integral conception of production is now becoming more obvious and substantial. We shall dwell only on one contemporary problem, the correlation between the metric character of economic categories and the environmental tasks of production.

Shigeto Tsuru took the remark of the Little Prince, mentioned earlier, about the quantitative character of adults' interests as an epigraph to his report already referred to, "In Place of G.N.P.". The main idea of the report is the unmetric character of modern criteria of production and, first of all, of environmental criteria. Tsuru, as we have seen, cites a number of facts and ideas pertaining to the negative environmental effect of production in a "system characterised by individual striving for maximum profit, which, to all intents and purposes, is becoming ever less capable of meeting the task of using in the best way that which is given by Nature".

We are convinced that the ecological problem cannot be solved consistently, systematically and effectively in such a system. It can only be solved in a planned, socialised production. But in this case the importance of metric categories increases. Ecological criteria are not reduced to the question "How much?" (in Saint-Exupery---"How much does he earn?"); they rather refer to the question: "How balanced is it, how closely does the given correlation, the given structure approximate the optimum?" This integral, structural, systems question requires a metric answer, but of a more complex nature. The optimal dynamic structure is expressed by a curve in an n-dimensional space of structures, by the "world line" possessing the maximum functional that measures the realisation of the goals of production, including the ecological goals. It is possible to show the close and fundamental relationship of the new ecological criteria of production to econometric criteria, and of the planetary and age-long calculation of resources and ecological values to the fundamentally metric character of economic thinking. As a matter of fact, this is a connection and not a contrast. In principle, ecology should not be regarded as an unmetric criterion, for its metrics exist, though they are non-traditional, and give rise, first of all, to differential correlations measuring the rate and acceleration of scientific, technological and economic progress. But the definition of differential correlations could be regarded, rather archaically, as "finding the tangent". In any case, this term characterises, in a metaphoric sense naturally, the style of modern economic thought, which defines the "here and now" situations by their distant effect. The more economic thought focuses on dynamic problems and differential indices, the closer it studies the future, the distant results of modern economic, scientific and technological progress, including to an ever greater extent the ecological results, and the closer it investigates the integral results and indices of economic dynamics.

* L. von Bertalanffy, Theoretische Biologie, Bd. I, Berlin, 1932, p. 5.

What is the relation of these integral indices to optimism?

We wish to make a preliminary remark about the concept of being in modern non-classical science. As far as the individual is concerned, this concept is connected with the individual enclosed in an entity and interacting with entities. For this reason the concept of being can be more intensive or less intensive, it can be greater or smaller, depending on the intensity of the interactions, on the individual's inclusion in a system with a greater or smaller macroscopic structure, ordering, negentropy. But transition from an intensive concept to a metric one, to a measured intensity of being, is a necessity here. It is now possible to speak about an optimal metric structure, about certain macroscopic gradients, about initial conditions promising an evolution of the system that can be predicted in advance, about a functional characterising this evolution. It is precisely the metric character of the initial structure and the possibility of measuring its result, its effect, its realisation in dynamics, in the future, that lie at the basis of modern optimism.

Indeed, modern optimism is faith that objective processes will realise the goal set by Man. This faith is based on the fact that objective processes are determined with some exactness by the initial conditions prepared in advance by Man's purposive activity, that this activity penetrates a new field, creating noozones there---the initial store of negentropy. Purposive activity signifies the determination of the initial conditions of a future evolution suitable for each structure, and of a corresponding functional, as well as of an optimal initial structure conforming with the maximal purposeful, teleological functional in the forecast.

In order to show the more general metric foundations of modern optimism and Man's planning activity, we have substituted the more general concepts for the familiar economic and econometric concepts of the fundamental economic index Q = f(P, P', P'') and the initial structure, i.e., the distribution of resources among the branches of production and science. We wished to show the fundamental relation of optimism to metrics and, in the final analysis, to the modern integral and differential style of scientific thought, to the inclusion of the individual in ever larger systems, not eliminating individuality, but transforming both the individual and the system. Alongside other processes determining the progress of the scientific and technological revolution, the ``current'' from natural science, from its generalising processes, leads to the realisation of the goals of science and optimistic forecasts, to a rational transformation of Man, the character of his labour and the ecological environment.
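The fundamental index Q = f(P, P', P'') can itself be sketched numerically, with finite differences standing in for the derivatives of labour productivity P. The weighted-sum form chosen for f, the weights, and the productivity figures are hypothetical illustrations, not the author's construction:

```python
# Minimal sketch of a fundamental index Q = f(P, P', P''), where P is
# the level of labour productivity, P' its rate of growth and P'' its
# acceleration. Finite differences approximate the derivatives; the
# weights and the weighted-sum form of f are hypothetical choices.

def fundamental_index(p, w_level=1.0, w_rate=2.0, w_accel=4.0):
    """p: successive productivity readings; returns Q for the latest one."""
    level = p[-1]                        # P
    rate = p[-1] - p[-2]                 # first difference ~ P'
    accel = p[-1] - 2 * p[-2] + p[-3]    # second difference ~ P''
    return w_level * level + w_rate * rate + w_accel * accel

productivity = [100.0, 104.0, 109.0, 115.0]  # hypothetical series
q = fundamental_index(productivity)
```

Weighting the rate and the acceleration more heavily than the level mirrors the text's emphasis: the index depends not on a local state of production but on its dynamics.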

REQUEST TO READERS

Progress Publishers would be glad to have your opinion of this book, its translation and design and any suggestions you may have for future publications.

Please send all your comments to 21, Zubovsky Boulevard, Moscow, USSR.

Professor Boris Kuznetsov is a philosopher, physicist, historian of science and economist. He is President of the International Einsteinian Committee. Some of the many books he has written have been translated into English: Einstein (Moscow, 1965), Einstein and Dostoyevsky (London, 1972), Reason and Being (Boston, 1974).

The Philosophy of Optimism, published in Russian (Moscow, 1972) and in French (Brussels, 1972), has been revised and supplemented for this edition. The book deals with problems of philosophy, the content and effect of modern non-classical science.
