
1Operating systems are programs that handle a computer’s hardware resources—its processors, memory, inputs, outputs, and so on—while providing users with an interface to the machine. Concealed behind this technical role are non-trivial ontological operations—whether “ontology” means, as in computer science, the conceptual structure of a domain or, as in philosophy, a theory of being. By ontology we mean everything recognized as existing within a given context, and the structures that allow this to be described and manipulated.

2What are these non-trivial ontological operations? The operating system must comprehend both the ontology of the hardware—consisting of a processor (within which are instructions, registers, operands, addresses, etc.), memory, various inputs and outputs—and also one or more ontologies of human representations consisting, for instance, of cursors, commands and arguments, mice, windows, and icons. These ontologies are ways of thinking about how computers work, and are expressed in different languages—typically machine language, on one hand, and the languages of actions and textual or graphical statements on the other. The operating system’s source code provides a translation between the heterogeneous ontologies of the mechanic and the ergonomist.

3But far from taking place directly between these two languages, this translation usually passes through a third language, one idiomatic to the operating system itself. The ontology this expresses, which can be considered a virtual machine (Krakowiak 2014), does not overlap with those of either the hardware or the interface. For example, the “processes” the operating system manipulates do not exist at the hardware level, and users do not usually deal with the concept except when system administration is needed—that is, when one descends into the ontology of the operating system. Similarly, the concept of “file” which the operating system uses does not generally match users’ intuitive understanding of the term: many would be surprised to learn that some “files” cannot be written to disk. [1] The “middle way” ontology of operating systems is expressed in a system of concepts that can be described for itself and which—even if it is immersed in another system through its source code—is not strictly determined by it. Those who create operating systems are granted, and make use of, substantial conceptual freedom.

4The significance of these ontologies goes beyond technical considerations. Considerations of intelligibility are often also at play. For philosophers, an operating system’s source code represents the ideal case of a system of concepts which includes fundamental concepts such as act, object, time, and space—all defined quite explicitly in the language of another ontology, itself unambiguously defined. An operating system’s system of concepts can therefore be studied just as structural historians of philosophy, following Martial Gueroult, have reconstructed the architecture of philosophical systems.

5Elsewhere, we have suggested that the file-process duality in Unix can contribute to metaphysical discussions of the relationship between action and object (Mélès 2013). In the present article, we examine the concept of time in the Unix operating system, showing on the one hand how it differs from that of the machine and, on the other, how it is connected to Unix’s distinctive concept of action, the process.

6We begin with a discussion of the temporality used by the synchronous machines defined by the EDVAC report of 1945. We then show that the temporality of Unix is distinctive by examining the source code of Version 6.

Time in synchronous von Neumann machines

7To discuss Unix’s temporality, we must first explain what it is distinguished from. The dominant model of this contrasting temporality is the “von Neumann machine” defined by the “First Draft of a Report on the EDVAC” in 1945 (von Neumann 1945, section 2). The report begins by presenting the main elements of the machines being described. They must have a central arithmetical part (CA) and a central control part (CC). These deal with the algorithmic work to be done, performing arithmetic (addition, subtraction, multiplication, division, and perhaps calculating square roots, logarithms, etc.) and moving through algorithms by making mechanical decisions. The machine must also have a memory (M)—i.e. a storage space for the initial, intermediate, and final data of the calculation, the instructions of the program, arithmetic tables, and so on. Finally, there must be inputs (I) like a keyboard or punch cards, outputs (O) like screens and printers, and a buffer for inputs and outputs (R). These five ingredients are, apparently, all that is needed for a von Neumann machine.

8But is that true? There is clearly something else required: before describing the concrete application of the machine to different basic algorithms (particularly the four fundamental operations of arithmetic), the report devotes three times as many pages to the problem of time in automatic calculators, describing two ways—which it calls asynchronous and synchronous—for machines to regulate their own rhythm, and so “time itself.”

9Under the “asynchronous” approach, the machine regulates itself “autonomously, by the successive reaction times of its elements.” Each of these operates at its own rhythm, and so “all stimuli must ultimately originate in the input” (von Neumann 1945, section 4.1): when operating asynchronously, the machine’s time is that of its parts themselves, which depend in turn on the temporality of the stimuli provided as input. This is an essentially external temporality, one that imposes itself: time is dictated by the environment, constrained by the speed of the machine’s own parts. This is the time of the experimental physicist, who establishes extrinsic correspondences between a series of observed physical phenomena, on the one hand, and on the other a series of phenomena that appear on the instruments used to measure time. In such cases, the phenomena being measured have no need of chronometers: a body will fall even when the clock is broken. In asynchronous machines of the sort described in the EDVAC report, calculations follow a temporal path that is analogous to physical processes, and which is independent of any possible instrument for measuring time.

10John von Neumann recognized that the asynchronous approach would be useful for “slow” machines—i.e. those executing between 100 and 1000 operations per second—even if executing several operations at once would require a corresponding increase in the space occupied by parts of the machine. For instance, to execute two additions simultaneously, one would have to duplicate the components used for addition so that the two operations would not collide. For very fast machines, however—like those at the time which used vacuum tubes, and whose calculation frequency was on the order of megahertz, a million operations a second—there was less need for the asynchronous approach. [2] Time was not a scarce resource for such machines, and so it was less expensive to allow moments of latency if, in return, these allowed for greater predictability:

11

Alternatively, they may have their timing impressed by a fixed clock, which provides certain stimuli that are necessary for its functioning at definite periodically recurrent moments. This clock may be a rotating axis in a mechanical or a mixed, mechanico-electrical device; and it may be an electrical oscillator (possibly crystal controlled) in a purely electrical device.
(Section 4.1)

12Under the synchronous approach, time is no longer dictated by the rhythm of the machine’s parts. From a chronometric point of view, the machine might even waste time, if that is the price of guaranteeing that every intermediate operation is finished before moving from one set of “simultaneous” operations to the next.

13Having presented the choice between a centralized and a decentralized rhythm, von Neumann opts for the first: “If reliance is to be placed on synchronisms of several distinct sequences of operations performed simultaneously by the device, the clock-impressed timing is obviously preferable” (section 4.1). The pace of the machine’s activity must be set by periodic pulses: “We propose to use the delays τ as absolute units of time which can be relied upon to synchronize the functions of various parts of the device” (section 6.3). And so a sixth type of component must be added to the machine, distinct from those to do with calculation proper that have been described so far—i.e. the arithmetical and control units, the memory, the buffer, the inputs, and the outputs. This extra component is the clock.

14

The central clock is best thought of as an electrical oscillator, which emits in every period τ a short, standard pulse of a length τ′ of about (1/5)τ – (1/2)τ. The stimuli emitted nominally by an E-element are actually pulses of the clock, for which the E-element acts as a gate. There is clearly a wide tolerance for the period during which the gate must be kept open, to pass the clock-pulse without distortion.
(Section 6.3)

15The clock’s name is deceptive: even if, viewed from the outside, its operation is based on the regularity of a physical phenomenon—typically the oscillation of a quartz crystal when an electric current passes through it—it does not tell the time, as though its job were simply to measure it. Rather, it tells the machine what time it is. From the machine’s point of view, this is not a component that reads the time, but writes it.

16The choice between synchronous and asynchronous machines involves far more than the technical considerations von Neumann set out. It offers two conflicting conceptions of the relationship between time and activity: one in which events take place independently of the measuring tool, and one in which that tool exercises a causal influence on how the activity takes place. We could draw a comparison with a runner who sometimes uses their watch to time themselves, and sometimes to check when it’s time to go shower. Or we might think of the mutual lack of comprehension between a speaker at a conference, focused on the time their topic intrinsically demands, and the chair who is, increasingly obviously, looking at their watch—an instrument for measuring time quite extrinsic to the talk. As labels for these two ways of using time, we will speak of passive and active clocks. As we can see, these two conceptions of the relationship between time and activity go far beyond the world of computers under discussion here.

17The synchronous approach is still predominant in the industry, partly because of physical considerations about the design of parts of the machine. [3] The processor performs calculations far more quickly than memory can be read or written; that, in turn, is generally faster than reading from or writing to inputs and outputs. These components must interact in strict alternation. One must be able to place the value recorded at some particular point in memory in some register of the processor; store in memory the value contained in some register of the processor; consult in memory the next instruction to carry out; write the result contained in the buffer to output, and so on. If these different actions were allowed to unfold in their own time they would collide almost immediately. The results would be unpredictable from the point of view of computer science; they would take a physicist to explain.

18Even if the processor is often “idle”—i.e. if it is sometimes inactive—the synchronous approach allows us to avoid such a catastrophe. And so our computers typically contain a quartz oscillator driving a counter that, each time it reaches zero, emits a pulse, each pulse signaling that a certain set of basic operations are to be carried out “simultaneously” (or at least close enough that this appears to be so on the temporal scale in use). [4] A three-gigahertz processor can thus carry out on the order of three billion basic operations per second. At each pulse, the machine reaches a stable state—like the children’s game “Red Light, Green Light,” where the players have to stand still when the person who’s “it” turns around, but can move freely when their back is turned. In synchronous machines—as in a broad class of activities more generally—a clock divides up physical time to regulate an activity, which is itself broken up into basic operations.

19Working at the level of the processor lets us clearly show the link between temporality and activity. But we thereby restrict the concept of activity to that of the basic operation. What happens to time when we look at a different scale of activity?

Time and processes in Unix

20We will examine high-level computer time using the source code to Version 6 Unix (Unix 1975)—an “old” system, certainly, but one that already contained the primary concepts that still structure a large branch of operating systems, including Linux and the BSD family.

21This version of Unix was the first to be released outside AT&T, and for many reasons played an important historical role. First, it described a conceptually elegant operating system that implemented many of the ideas that had been developed in the early 1960s for CTSS and Multics, both multitasking, multi-user operating systems (Ritchie 1978; Walden and Van Vleck 2011). Unix’s first designer, Ken Thompson, worked on the Multics project until his employer, Bell, left the consortium in 1969. His team—Rudd Canaday, Douglas McIlroy, Joseph Ossanna, and Dennis Ritchie—may have lacked funds, but they at least had a free hand to implement his ideas (Ritchie 1984).

22The second reason for the success of Version 6 is that it was written for DEC’s PDP-11/40, PDP-11/45, and PDP-11/70 (1972–5). These were versatile, inexpensive computers used widely in American universities. Anti-trust law meant AT&T, which was in a monopoly position, was barred from commercializing Unix; Version 6 could therefore spread freely across universities and research centers (Lazard and Mounier-Kuhn 2016, 173–5). The encounter between Bell Research Laboratories and the university community provided all the material for an “exciting sociological tale” (Salus 1994, 1 and passim).

23Finally, Version 6 of Unix was popularized by John Lions, who, in 1977, “wrote a commentary on the UNIX source code of the type normally reserved for the works of Chaucer or Shakespeare” (Tanenbaum 2009, 718). This commentary circulated sub rosa for many years, and is still an essential tool for anyone examining the structure of Unix in detail (Lions 1996).

24These are the reasons we have chosen to study the Unix approach to time using Version 6. Nothing gives a better insight into the ontology of an operating system than the list of its “system calls”—that is, the ways in which user mode can access kernel mode. As Marc Rochkind rightly says, “system calls define what UNIX is” (Rochkind 2004, xi). Each system call gives the user a way of tugging at the kernel’s sleeve and asking for its permission to carry out an action—knowing full well that the kernel alone decides what users are allowed to do.

25So what are the system calls about? To find out, we can examine the file sysent.c in the Unix source code. This contains a table associating each system call with the routine defining it in files like sys1.c and sys4.c. Note when reading that the remarks between “/*” and “*/” are comments, meant for human readers and left unexecuted by the machine. The file has a very simple structure: it is made of a table of integers which, at line 2910, is given the name sysent. As the initial comment says (ll. 2907–8), each line of sysent gives the number of arguments expected (0, for instance), the address of the function to call (for instance, &nullsys—i.e., the address of the routine nullsys), and a comment indicating the name and number of the system call (for instance, indir for system call 0). The system calls are numbered automatically, beginning at 0. Let’s see what a quick glance can already show us about the list of system calls, and so about the ontology of Unix.

[Table: the sysent system call table (sysent.c), listing for each call number its argument count, the routine implementing it, and its name]

26We can spot in the list the system calls related to processes: exec and fork to create processes, setuid and getuid to manage users’ permissions for a process, nice to define scheduling priorities, kill to send a process a signal, and so on. Others relate to the file system: unlink deletes a file, chdir changes the directory, chmod modifies file permissions, chown changes a file’s owner, and so on. So far, everything seems to conform to Andrew Tanenbaum’s division of system calls into “roughly . . . two broad categories: those dealing with processes and those dealing with the file system” (Tanenbaum and Woodhull 2006, 20). But in both Unix and Tanenbaum’s MINIX, which derives from it, the expression “roughly” conceals those system calls that deal with time, like stime and gtime. For the operating system, time seems to be a notion as fundamental as those of action and object. [5]

27But what sort of time is this? Does it resemble clock time in a synchronous von Neumann machine?

A typology of clocks

28To understand time in the Unix kernel, we must distinguish between three clocks our computers may contain (and these days typically do contain) (Bovet and Cesati 2007, chapter 5).

29The first, called the TSC (Time Stamp Counter), is the active quartz oscillator clock described above. This allows for the low-level synchronization of operations. A second, called the RTC (Real Time Clock), gives the date and time; a battery allows it to keep working if the power fails. This is the passive clock that counts physical time. The third, which we will now turn our attention to, is called the PIT (Programmable Interval Timer). It is an active clock governing high-level synchronization—no longer at the level of the machine’s elementary operations, but at a higher degree of abstraction, that of the operating system.

30The PIT is programmed to periodically trigger an interrupt—i.e. to send a specific signal to the kernel. The period is set in the header files of the clock driver, the part of the source code describing how the system is to interact with the clock. The period is consequently not, in general, the result of a material constraint, as with the TSC, but a software decision. In param.h, which defines the system’s constants, a section is introduced by the comment

0128: /* tunable variables */

31It does not, as we might think, contain genuine “variables”—i.e. values that may change during execution of the program. Instead, it contains arbitrary constants, [6] including the value

0147: #define HZ 60	/* Ticks/second of the clock */

32This value dictates the frequency of the signals sent by the programmable clock; it was probably chosen empirically. [7] As can be seen, the time scale of 10⁻² seconds is of an entirely different order of magnitude from that which regulates the processor, at 10⁻⁹ seconds. Operating system time is slow time. The computer’s activity, like that of human beings, is structured by different degrees of temporality.

33At the high level of the operating system as well as the basic level of the synchronous von Neumann machine, the clock is one of the very few genuinely indispensable components of the computer. We said above that a broken clock does not stop a body falling; as we will see when we look at the function main(), defined in main.c, things are very different in Version 6 Unix. This very short function, reproduced below, describes the heart of the operating system in a hundred or so lines—whitespace and comments included.

1533: * Initialization code.
1534: * Called from m40.s or m45.s as
1535: * soon as a stack and segmentation
1536: * have been established.
1537: * Functions:
1538: *	clear and free user core
1539: *	find which clock is configured
1540: *	hand craft 0th process
1541: *	call all initialization routines
1542: *	fork - process 0 to schedule
1543: *	     - process 1 execute bootstrap
1544: *
1545: * panic: no clock -- neither clock responds
1546: * loop at loc 6 in user mode -- /etc/init
1547: *	cannot be executed.
1548: */
1549:
1550: main()
1551: {
1552:	extern schar;
1553:	register i, *p;
1554:
1555:	/*
1556:	 * zero and free all of core
1557:	 */
1558:
1559:	updlock = 0;
1560:	i = *ka6 + USIZE;
1561:	UISD->r[0] = 077406;
1562:	for(;;) {
1563:		UISA->r[0] = i;
1564:		if(fuibyte(0) < 0)
1565:			break;
1566:		clearseg(i);
1567:		maxmem++;
1568:		mfree(coremap, 1, i);
1569:		i++;
1570:	}
1571:	if(cputype == 70)
1572:	for(i=0; i<62; i=+2) {
1573:		UBMAP->r[i] = i<<12;
1574:		UBMAP->r[i+1] = 0;
1575:	}
1576:	printf("mem = %l\n", maxmem*5/16);
1577:	printf("RESTRICTED RIGHTS\n\n");
1578:	printf("Use, duplication or disclosure is subject to\n");
1579:	printf("restrictions stated in Contract with Western\n");
1580:	printf("Electric Company, Inc.\n");
1581:
1582:	maxmem = min(maxmem, MAXMEM);
1583:	mfree(swapmap, nswap, swplo);
1584:
1585:	/*
1586:	 * set up system process
1587:	 */
1588:
1589:	proc[0].p_addr = *ka6;
1590:	proc[0].p_size = USIZE;
1591:	proc[0].p_stat = SRUN;
1592:	proc[0].p_flag =| SLOAD|SSYS;
1593:	u.u_procp = &proc[0];
1594:
1595:	/*
1596:	 * determine clock
1597:	 */
1598:
1599:	UISA->r[7] = ka6[1]; /* io segment */
1600:	UISD->r[7] = 077406;
1601:	lks = CLOCK1;
1602:	if(fuiword(lks) == -1) {
1603:		lks = CLOCK2;
1604:		if(fuiword(lks) == -1)
1605:			panic("no clock");
1606:	}
1607:	*lks = 0115;
1608:
1609:	/*
1610:	 * set up 'known' i-nodes
1611:	 */
1612:
1613:	cinit();
1614:	binit();
1615:	iinit();
1616:	rootdir = iget(rootdev, ROOTINO);
1617:	rootdir->i_flag =& ~ILOCK;
1618:	u.u_cdir = iget(rootdev, ROOTINO);
1619:	u.u_cdir->i_flag =& ~ILOCK;
1620:
1621:	/*
1622:	 * make init process
1623:	 * enter scheduling loop
1624:	 * with system process
1625:	 */
1626:
1627:	if(newproc()) {
1628:		expand(USIZE+1);
1629:		estabur(0, 1, 0, 0);
1630:		copyout(icode, 0, sizeof icode);
1631:		/*
1632:		 * Return goes to loc. 0 of user init
1633:		 * code just copied out.
1634:		 */
1635:		return;
1636:	}
1637:	sched();
1638: }

Constituting a space

34The first task of Unix’s main function is to initialize and free memory—i.e. the space available to take data. [8] This is split across two places: core and swap memory. The second of these is the section of mass memory reserved as an extension of the core.

35The initial segment of main memory is taken up by the running operating system, and it would clearly be disastrous to initialize it. A space of size USIZE should also be reserved (l. 1560) for the user identifier of a process we will encounter later. [9] To initialize the main memory (coremap), the main() function takes as its starting point the block at the end of the space occupied by the operating system (*ka6), to which USIZE blocks are added. This point is indexed by the variable i:

1560:	i = *ka6 + USIZE;

36For as long as it can read the contents of the first byte of the chosen block (using fuibyte(), l. 1564)—that is, as long as the index refers to an existing memory location—main() erases the contents of the block using clearseg() (l. 1566), increments the counter showing the maximum size allowed per process (maxmem++, l. 1567), and adds the initialized block (of size 1) to the list of available memory. It then proceeds to the next block:

1568:	mfree(coremap, 1, i);
1569:	i++;

37After a few lines of code specific to the PDP-11/70 (ll. 1571–5), the function displays the size of the memory available, followed by a legal notice (ll. 1576–80). It then caps the maximum memory allowed for each process at the MAXMEM constant (defined in param.h; l. 1582). In summary, to initialize memory, Unix checks that each target location exists, erases its contents, and updates the list of available memory blocks. It is only after this procedure that the operating system knows, and can display, the size of the main memory. We strongly recommend that the interested reader try this out on a PDP-11 emulator. [10] The size of the memory is shown, then the rights notice, and finally the command prompt (the character “#”):

mem = 1042
RESTRICTED RIGHTS

Use, duplication or disclosure is subject to
restrictions stated in Contract with Western
Electric Company, Inc.
#

38The free portion of the main memory (coremap) has now been initialized, and main() now adds the whole of the swap space (swapmap) to the list of available memory. The swap contains nswap blocks, the first of which is indexed by swplo:

1583:	mfree(swapmap, nswap, swplo);

39The first task of the main function of Unix is to reinitialize the entire memory—with the exception of its initial segment, which is reserved for system administration—and to update the list of available memory.

40The operating system’s ontology is provided with a space as a matter of priority. But can we speak here of a real change in relation to the underlying ontology of the machine—where, after all, memory is also a space? The properties of this space have indeed been changed profoundly. Before these few lines, the memory space was a terra incognita, its size unknown and its contents unpredictable. The operating system knows only two things about it. The first is a distinguished point, namely the initial position indexed by

1560:	i = *ka6 + USIZE;

41The second is the fact that it possesses a discrete, unidimensional, and finite topology, guaranteeing that the whole of the space will be traversed by incrementing i as long as possible. The system knows only the very general topological properties of this uncontrolled space. But after executing these few lines of code, not only are this space’s contents under control (since they have been explicitly reset), but we possess a complete map of it, currently composed of two large free zones: the main memory and the swap. Portions of this can now be allocated to each process’s stack, up to a maximum of maxmem. In other words, the machine’s purely topological space—still undefined and unexplored—has given way to the metric space of the operating system, one that has been measured and signposted.

The prime mover

42With a space now under control, the system proceeds to the second main task: initializing process 0, the operating system’s fundamental process and the ancestor of all the others.

43We readers of the source code know that, from the machine’s point of view, this process is created—in this case, by calling main(). But from the operating system’s internal point of view, it looks precisely like an Aristotelian “prime mover” (Aristotle 1936, chapter 5). This is the action that sets off all the others without itself being moved by another process—something both Aristotle and Unix must posit the existence of, as both inscribe causal processes within a tree structure, and refuse to allow recourse to a truly infinite temporal regression. The tree must have a root.

44This prime mover has all the characteristics of an action in general—in this case, of a process: it has an owner descriptor and a process descriptor. As mentioned earlier, in main() (l. 1560), the user identifier comes immediately after the space in memory occupied by the kernel (*ka6). A space is reserved for it, its size set by the constant USIZE:

1589:	proc[0].p_addr = *ka6;
1590:	proc[0].p_size = USIZE;

45The status of process 0 is set as “ready to run” (SRUN). Two flags are set, one indicating that the process’s user identifier is in main memory (SLOAD), and the other that the process should never be relegated to swap (SSYS):

1591:	proc[0].p_stat = SRUN;
1592:	proc[0].p_flag =| SLOAD|SSYS;

46Finally, the process identifier is associated with the user identifier:

1593:	u.u_procp = &proc[0];

47All of the information about process 0, particularly its permissions, has now been given. Actions in Unix are exclusively processes, and in launching the system the first action worthy of the name has just been carried out. It is henceforth impossible, in the system’s internal language, to return to a point higher up in the process of creation.

Unix’s own time

48At this stage, Unix’s ontology involves a space that has a metric and an action that, from the system’s point of view, has every characteristic of a prime mover. The third major task is to establish the operating system’s temporality.

49Unfortunately, we need to make some remarks at this point about hardware. [11] The PDP-11—the only computer Version 6 was designed to run on—could have two types of clock: either a KW11-L line frequency clock, regulated by the electric current, with the physical address 777546, or else a KW11-P programmable real-time clock, accessible at the physical address 772540. [12] Unix must find out as quickly as possible which clock it is using. The addresses of the two possible clocks have already been defined, allowing for translation, [13] in main.c:

1509: #define CLOCK1 0177546
1510: #define CLOCK2 0172540

50We have to discover whether at least one of these two clocks is available, and which one. The system begins by storing the address of CLOCK1 in the pointer lks:

1601:	lks = CLOCK1;

51If it cannot read the value stored at this address using fuiword(), it falls back to CLOCK2:

1602:	if(fuiword(lks) == -1) {
1603:		lks = CLOCK2;
	[…]
1606:	}

52If it also fails to find the second clock, it triggers a panic:

1604:		if(fuiword(lks) == -1)
1605:			panic("no clock");

53Such a situation is exceptional: the panic() function, which halts the entire system, appears only rarely in the Unix source code, which was designed to keep working in the most extreme conditions. In other words, the clock is one of the very few components that is absolutely essential for the system’s operation. If either clock is present, it is reset by the instruction

1607:	*lks = 0115;

54This sets its value to the octal number 115. [14] From this point on, the system is equipped with a temporality, its rhythm set by regular interrupts.

55The operating system’s general conditions of possibility have now been assembled: a metric space, a prime mover, and a structured temporality. The system can begin normal operation, a move from the metaphysical to the physical. It is on this basis that Unix initializes the file system (ll. 1609–19), creates the initialization process—process 1, the parent of all the normal processes (l. 1627) [15]—and launches the infinite loop that schedules processes:

1637:	sched();

56All these operations—crucial to the operating system, and the only ones accessible to the user—are possible only on the basis of the three transcendentals (space, prime mover, and temporality) that precede them.

Periodic operations and priority levels

57But precisely how is temporality used? After all, “[a]ll the clock hardware does is generate interrupts at known intervals. Everything else involving time must be done by the software, the clock driver” (Tanenbaum and Woodhull 2006, 206). The clock driver—the program responsible for the kernel’s interactions with this clock—describes the operations triggered by each interrupt.

58The driver is defined by the clock() function in clock.c. [16] Its execution is always untimely, interrupting another process—whether process 1, an earlier instance of the clock driver, or another process entirely. In the driver code, the interrupted process is given the generic name ps.

59The beginning of clock() is executed at priority 6. No other device is granted such urgency. Nothing, in other words, is more urgent than time (Lions 1996, 11–1a). The driver is thus able almost instantly to reinitialize the clock (whichever one is being used) with the command

3730:	/*
3731:	 * restart clock
3732:	 */
3733:
3734:	*lks = 0115;

60We have already seen the first occurrence of this in main() (l. 1607). The system then updates the PDP-11/45 and PDP-11/70 system console (l. 3740). It executes a small number of functions that take absolute priority, all of them to do with displaying characters on the terminal. [17] These functions are stored, with their arguments, in an array of structures named callout, defined in systm.h; the table lists the functions to be executed, their arguments, and the number of ticks remaining for each. A function with 0 ticks left to wait must be executed immediately; in all other cases, the remaining count is decremented as the ticks pass:

0253: /* The callout structure is for a routine
0254:  * arranging to be called by the clock interrupt
0255:  * (clock.c) with a specified argument,
0256:  * within a specified amount of time.
0257:  * It is used, for example, to time tab delays
0258:  * on teletypes. */
0259:
0260: struct callo
0261: {
0262:     int c_time;      /* incremental time */
0263:     int c_arg;       /* argument to routine */
0264:     int (*c_func)(); /* routine */
0265: } callout[NCALL];

61An important part of the clock driver is devoted to priority management. We will examine the following fragment in detail:

3742: /*
3743:  * callouts
3744:  * if none, just return
3745:  * else update first non-zero time
3746:  */
3747:
3748: if(callout[0].c_func == 0)
3749:     goto out;
3750: p2 = &callout[0];
3751: while(p2->c_time<=0 && p2->c_func!=0)
3752:     p2++;
3753: p2->c_time--;
3754:
3755: /*
3756:  * if ps is high, just return
3757:  */
3758:
3759: if((ps&0340) != 0)
3760:     goto out;
3761:
3762: /*
3763:  * callout
3764:  */
3765:
3766: spl5();
3767: if(callout[0].c_time <= 0) {
3768:     p1 = &callout[0];
3769:     while(p1->c_func != 0 && p1->c_time <= 0) {
3770:         (*p1->c_func)(p1->c_arg);
3771:         p1++;
3772:     }
3773:     p2 = &callout[0];
3774:     while(p2->c_func = p1->c_func) {
3775:         p2->c_time = p1->c_time;
3776:         p2->c_arg = p1->c_arg;
3777:         p1++;
3778:         p2++;
3779:     }
3780: }
[…]
3787: out:

62As long as the list is not empty (ll. 3748–9), the clock driver advances the countdown by decrementing the tick count of the first entry whose time has not yet expired (ll. 3750–3); since each entry stores its delay relative to its predecessor, this single decrement advances the whole list. The driver can then move on to a slightly less urgent operation, executed at priority 5 (l. 3766), as long as the process ps which the current clock driver instance interrupted does not take precedence over it (ll. 3755–60). The functions with 0 ticks remaining are then executed (ll. 3767–80). In short, at regular intervals the system considers the entire set of waiting tasks and carries them out in order of priority, with temporal tasks given absolute priority.

63The driver—which still has high priority, either 5 or 6—then performs other time-related tasks. It increments the counter that measures the time consumed by the process ps: either u_utime or u_stime, depending on whether the interrupted process was running in user mode (UMODE) or in kernel mode:

3788: if((ps&UMODE) == UMODE) {
3789:     u.u_utime++;
[…]
3792: } else
3793:     u.u_stime++;

64Finally, as long as the process ps does not take priority over the current clock process (ll. 3798–9), the system time (measured in seconds) is updated. If the tick counter lbolt is greater than or equal to the value HZ (i.e. the number of ticks per second) once it has been incremented (++lbolt), at least a second has passed. We can therefore subtract the period HZ from this counter (written lbolt =- HZ, the archaic form of today’s -= operator) and add 1 to the count of seconds elapsed since midnight, January 1, 1970:

3797: if(++lbolt >= HZ) {
3798:     if((ps&0340) != 0)
3799:         return;
3800:     lbolt =- HZ;
3801:     if(++time[1] == 0)
3802:         ++time[0];

65So the passive “clock,” which tells the time, is updated using the active clock, rather than the other way round. There exists a great watchmaker, one of whose primary tasks is to move the hands of the clock each second.

66Once these fundamental temporal tasks have been completed, the remainder is far less urgent. It is only executed at priority 1:

3803: spl1();

67The time that has passed since the penultimate tick may be greater than 16.7 or 20 milliseconds (depending on whether the frequency is 60 or 50 Hz) and, consequently, the low-priority parts of several successive instances of the clock driver may find themselves jostling each other. But this is unimportant for the system, which has received the absolute guarantee that the high-priority tasks will be executed at the right time. The others can be carried out at their own pace, in the order they arrive, whenever the system is ready for them. The worst that can happen are little inconveniences: a drop in the responsiveness of the console, out-of-date timing information about inactive processes. The same is true when we come back from holiday, or emerge from a particularly heavy work period.

68The processes that have been put to sleep for a period are reawakened at a snail’s pace (ll. 3804–5), followed by those that only need to be executed when the last two bits of the second counter are 0, i.e. every four seconds (ll. 3806–9):

3804: if(time[1]==tout[1] && time[0]==tout[0])
3805:     wakeup(tout);
3806: if((time[1]&03) == 0) {
3807:     runrun++;
3808:     wakeup(&lbolt);
3809: }

69Temporal information for all processes (p_time, p_cpu, p_pri) is updated in a similarly casual fashion, one by one:

3810: for(pp = &proc[0]; pp < &proc[NPROC]; pp++)
3811:     if (pp->p_stat) {
3812:         if(pp->p_time != 127)
3813:             pp->p_time++;
3814:         if((pp->p_cpu & 0377) > SCHMAG)
3815:             pp->p_cpu =- SCHMAG; else
3816:             pp->p_cpu = 0;
3817:         if(pp->p_pri > PUSER)
3818:             setpri(pp);
3819:     }

70So there are two kinds of task the clock driver performs periodically, clearly distinguished by their level of priority. Some urgent ones are executed almost instantaneously. These have very high priority and occur at the beginning of a function, which makes it unlikely that another process will interrupt them. Such processes include resetting the clock, updating the console, executing high-priority functions (typically to do with the terminal), updating timing information about the process being executed (or, more precisely, the one that was running before it was interrupted by the clock driver), and updating the system time. The second sort of task can be carried out with less strict regularity. They can be put off at no great risk if the system is overworked.

71This sort of temporality, dictated by the ticks of a clock, might seem like a simple copy of a synchronous von Neumann machine. Aren’t both a sort of activity triggered periodically by clock interrupts? The crucial thing to notice—beyond the significant change in physical scale, from nanoseconds to hundredths of a second—is the change in the nature of the actions regulated by this new temporality.

72It is no longer a question of the machine’s elementary operations (in the processor, the memory, its inputs and outputs), but of process management—richer actions, in other words. Processes extend farther in time; the machine’s basic operations occupy a finite time, [18] whereas a process can take arbitrarily long. And processes are also more extended spatially: they have new properties, recorded in the descriptors’ data structures, and are associated with a data stack in memory—i.e. with a region of space. The clock driver’s time regulates another activity, one belonging to the level of the operating system, one thicker than that of the underlying mechanical ontology.

73How should we characterize this time? Unlike that of the PDP-11, it can be carefully administered. We can control it, just as the list of available memory allows us to control space. We can consequently put actions off until later, alternate processes to make them appear simultaneous on the macro level (a procedure known as pseudo-parallelism), and execute actions depending on their priority. Such time is still the measure of activity. But it is the measure of the high-level activity of processes—and, examined closely, it contains many low-level actions and objects: operands in processor registers, data in memory, and so on. We have before us the technical implementation of the idea that there may exist multiple interlocked temporalities, and that these different levels of temporality correspond to different degrees of activity.

Conclusion

74Computer science offers pure philosophy the chance to extend a line of thought opened up by mathematics, that other great formal science. Speaking in 1939 to the Société française de philosophie, Jean Cavaillès said:

75

I do not seek to define mathematics, but, by way of mathematics, to know what it means to know, to think; this is basically, very modestly reprised, the question that Kant posed. Mathematical knowledge is central for understanding what knowledge is.
(Cavaillès 1939)

76We believe that computer science is the (necessarily formal) science of inscribing rational processes within matter in general. Computational activity is just as important for understanding activity as mathematical knowledge is for understanding knowledge.

77Following Hourya Benis Sinaceur, we are convinced that “the analysis of scientific texts is a less widespread exercise than that of literary texts, but a no less instructive one” (Sinaceur 1991, 21). We have tried to draw two conclusions from the texts analyzed here. The first is that there may exist multiple levels of activity, and that higher-level actions—here, processes—can bring together lower-level actions and objects, even if they conceal these. The second is that these different levels of activity correspond to as many different scales of temporality. This is a sign that, in computer science as in metaphysics, time is just as essential for activity in general as space is for objectivity.

Notes

  • [1]
    This is true of Unix “special files,” which are typically devices (Ritchie and Thompson 1974, 367), or else pipes representing a number of joined processes.
  • [2]
    By comparison, the speed of commercial computers is now measured in gigahertz (billions of cycles per second).
  • [3]
    There were also attempts to create asynchronous machines—inspired, for instance, by the work of John Backus (Backus 1978). Very early on, machines also combined synchronous and asynchronous elements. The PDP-11/40, for instance, included a measure of asynchronism (PDP11/40 Processor Handbook, 1-1).
  • [4]
    More precisely, roughly one basic operation is carried out per clock cycle. Several operations may be carried out in a single cycle if the machine allows processor-level parallelism; conversely, only part of a basic operation may be carried out if the processor offers complex instructions. This is typically the case for CISC processors, in contrast to RISC processors, whose operations generally fit within a single clock cycle. Things have since become even more complex, with CISC and RISC features sometimes combined.
  • [5]
    This is true for Unix and even more so for its descendants, although the number of system calls has grown considerably. (In June 2017, Linux 4.11.8 provided more than 300.) In 2000, Linux 2.2.14 had the calls time, ftime, and gettimeofday for determining the date and time, setitimer and alarm for sending periodic signals, and adjtimex for synchronizing clocks (Bovet and Cesati 2007, ch. 5).
  • [6]
    This awkward term allows us to distinguish, psychologically, between such arbitrary constants and those others that are respectively called “fundamental constants” (ll. 100–7) and “signals” (ll.110–26), and that are preceded in the source code by stern warnings: “fundamental constants: cannot be changed” and “signals: don’t change.”
  • [7]
    This value has changed little over time. The creators of Unix chose a frequency of 60 interrupts per second in 1975. Today, Linux sets a frequency of 100 interrupts per second for most machines. The order of magnitude has barely shifted in forty years. See the param.h files in the architecture-specific subdirectories of include/ in the Linux source code (Linux 2000; Linux 2017).
  • [8]
    The comment on line 1556 might mislead us into thinking that only the core is being initialized and freed. But the function also affects the swap.
  • [9]
    The constant USIZE is set at 16 in systm.h.
  • [10]
    For instance, an emulator is available at http://pdp11.aiju.de/ (accessed October 22, 2017). To launch Version 6 Unix, the user clicks the “run” button, types “unix,” and presses return.
  • [11]
    Thankfully, this distinctive hardware dependence disappeared from later Unix-like systems such as MINIX and Linux, whose code aimed at greater generality. From Unix Version 7 onwards, the code itself was made more general, a necessary condition for running the system on the Interdata 8/32 (Tanenbaum 2009, 713).
  • [12]
    A line frequency clock derives its frequency from the 110- or 220-volt alternating current that powers the machine. It causes an interrupt at each cycle, at either 50 or 60 Hz (Tanenbaum and Woodhull 2006, 204; DEC 1972a, E-5; DEC 1972b; DEC 1973, 2-1a).
  • [13]
    As John Lions notes, “addresses in the range 0160000 to 0177777 are mapped into the range 0760000 to 0777777” (Lions 1996, 2-5c).
  • [14]
    In the C programming language, in 1975 as today, octal numbers begin with a 0 (Lions 1996, 3-5a; Kernighan and Ritchie 2016, 37). The number 0115 is thus the same as decimal 77. We have been unable to find any explanation for this choice of value in the KW11-L or KW11-P manuals (DEC 1973; DEC 1972b).
  • [15]
    As Lions observes, lines 1628–35 are, oddly, never executed because the function newproc() defined in slp.c (ll. 1826–1919) always returns 0 (Lions 1996, 6.4a).
  • [16]
    In Linux, the relevant files are primarily kernel/time.c and kernel/sched.c.
  • [17]
    This structure is only used by the function ttrstrt (Lions 1996, 11-1b and 25-3bc).
  • [18]
    See the PDP-11/40 manual: “In the most general case, the Instruction Execution Time is the sum of a Source Address Time, a Destination Address Time, and an Execute, Fetch Time” (DEC 1972a, C-1). The whole of appendix C of this manual contains very precise estimates of the execution time for each of the PDP-11/40’s instructions.
Abstract

This article examines the concept of time in the Unix operating system, showing on the one hand how it differs from that of the machine and, on the other, how it is connected to Unix’s distinctive concept of action, the process. We begin with a discussion of the temporality used by the synchronous machines defined by John von Neumann in the “First Draft of a Report on the EDVAC” (1945), before analyzing in detail the source code of the main function and of the clock driver of Version 6 of Unix (1975). It shows that, in computer science, different levels of activity correspond to as many different scales of temporality.

Reference list

  • ARISTOTLE. 1936. Aristotle’s Physics. Edited by William David Ross. Oxford: Clarendon Press.
  • BACKUS, John. 1978. “Can Programming be Liberated from the Von Neumann Style? A Functional Style and its Algebra of Programs.” Communications of the ACM 21, no. 8: 613–641.
  • BOVET, Daniel, and Marco CESATI. 2007. Understanding the Linux Kernel. Sebastopol: O’Reilly Media.
  • CAVAILLÈS, Jean. 1939. “Mathematical Thought.” Urbanomic, accessed October 22, 2018, https://www.urbanomic.com/document/mathematical-thought/.
  • DEC. 1972a. PDP11/40 Processor Handbook. Maynard, MA: Digital Equipment Corporation.
  • DEC. 1972b. KW11-P Programmable Real-Time Clock Manual. Maynard, MA: Digital Equipment Corporation.
  • DEC. 1973. KW11-L Line Time Clock Manual. Maynard, MA: Digital Equipment Corporation.
  • KERNIGHAN, Brian, and Dennis RITCHIE. 2016. The C Programming Language. Englewood Cliffs, NJ: Prentice Hall.
  • KRAKOWIAK, Sacha. 2014. “Les débuts d’une approche scientifique des systèmes d’exploitation.” Interstices, February 3. Available at: https://interstices.info/jcms/int_70839/les-debuts-dune-approche-scientifique-des-systemes-dexploitation.
  • LAZARD, Emmanuel, and Pierre MOUNIER-KUHN. 2016. Histoire illustrée de l’informatique. Paris: EDP Sciences.
  • LINUX. 2000. Kernel 2.2.14. Available at: http://www.kernel.org.
  • LINUX. 2017. Kernel 4.11.8. Available at: http://www.kernel.org.
  • LIONS, John, Dennis M. RITCHIE, Ken THOMPSON, Peter H. SALUS, Greg ROSE, et al. 1996. Lions’ Commentary on UNIX 6th Edition with Source Code. Charlottesville: Peer-to-Peer Communications.
  • MÉLÈS, Baptiste. 2013. “Unix selon l’ordre des raisons: La philosophie de la pratique informatique.” Philosophia Scientiæ 17, no. 3: 181–198.
  • MINIX. 2006 [1997]. In Operating Systems. Design and Implementation, edited by Andrew S. Tanenbaum and Albert S. Woodhull. Upper Saddle River: Prentice Hall, Appendix A, 629–636.
  • VON NEUMANN, John. 1945. “First Draft of a Report on the EDVAC.” Contract No. W–670–ORD–4926 between the United States Army Ordnance Department and the University of Pennsylvania. Moore School of Electrical Engineering: University of Pennsylvania, June 30, 1945, section 2.
  • RITCHIE, Dennis M., and Ken THOMPSON. 1974. “The UNIX Time-Sharing System.” Communications of the ACM 17, no. 7: 365–375.
  • RITCHIE, Dennis M. 1978. “The Unix Time-Sharing System: A Retrospective.” Bell Labs Technical Journal 57, no. 6: 1947–1969.
  • RITCHIE, Dennis M. 1984. “The Evolution of the Unix Time-sharing System.” AT&T Bell Laboratories Technical Journal 63, no. 6, part 2: 1577–1593.
  • ROCHKIND, Marc J. 2004. Advanced UNIX Programming. Boston, MA: Pearson Education.
  • SALUS, Peter. 1994. A Quarter Century of UNIX. Reading, MA: Addison Wesley.
  • SINACEUR, Hourya. 1991. Corps et modèles. Essai sur l’histoire de l’algèbre réelle. Paris: Vrin.
  • TANENBAUM, Andrew. 2009. Modern Operating Systems, 3rd edition. Upper Saddle River: Prentice Hall.
  • TANENBAUM, Andrew, and Albert S. WOODHULL. 2006 [1997]. Operating Systems: Design and Implementation. Upper Saddle River: Prentice Hall.
  • UNIX. 1975. In Lions’ Commentary on UNIX 6th Edition with Source Code, 1996, edited by Lions et al. Charlottesville: Peer-to-Peer Communications, 1–90.
  • WALDEN, David, and Tom VAN VLECK, eds. 2011. The Compatible Time-Sharing System. 1961-1973. Fiftieth Anniversary, Commemorative Overview. Washington: IEEE Computer Society.
Uploaded on Cairn-int.info on 25/02/2019