Software and Mind by Andrei Sorin — related articles
The mechanistic myth and the software frauds
Using the mechanistic myth as warrant, the universities foster invalid software theories;
and the software companies create useless systems based on these theories. Along
with industry experts and professional associations, these institutions are promoting
fraudulent software concepts in order to prevent independence and expertise in software-related affairs.
The mechanistic myth
The mechanistic myth is the belief that every phenomenon can be described as a hierarchical structure of elements; that is, as elements within elements, on lower and lower levels. This is the same as saying that every phenomenon can be explained. All we have to do is discover a hierarchical structure that reduces it to simpler and simpler phenomena, one level at a time, until we reach some trivial ones.
For example, a problem in academic research can be solved by breaking it down into simpler problems, then breaking those down into even simpler ones, and so on, until we reach problems simple enough to solve directly. A complicated machine can be built by designing it as levels of subassemblies, as parts within parts, down to some simple parts that can be made directly. And a software application can be developed by breaking it down into separate modules, each module into separate constructs, and each construct into separate statements, which can then be programmed directly. The term “mechanism” derives from the fact that in the seventeenth century, when this ideology was born, mechanics was the only science that offered exact explanations; so it was believed that every phenomenon could be explained by reducing it, ultimately, to simple mechanical phenomena.
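The software case can be made concrete with a minimal Python sketch (the text itself contains no code; the payroll task and all names here are invented for illustration): a top-level element is built from middle-level elements, which are built from statements simple enough to program directly.

```python
# Hypothetical sketch (not from the text): a small payroll task broken
# down hierarchically -- a top-level element built from middle-level
# elements, which are built from directly programmable statements.

def gross_pay(hours: float, rate: float) -> float:
    # lowest level: trivial statements
    return hours * rate

def tax(amount: float) -> float:
    # lowest level: a flat 20% rate, assumed only for the sketch
    return amount * 0.20

def net_pay(hours: float, rate: float) -> float:
    # middle level: an element combining two lower-level elements
    g = gross_pay(hours, rate)
    return g - tax(g)

def payroll(employees: list[tuple[float, float]]) -> float:
    # top level: the whole task, combining middle-level elements
    return sum(net_pay(h, r) for h, r in employees)

print(payroll([(40, 10.0), (20, 15.0)]))  # 560.0
```

Each level can be written and understood on its own, which is precisely the appeal of the hierarchical concept described above.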
This ideology promises two benefits. The first one is its capacity to replace any
challenge with a series of relatively simple steps: instead of trying to solve a
difficult problem, what we do now is reduce it to simpler and simpler ones. Whether
the problem entails a transition from high to low levels or from low to high (that
is, from the whole to its parts or from parts to the whole), it is easier to solve
it by dealing with one level at a time. Also, we can sever the links between levels
and separate a structure into smaller ones that remain, individually, correct hierarchies.
Since each element at a given level depends only on the lower-level elements that make it up, each of these smaller structures remains a valid hierarchy and can be studied on its own.
The second benefit is the potential to explain, precisely and completely, any phenomenon.
Each element in the structure is a function of the lower-level elements; so, starting from the lowest level, we can determine precisely every element, and ultimately the phenomenon represented by the complete structure.
* * *
The mechanistic ideology is a myth, because most phenomena cannot be represented
with an isolated hierarchical structure. This is easy to understand if we recall
how these structures are formed. A structure’s elements possess certain attributes,
and the hierarchical relations between them are based on these attributes. Elements
that share one particular attribute (even if the attribute has a different value
for each one) are related and form an element at the next higher level. Several such
elements are formed from other groups of elements, related through other attributes.
Then the new elements are similarly related, through other attributes, to form the next higher-level elements; and so on, until one element, the top of the structure, represents the whole phenomenon.
The mechanistic delusion should now be obvious: mechanism is valid only for phenomena that fulfil the conditions just described; it cannot be simply assumed to work for any phenomenon. Specifically, if a phenomenon is made up of elements whose attributes give rise to the relations needed to create a single hierarchical structure, mechanism works; but if the elements possess attributes that relate them also in ways that are not part of one structure, mechanism fails.
The elements that make up real-world phenomena possess, in general, many attributes, and are therefore related through many structures at the same time; so most of these phenomena cannot be represented with one isolated hierarchy.
A phenomenon that cannot be represented mechanistically needs several hierarchical
structures if we want to include all important attributes. And these structures cannot
be studied separately, because it is their totality and their interactions that depict
the phenomenon. This is the only way to relate the same elements based on several
attributes at the same time, and thus attain a useful approximation of the phenomenon.
But then we lose the benefits of mechanism, which can only be attained with a single,
isolated structure. In a system of interacting structures, each element no longer
depends just on the lower-level elements of one structure; it is affected also by elements in the other structures, so the precision and simplicity that mechanism promises are lost.
The isolated structures and the phenomena they depict are called simple, or mechanistic;
the systems of interacting structures and the phenomena they depict are called complex, or non-mechanistic.
* * *
An example of a mechanistic phenomenon is the manufacturing process. The parts that
make up an appliance possess many attributes: dimensions, weight, cost, colour, supplier,
delivery date, life expectancy, etc. But we purposely design the appliance in such
a way that only those attributes that determine the position and function of each
part are important in the assembly process, while attributes like cost and supplier
can be ignored. The assembly operations can then be correctly represented with a
hierarchical structure. The delivery operations, or the accounting operations, or
the maintenance operations can also be represented with hierarchical structures;
but these are different structures, based on other attributes (supplier, delivery
date, cost, etc.). The parts with all their attributes are a non-mechanistic phenomenon; it is only by deliberately restricting each activity to a few attributes that we turn assembly, delivery, accounting, and maintenance into separate mechanistic phenomena.
An example of a non-mechanistic phenomenon is language. The words of a sentence are related through several structures at the same time: through grammar, through their meanings, and through the context in which the sentence is used. It is the totality of these structures and their interactions that conveys the meaning, and no one structure can be isolated from the others.
In conclusion, mechanism works in domains like the exact sciences, engineering, manufacturing, and construction because their phenomena can be accurately represented with isolated structures; and it fails in domains like psychology, sociology, linguistics, economics, and software because their phenomena consist of structures whose interactions cannot be ignored.
The traditional mechanistic frauds
To understand the software frauds we must start by examining the traditional ones, for their fallacies are the same. The academics take the mechanistic myth as unquestionable truth. The belief that every phenomenon can be studied by reducing it to isolated structures is seen as the only valid method of science, and is the foundation of all academic research. Doubting this belief is tantamount to doubting science. But if the phenomena involving minds and societies can only be represented with systems of interacting structures, the mechanistic study of these phenomena is a fraud. In three hundred years of mechanistic philosophy, not one mechanistic model was successful in the human sciences.
The academics like mechanism because this myth affords them a privileged position in society regardless of whether their activities are useful or not. As long as we accept mechanism unquestioningly, all they have to do to gain our respect is practise mechanism. It is irrelevant whether their theories work or not, or whether mechanism is valid at all in their field.
* * *
A good example of this corruption is the linguistic theory known as Universal Grammar. Introduced in the 1950s, and starting with the premise that the grammatical structure is the only important one, this theory attempts to represent mathematically all sentences that are grammatically correct (and to recognize mathematically those that are not) in a natural language like English. This absurd idea, derived from nothing more substantial than the observation of a few linguistic patterns, was enough to legitimize a vast research program, involving thousands of academics. Thus, for more than half a century, the mechanistic dogma has been the only justification for the pursuit of a fantasy in the world’s most prestigious universities.
Universal Grammar started as a small set of simple principles. But when this naive
attempt failed to explain more than a few English sentences, an endless series of
new versions and sub-theories was created in an attempt to cover up the failures. Yet, after more than half a century of this work, the theory still cannot do what it originally claimed.
This theory also exemplifies how the mechanists delude themselves and the public about the value of their work. They begin by announcing a theory that claims to explain with mathematical precision a certain complex phenomenon. The theory is just a speculation at this point, although it may work in a few simple situations. The mechanists merely noticed a pattern in the phenomenon, and perhaps even discovered a mathematical representation of it. But this is a trivial achievement: all they did was extract one of the structures that make up the phenomenon; and isolated structures, of course, can be represented mathematically. They did not prove that the other structures are unimportant and can be ignored.
Then, since the theory is generally useless, the mechanists start an endless process of “improvements”: they modify the theory to cover up its failures, again and again, while describing this activity as research. In reality, what they do is acknowledge the importance of the other structures, which they originally ignored. Also, they introduce artificial and increasingly complicated means to restore the interactions between these structures and the one they isolated. But this work is futile, because the interactions in a complex phenomenon cannot be described with precision. The theory appears to improve, but it never attains the promised benefits – a useful mechanistic representation of the phenomenon. So it is eventually abandoned, usually when a new theory becomes fashionable in that domain; the whole process is then repeated with the new theory. These theories, thus, are fraudulent from the beginning, because we could always tell that the phenomenon consists of interacting structures and a mechanistic theory cannot work.
* * *
An important aspect of the mechanistic myth is the process of peer review – the academic
system of controls believed by everyone to ensure rigour in research work. But peer
review only verifies that the work adheres to the mechanistic principles; it does
not verify whether these principles are valid in the field concerned. So peer review
is in reality part of the fraud: since it is grounded on the same premise as the
research itself – the belief that mechanism is valid in all fields – it is meaningless
as control. All it can do is confirm that the research is correct within the mechanistic
ideology. It is a self-certifying system: the mechanists validating one another's adherence to mechanism.
Another fact worth mentioning is that these theories are easily shown to be pseudoscientific
when analyzed with Karl Popper's well-known criterion of demarcation, falsifiability: instead of being abandoned when refuted, they are rescued again and again through modifications, and this practice is precisely what Popper identified as the mark of a pseudoscience.
The software mechanistic frauds
In the past, it was only in universities that individuals could pursue mechanistic
fantasies that looked like serious activities. Through software, however, the pursuit
of mechanistic fantasies has become possible everywhere. Here we are discussing the
world of programming, but similar software-related frauds now pervade many other fields as well.
This started around 1970, when the academics decided that the phenomena associated
with programming must be reduced to a mechanistic representation. Rather than depending
on such uncertain qualities as the knowledge and skills of programmers, said the
academics, the mechanistic ideology will permit even inexperienced persons to write
software. Then, lacking real-world programming experience, the academics based their theories not on the actual nature of software but on the mechanistic ideology itself.
Unlike the mechanistic theories in the human sciences, however, which had little bearing on our activities outside academia, the mechanistic programming theories were embraced with enthusiasm by individuals, businesses, and governments. Unaware of the long history of mechanistic delusions in universities, millions of practitioners working in the real world believed the claims made by the academics and actually tried to develop software applications using these theories. This naivety was encouraged by respected computer associations and institutes, and by renowned experts and gurus, who praised and taught the theories and the related methodologies. Then the software companies started to create various development systems that incorporated these concepts, and soon any programming done using just skills and experience, rather than depending on the latest systems, was condemned as unprofessional.
Thus, software mechanistic concepts that are in fact as worthless as the traditional mechanistic ones are now dominating the world of programming, preventing expertise and making software development far more complicated and expensive than it ought to be. As a result, instead of a true programming profession, a huge software bureaucracy has evolved. Just like the academic bureaucrats, the software bureaucrats are trusted and respected by everyone simply because they practise mechanism. It is irrelevant how inefficient their work is, and whether the resulting applications are adequate or not.
* * *
Like the traditional mechanistic theories, the software theories and systems keep failing and are continually modified in an attempt to make them useful. But this only makes them more complicated. In the end, the only way to make them useful is by reinstating the traditional concepts. Every mechanistic principle must be annulled, so the theories and systems lose all the benefits claimed for them; but they continue to be promoted with the same claims. To appreciate this fraudulent evolution, let us review first the nature of software applications, and why it is impossible to represent them mechanistically.
The elements that make up an application (statements, blocks of statements, modules) possess certain attributes. Any process that can affect more than one element gives rise to an attribute, because it relates the elements logically: memory variables, database fields, file operations, subroutine calls, business practices, and so on. And, as we saw earlier, the relations between elements generate hierarchical structures. If we pick just one of these attributes, or just a few, we may be able to depict the relations with one structure. But if we take all attributes into account (which we must, because they are all important), we need many structures to depict the relations. These structures exist at the same time and interact, because they share their elements; they cannot be isolated or created separately.
Software applications, then, are complex structures. The reason is that they must reflect accurately our personal, social, and business affairs, which themselves consist of interacting structures. It is absurd to search for ways to represent applications with isolated structures, as the software mechanists do, seeing that isolated structures cannot possibly provide accurate approximations of our affairs. Thus, an application developed using strictly mechanistic principles is necessarily useless. Language too consists of interacting structures, as we saw, and for the same reason: it must reflect accurately our affairs. Both natural languages and programming languages have the qualities needed to generate interacting structures; but both require also a human mind, because only minds can process these structures. When separating the structures, the mechanists forsake those qualities; so it is not surprising that their theories fail.
* * *
Everyone agrees that it is possible to create applications using just our minds and
the traditional programming languages and methods. We start with a combination of
such elements as the statements of a typical language, lower-level library functions, and existing subroutines, and we combine them into larger and larger software elements, on higher and higher levels, up to the complete application. With language, similarly, we start with words and combine them into phrases, sentences, and entire arguments and stories.
In both cases, we follow a concept we all understand intuitively: combining simple things into more and more complex things in the form of a hierarchical structure. But unlike such structures as the parts and subassemblies of an appliance, in the case of software and language we must create several hierarchical structures from the same elements at the same time. For example, even a small software element may include several memory variables and database fields, a file operation, an accounting method, and some subroutine calls; and through each one of these attributes it belongs to a structure that relates it logically to other elements that possess that attribute. Thus, while creating that element we must also be aware of those structures and the other elements. We do this by using our minds and the skills we acquired through practice.
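The idea of one element belonging to several structures at once can be sketched in Python (a hypothetical illustration; the inventory example and all names are invented, not taken from the text):

```python
# Hypothetical sketch: one small software element that belongs, through
# its attributes, to several structures at the same time.

inventory = {"widget": 7}        # shared data: a "database field"
audit_log = []                   # shared data: a "file" of log records

def reorder(item: str) -> None:  # a subroutine used by other elements too
    audit_log.append(f"reorder {item}")

def sell(item: str, qty: int) -> bool:
    # This one element takes part in several structures at once:
    #  - it reads and writes `inventory` (a shared-data structure),
    #  - it appends to `audit_log` (a file-operation structure),
    #  - it calls `reorder` (a subroutine-call structure),
    #  - its if/else places it in the flow-of-execution structure.
    if inventory.get(item, 0) >= qty:
        inventory[item] -= qty
        audit_log.append(f"sold {qty} {item}")
        return True
    reorder(item)
    return False

print(sell("widget", 3), inventory["widget"])  # True 4
```

Every other element that touches `inventory`, `audit_log`, or `reorder` is related to `sell` through that attribute; so, while writing `sell`, we must keep all those structures in mind at the same time.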
While not disputing the fact that we can create applications using nothing but the traditional concepts, the software mechanists claim that it is possible to simplify this task, speed it up, and, generally, turn it into an exact and predictable activity. Like all mechanists, they invoke the benefits of the hierarchical structure, but without proving first that the phenomena associated with software applications can be reduced to isolated structures. Depending on the theory, they claim one of two things: either that the whole application can be treated as one hierarchical structure, or that its constituent structures can be extracted and studied separately.
The next step is to impress naive and inexperienced practitioners by demonstrating
the benefits of the hierarchical structure with trivial examples, and by hailing
this concept as a revolution, a new paradigm, etc. In other words, they rediscover
the hierarchical structure and its benefits with each new theory. Then, even though
no one can create real-world applications in this fashion, the theory is pronounced a success, and methodologies and development systems based on it are promoted everywhere.
These theories are a fraud from the beginning, because the claimed mechanistic benefits
are relevant only for isolated structures, not for the system of interacting structures
that is the application. Even when the theory extracts one of the structures and
appears to work, the benefits are lost when we combine that structure with the others
to create the actual application. One benefit, we saw, is the ability to use starting
elements that already include other elements. But this quality, even when valid for
an isolated structure, is actually a handicap for the final application, because
fewer features can be implemented. Since even a small element is shared by several
structures through its attributes, if it is replaced with a higher-level, ready-made element, all those structures are affected at once; the combinations of attributes that the application could otherwise have had at the level of the original element are no longer possible.
The claimed mathematical precision is also irrelevant. If the theory applies to the complex structure that is the whole application, the claim is clearly invalid, because only isolated structures can be represented mathematically. But even if it applies to isolated structures and the mathematical benefit is real, we cannot develop these structures separately and then combine them, because they share their elements. Ultimately, even if using a mechanistic theory, we must create the application’s elements by considering several structures at the same time, the way we always did, and mathematics cannot help us.
* * *
As the practitioners struggle with each new mechanistic theory and with the methodologies
and development systems derived from it, they must reconcile the crippling deficiencies
they note with the intimidating propaganda conducted by the software charlatans.
Their difficulties, the practitioners are told for each theory, stem from clinging to the old concepts; the difficulties will disappear, they are assured, once the new concepts are fully embraced.
In reality, the difficulties are due to the need to implement complex, real-world requirements with mechanistic expedients. Eventually the failures can no longer be denied, and the elites respond by modifying the theory and its systems, announcing each modification as a powerful new feature.
When analyzed, though, the changes are not new features at all, but a return to the traditional concepts. They are given fancy names and are described with pretentious terms, but these are in fact ordinary features that were always available, through traditional programming languages. So the changes are in reality a reversal of the mechanistic concepts – the concepts on the strength of which the theory was originally promoted. The theory, and the systems derived from it, are now silly and pointless. But the academics continue to extol their benefits, even as they are cancelling these benefits by annulling the mechanistic concepts. And the practitioners continue to depend on them.
* * *
What the software elites have achieved through this stratagem is to dominate the world of programming and software use. This domination is unjustified, because what they can give us – software tools based on mechanistic principles – has no value. The only thing we need in order to create and use software is personal skills and experience, and a few relatively simple tools. But the elites manage to convince us that we can be more productive if we agree to depend instead on some complicated theories, methodologies, and development systems.
Then they modify these expedients to make them useful, by replacing their mechanistic
features with non-mechanistic ones; in other words, by reinstating the traditional concepts. But the reinstated concepts are now embedded in large and complicated development systems, so we end up depending on the elites for capabilities we formerly had directly, through simple tools.
Also, to enhance this dependence, the elites refuse to keep the traditional languages up to date: features made possible by general advances in hardware or software, and which have nothing to do with the mechanistic theories, are implemented only in some new systems. This stratagem makes the latest systems appear superior, when in fact the same features could be used also with the traditional languages.
As programmers or as software users, we must practise our profession if we want to
improve our skills. When we agree to depend on expedients instead, all we learn is
how to use the expedients. Our skills remain undeveloped, so we believe that the
only way to advance is by depending on newer expedients, in a process that feeds
on itself. This ensures a continued domination by the elites in our software-related affairs.
This section discusses some of the best-known mechanistic software theories, and shows how each one degenerated from exact claims into a fraud.

The fourth-generation languages

The idea of fourth-generation languages (4GLs) was promoted with the claim that, just as the third-generation languages (the ordinary programming languages) raised the level of software development above the second generation (the assembly languages), the new systems would raise it again: we would create business applications from higher-level starting elements, with little traditional programming.
This promise, however, was a fraud. Its purpose was to get businesses to abandon the ordinary languages, and to depend instead on proprietary development systems controlled by the software elites. In reality, these systems provide no benefits. The 4GL idea is a fraud because the fourth generation is not, relative to the third, what the third is relative to the second. Practically all features (the use of memory variables, conditional and iterative constructs, subroutine calls, etc.) became simpler and entirely different in the transition from second to third generation, but remained unchanged in the transition from third to fourth. The elites started by promising a simpler, higher level, and ended by reverting to the previous level and to the same programming challenges we faced before.
Higher starting levels are practical, perhaps, for specialized applications, in narrow
domains; but for typical business applications we cannot start from levels higher
than those found in 3GLs. There are indeed a few useful higher-level starting elements, but they cover only small, specialized portions of an application; the bulk of it must still be programmed with the third-generation concepts.

Structured programming
The theory known as structured programming claimed that the structure which represents
the application’s flow of execution is the only important one. It doesn’t even mention
other structures. The flow of execution is determined by the flow-control constructs (the conditional and iterative statements, and the subroutine calls), and the theory claimed that, with the right constructs, it forms a neat hierarchical structure; the application could then be represented mathematically and proved correct, the way mathematical propositions are proved.
To attain that hierarchical structure, we must restrict ourselves to three elementary flow-control constructs, each with a single entry and a single exit: the sequence, the condition, and the iteration. Any other construct must be transformed into some combination of these three.
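The three sanctioned constructs can be illustrated with a short Python sketch (a hypothetical example; the function and data are invented for illustration):

```python
# Hypothetical sketch: logic expressed with only the three sanctioned
# constructs, each having a single entry and a single exit.

def classify(scores: list[int]) -> tuple[int, int]:
    passed = 0                    # sequence: statements one after another
    failed = 0
    i = 0
    while i < len(scores):        # iteration: a single-entry WHILE loop
        if scores[i] >= 60:       # condition: IF-THEN-ELSE
            passed += 1
        else:
            failed += 1
        i += 1
    return passed, failed         # a single exit from the element

print(classify([70, 55, 90, 40]))  # (2, 2)
```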
* * *
Structured programming is a very naive theory, and shows how little the academics understand about the nature of software. Not only did they believe that the flow of execution can be extracted from the system of structures that make up an application (and that the other structures can be ignored), but they failed to see that the flow of execution is itself made up of many structures. Thus, the idea that a neat flow diagram can mirror the complex flow of execution is absurd.
Here is how this complexity arises. Each conditional and iterative construct includes
a condition, which is evaluated while the application is running. This yields a different
value, and hence a different path of execution, at different times. There are thousands
of such constructs in a serious application, and an infinity of combinations of condition
values. Each combination gives rise to a different flow-control structure; the actual flow of execution, therefore, is not one structure but a system of countless interacting structures.
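The scale of this combinatorial growth is easy to demonstrate (illustrative arithmetic only; with iterative constructs the count is larger still):

```python
# With n independent two-way conditions, the number of distinct paths
# through the flow of execution is 2**n; iterative constructs make the
# count larger still. (Illustrative arithmetic only.)

def path_count(conditions: int) -> int:
    return 2 ** conditions

for n in (10, 20, 40):
    print(n, path_count(n))
# a few dozen conditions already yield more paths than anyone could
# enumerate in a diagram; serious applications have thousands
```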
And this is not all. The conditions involve such processes as memory variables, database
fields, file operations, and business practices; and each process gives rise to an
attribute, which relates some of the application’s elements through a structure.
The conditions, therefore, link the flow-control structures to the structures formed by the other attributes; the flow of execution cannot be isolated from the rest of the application even in principle.
The transformations, it turned out, were so complicated that many software elements
could not be reduced effectively to standard constructs. Also, no one managed to
represent mathematically, as promised, even those elements that were reduced (because
they involved other structures apart from the flow of execution, obviously). In the
end, the charlatans who were promoting structured programming had no choice but to
permit various non-structured constructs, thereby annulling the very restrictions on which the theory, and its promise of mathematical programming, had been founded.
Structured programming, thus, was a fraud from the beginning. Since the flow of execution – to say nothing of the whole application, with its infinity of other structures – cannot be reduced to a simple hierarchical structure, the promised benefits were a fantasy. Nevertheless, practitioners to this day are wasting their time and complicating their applications by obeying the theory, in the hope of attaining those benefits. (The full discussion of this fraud can be found in the section “Structured Programming” in chapter 7 of Software and Mind.)
Object-oriented programming

The theory known as object-oriented programming claimed that applications no longer need to be developed from scratch: we would build them largely from ready-made software parts, the way appliances are built from prefabricated subassemblies, and reuse the same parts in application after application.
The secret behind this new technology, said the theorists, is the hierarchical concept:
if we implement our applications as strict hierarchical structures of software elements
(now called objects), we will be able to combine, easily and effectively, existing
parts and new ones. One day, developing a new application will entail little more
than putting together ready-made parts.
This high level of reuse can be achieved, the theorists assured us, thanks to the hierarchical property known as inheritance. Each element, in addition to possessing its own, unique attributes, inherits the attributes of the element at the next higher level. And, since that element inherits those from even higher levels, an element can benefit from the attributes of all the elements above it in the hierarchy. (This is a result of the way hierarchies are created: an element at a given level has only those attributes shared by the elements at the lower level. This property, called abstraction, is simply the property of inheritance viewed in reverse.)
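The property just described can be sketched with a small Python class hierarchy (a hypothetical illustration; the classes and attribute values are invented):

```python
# Hypothetical sketch of inheritance down a hierarchy: each level adds
# its own attributes and inherits those of all the levels above it.

class Part:                      # top level: the most abstract element
    def identify(self) -> str:
        return "part"

class ElectricalPart(Part):      # next level: adds its own attribute
    def voltage(self) -> int:
        return 12

class Motor(ElectricalPart):     # low level: its own attribute, plus
    def torque(self) -> int:     # everything inherited from above
        return 5

m = Motor()
print(m.identify(), m.voltage(), m.torque())  # part 12 5
```

Read upward, the same hierarchy shows abstraction: `Part` has only the attribute shared by everything below it.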
Thus, as we move from high to low levels, an element can have more and more attributes,
even though only a few are its own. Since the attributes of an element reflect the
processes it is capable of, we can attain any combination of processes by using both
existing and new parts. If we include an existing part at an appropriate level in
the structure of elements, an element at a lower level will be capable of that part’s
processes in addition to those of the higher-level elements.
* * *
It is possible, in principle, to combine these hierarchies as one within another
and keep their totality as a simple structure, as the theory says. But the resulting
application would be useless, because it would be capable of just a fraction of the
interactions required between the structures. Here is a simple example. Suppose we
have three hierarchies, for accounting, database, and display functions. Even if
they are correct and complete, we cannot use them to create an object-oriented application simply by combining them one within another: an element that needs accounting, database, and display processes at the same time would have to inherit attributes from all three hierarchies at once, and a strict hierarchical structure cannot provide this.
Another delusion is in ignoring the structure that is the flow of execution. As we
saw under structured programming, this is a complex structure, determined by the run-time evaluation of thousands of conditions; it cannot be made to coincide with the static hierarchy of objects.
* * *
As usual, the software charlatans covered up the failure of object-oriented programming by annulling its principles, one by one, while continuing to promote it with the original claims.
The original object-oriented idea was to design the whole application as one hierarchical structure of objects. The first change was to abandon this idea.
In particular, those all-embracing hierarchies, which were to contain entire applications, were replaced with many small, independent hierarchies whose objects are free to interact in any way we like; that is, without hierarchical restrictions.
The second change was to degrade the notion of inheritance, from a precise, immutable property of hierarchical structures, to a vague operation that can do almost anything. Specifically, the attributes inherited by a given element can be modified, and even omitted; also, the element can inherit attributes from elements in other hierarchies too, not just from the higher levels of its own hierarchy. With this change, therefore, an element can have any attributes we like. But attributes determine how elements are related to form hierarchical structures. Thus, if an element can be related to the others in any way we like, the application can have any number of additional structures; it is no longer restricted to one hierarchy. By annulling the fundamental principle of inheritance, this change has restored the freedom to implement requirements without the hierarchical restrictions.
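This degraded notion of inheritance is easy to demonstrate in Python (a hypothetical sketch; the classes are invented for illustration):

```python
# Hypothetical sketch of the degraded notion of inheritance: inherited
# attributes can be overridden, and a class can inherit from unrelated
# hierarchies, so an element is no longer confined to one structure.

class Persistable:               # one hierarchy: storage concerns
    def save(self) -> str:
        return "saved"

class Drawable:                  # a separate hierarchy: display concerns
    def draw(self) -> str:
        return "drawn"

class Invoice(Persistable, Drawable):  # inherits across both hierarchies
    def save(self) -> str:             # and modifies an inherited attribute
        return "saved with audit trail"

inv = Invoice()
print(inv.save())  # saved with audit trail
print(inv.draw())  # drawn
```

`Invoice` now belongs, through its attributes, to two unrelated hierarchies at once, and its inherited behaviour is no longer fixed by its position in either one.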
In the end, abolishing the object-oriented principles restored the freedom we always had with the traditional languages; but we must now exercise this freedom through complicated and roundabout means, while the theory continues to be promoted with its original claims.
The relational database model
The theory behind the relational database model claimed that it is possible to turn
database design and use into an exact, error-free activity, grounded in mathematics.
To appreciate the absurdity of these claims, let us start with a brief summary of the traditional database concepts. A business application’s data is stored mostly in the form of indexed data files; that is, data files and the associated index files. The data files contain the actual data, organized as records. Each record has a number of fields, which contain the basic values needed by the application (customer number, product description, transaction date, and the like). The index files are based on the values present in certain fields (called key fields), and are needed in order to access specific data records, or to scan records in a specific sequence; thus, a data file may have several index files. Most files must be interrelated, and this is done by using their fields; for example, if an invoice file has fields for customer and invoice numbers, we will be able to relate it to the customer file and to select the records representing a certain customer’s invoices.
Several operations, provided by a file management system, are needed to manipulate
indexed data files; in particular, to add, read, modify, and delete records. In the
application, these operations, as well as the data records and fields, are handled
using the regular features of a programming language – the same features used to implement the rest of the application.
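The traditional concepts just described can be sketched in Python (a hypothetical illustration with invented data: records with fields, an index built on a key field, and two files related through a shared field):

```python
# Hypothetical sketch of the traditional concepts: data records with
# fields, an index built on a key field, and two files related through
# a shared field (the customer number). All data is invented.

customers = [
    {"cust_no": "C1", "name": "Acme"},
    {"cust_no": "C2", "name": "Zenith"},
]
invoices = [
    {"inv_no": "I10", "cust_no": "C1", "amount": 250},
    {"inv_no": "I11", "cust_no": "C2", "amount": 90},
    {"inv_no": "I12", "cust_no": "C1", "amount": 40},
]

# an index on the key field: value -> positions of matching records
inv_by_cust: dict[str, list[int]] = {}
for pos, rec in enumerate(invoices):
    inv_by_cust.setdefault(rec["cust_no"], []).append(pos)

# use the index to select one customer's invoices
acme_invoices = [invoices[p] for p in inv_by_cust["C1"]]
print([r["inv_no"] for r in acme_invoices])  # ['I10', 'I12']
```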
* * *
The theorists noticed a similarity between the elements of data files and those of the logic system known as predicate calculus, and jumped to the conclusion that a practical database model can be invented based on this similarity. The concept of data files was replaced with the concept of tables, while the records and fields became the rows and columns of tables. There are key fields just as in the traditional data files, but no index files (their function is intrinsic to the new database operations).
Then, said the theorists, if we restrict the data stored in tables so as to match the entities permitted in predicate calculus, the traditional file operations can be replaced with operations based on mathematical logic: selecting certain rows or columns from a table, combining rows or columns from two tables, and so forth. The operations start with one or two tables and produce as result a new table. (To access individual records or fields, we create tables with only one row or one column of a row.) Thus, compared with the traditional operations, the relational operations are simpler. Also, since they are mathematical, the resulting data is guaranteed to be an accurate reflection of the original data.
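The relational operations can be sketched in a few lines of Python (a hypothetical illustration, not the model's formal definition; tables are lists of rows, and every operation produces a new table):

```python
# Hypothetical sketch of the relational operations: each one takes
# whole tables (lists of rows) and produces a new table. Data invented.

customers = [{"cust_no": "C1", "name": "Acme"},
             {"cust_no": "C2", "name": "Zenith"}]
invoices = [{"inv_no": "I10", "cust_no": "C1", "amount": 250},
            {"inv_no": "I11", "cust_no": "C2", "amount": 90}]

def select(table, pred):        # keep the rows satisfying a condition
    return [row for row in table if pred(row)]

def project(table, cols):       # keep only the chosen columns
    return [{c: row[c] for c in cols} for row in table]

def join(t1, t2, col):          # combine rows that match on a column
    return [{**r1, **r2} for r1 in t1 for r2 in t2 if r1[col] == r2[col]]

big = select(invoices, lambda r: r["amount"] > 100)
result = project(join(big, customers, "cust_no"), ["inv_no", "name"])
print(result)  # [{'inv_no': 'I10', 'name': 'Acme'}]
```

Note that the operations never reach below the level of whole tables; accessing one field means first constructing a one-row, one-column table.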
The relational model is a typical mechanistic fantasy. In order to attain an exact, mechanistic representation of the database, the theorists extracted it from the system of structures that make up the application. Then they simplified it further by excluding all features that did not correspond to their intended mathematical model. In the end, what was left of the database was indeed mathematical – but it had lost all practical value.
The model of an isolated database is perhaps an interesting theoretical concept, but in practice the database is part of an application, so this model is useless. The theorists deliberately separated the database from the application in order to avoid the complexity of their interactions, and then promoted the exactness of the resulting model as a benefit for the whole application. It is not surprising that academic charlatans pursue such fraudulent ideas; but it is shocking to see practitioners and businesses accept them.
* * *
Let us examine the mathematical claims. Since the relational model was meant from the start for databases separated from their applications, the mathematical benefits were always a lie. The model guarantees that the result of an operation is correct if the data in the database is correct. But the correctness of the data depends on the application’s requirements, so it must be determined in the application, outside the scope of the model; moreover, the requirements cannot be expressed mathematically. In the end, data validity must be enforced the way it always was. Thus, since a system cannot be more exact than the least exact of its parts, the relational operations are ultimately no more exact than the traditional ones.
The theorists also claim that the database design process is mathematical, and this too is a lie. This process, called normalization in the relational model, entails decisions about the format, use, and dependencies of fields in interrelated files. Relational textbooks impress us with difficult terminology, definitions, theorems, formulas, and diagrams, but despite the mathematical tone, normalization is not a mathematical theory. It is just a formal study of field dependencies, and cannot help us in the design process. All design decisions must be made informally, using personal skills, just as they are for traditional databases (because they depend on the application’s requirements, which cannot be reduced to mathematics, and lie outside the scope of the model in any case). Thus, the actual design examples shown in textbooks following the many pages of formal discussion are, ironically, informal.
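The kind of decision normalization formalizes is simple to illustrate. A hedged sketch with SQLite (table and field names are hypothetical): a field that depends on a non-key field is moved into its own table.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Unnormalized: the customer's name is repeated in every order row,
# because it depends on cust_id rather than on the order key.
con.execute("CREATE TABLE orders_flat (order_id INTEGER, cust_id INTEGER, cust_name TEXT)")
con.executemany("INSERT INTO orders_flat VALUES (?,?,?)",
                [(10, 1, 'Ada'), (11, 1, 'Ada'), (12, 2, 'Bob')])

# Normalized: the dependent field moves to its own table, keyed by cust_id.
con.executescript("""
    CREATE TABLE customers (cust_id INTEGER PRIMARY KEY, cust_name TEXT);
    CREATE TABLE orders    (order_id INTEGER PRIMARY KEY, cust_id INTEGER);
    INSERT INTO customers SELECT DISTINCT cust_id, cust_name FROM orders_flat;
    INSERT INTO orders    SELECT order_id, cust_id FROM orders_flat;
""")

# The name is now stored once per customer instead of once per order.
names = con.execute("SELECT cust_name FROM customers ORDER BY cust_id").fetchall()
```

Whether this split is worth making in a given application — the real design decision — is exactly the informal judgment the formal apparatus cannot supply.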
* * *
The separation of the database from the application caused more than the loss of the promised mathematical benefits. Since an application must interact continuously with its database, many changes had to be made over the years to reinstate the links that the original model prevented. Another problem was the restriction to operations on files (tables). These operations are indeed simpler, but most links with the application must be at the lower levels of records and fields, and the additional operations required were found to be too complicated and too slow. Thus, changes also had to be made to reinstate the low-level operations.
In the end, practically all relational concepts had to be abandoned. They were replaced with concepts that allow us to do, in roundabout and complicated ways, exactly what the traditional database concepts allowed us to do all along. The relational model continued to be promoted, though, with the original promises. The following discussion is necessarily only a brief survey of this degradation.
Strict data normalization was impractical, so this fundamental relational principle was annulled and replaced with the informal criteria of the traditional design methods (through new “features” like the ludicrously named denormalization and non-first-normal-form designs).
Then, in order to reduce the need to link the database entities to the application’s entities, and also to provide the means to access individual records and fields, more and more parts of the application were moved into the database system in the guise of new relational features. This stratagem started with data validation functions, but quickly expanded to allow any operations. And as the operations became increasingly varied, special languages were invented to support them. Thus, operations that were naturally and logically implemented in the application using ordinary languages were replaced with complicated alternatives for no other reason than to bypass the relational restrictions. And all this time, the shift was presented as an enhancement of the relational model.
SQL is known as the official relational database language, but in reality it is the official means, not of implementing, but of overriding, the relational principles. From its modest beginning as a query language, SQL has grown to its enormous size as a result of the enhancements introduced in order to provide full-featured programming capabilities.
But SQL also allowed the relational charlatans to abolish the last remnants of their fantasy: the relational database operations are no longer used, and we access files through new operations that are practically identical to the traditional ones (for example, we can add, read, modify, and delete individual records, scan records one at a time, and work with individual fields).
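The record-at-a-time character of this access can be sketched with SQLite through Python (table and field names are hypothetical); each statement below touches individual records or fields, much like traditional indexed-file operations:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parts (part_no INTEGER PRIMARY KEY, qty INTEGER)")
con.executemany("INSERT INTO parts VALUES (?,?)", [(1, 5), (2, 0), (3, 7)])

# Scan records one at a time, as with a traditional sequential read.
cur = con.execute("SELECT part_no, qty FROM parts ORDER BY part_no")
scanned = [row for row in cur]

# Read, modify, and delete individual records by key, and work with
# individual fields -- the traditional low-level operations in SQL dress.
con.execute("UPDATE parts SET qty = qty + 1 WHERE part_no = 1")   # modify one record
con.execute("DELETE FROM parts WHERE part_no = 2")                # delete one record
qty = con.execute("SELECT qty FROM parts WHERE part_no = 1").fetchone()[0]
```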
Finally, moving more and more parts from the application into the database system made programming so complicated that a new feature had to be invented to move them back into the application. This feature, known as embedded SQL, lets us implement in a traditional language the entire application, including all database requirements, and invoke SQL statements here and there as needed. So applications look now about the same as those that use the traditional database concepts.
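The embedded-SQL style described here — the whole application written in a traditional language, with SQL statements invoked where needed — looks roughly like this (Python and SQLite standing in for the host language and database; the function and names are hypothetical):

```python
import sqlite3

def process_day(con, orders):
    """Ordinary application logic in the host language, with SQL
    statements embedded wherever database access is needed."""
    total = 0.0
    for order_id, amount in orders:    # host-language control flow
        if amount <= 0:                # validation in the application
            continue
        con.execute("INSERT INTO orders VALUES (?, ?)", (order_id, amount))
        total += amount
    return total

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
day_total = process_day(con, [(1, 10.0), (2, -3.0), (3, 5.5)])
```

Control flow, validation, and calculations stay in the application, and the database calls are just individual statements — which is essentially how traditional database access always worked.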
The relational database model is one of the greatest frauds ever perpetrated in a modern society. And the ease with which the software charlatans have persuaded practitioners to forsake proven database management principles and to depend instead on the relational imbecilities is a vivid demonstration of the incompetence and corruption that permeate the world of programming. (The full discussion of this fraud can be found in the section “The Relational Database Model” in chapter 7 of Software and Mind.)