Declarative versus Imperative

I responded to a recent discussion Declarative vs imperative programming on the Types mailing list, but my post was rejected because of a bad mail header. By the time I got around to fixing it, the discussion was old. But for what it's worth, here are my thoughts:
I have been thinking about the use of the words "declarative" and "imperative" for some time. This is my understanding of how they are commonly used in computer science today:
Declarative: describing "what" is to be computed rather than "how" to compute the result/behavior
Imperative: a description of a computation that involves implicit effects, usually mutable state and input/output.
As others have pointed out, our usage originally came from the distinction between imperative and declarative statements in natural languages. However, I think that our usage has diverged significantly from this origin, so that the words no longer form a dichotomy.
For example, one might argue that a finite state machine is both declarative and imperative in the computer science senses. I think Uday made a similar suggestion. There is also Wadler's classic paper on "How to declare an imperative".
Another hint that "declarative" and "imperative" are not antonyms is that the definitions don't have any significant words in common. The antonym of "imperative" in common usage is "pure functional". I don't know of a widely used word that acts as an antonym for "declarative", although "operational" might work. It may be that "imperative" has a connotation of "step-by-step", which hints at it being the opposite of "declarative", but this connotation is fairly weak at this point. If we wanted to, we could try to force "declarative" and "imperative" to be antonyms, possibly by redefining what we mean by "imperative", but I'm not sure that would be an improvement.
I agree with those who say that "declarative" is a spectrum. For example, some people say that Haskell is a declarative language, but in my view Haskell programs are very much about *how* to compute a result. It is true that many details about how are left out (memory management, order of operations, etc). But if you compare a Haskell program with a logical specification (pre/post conditions), they are quite different. Thus while I would say Haskell is more declarative than many other programming languages, Haskell is not a declarative language in the strongest sense of the word. Haskell programs are not specifications; they are computations, in the sense that they say how to compute an answer.
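To make the contrast concrete, here is a minimal sketch (my illustration, not from the post, assuming Haskell): `qsort` says *how* to construct a sorted list, step by step, while `specHolds` only states *what* a correct output looks like, in the style of a post-condition.

```haskell
-- "How": a computation that constructs the answer
import Data.List (sort)

qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (p:xs) = qsort [x | x <- xs, x < p] ++ [p] ++ qsort [x | x <- xs, x >= p]

-- "What": a checkable post-condition; it never says how to sort
isSorted :: Ord a => [a] -> Bool
isSorted ys = and (zipWith (<=) ys (drop 1 ys))

specHolds :: Ord a => [a] -> [a] -> Bool
specHolds input output = isSorted output && sort input == sort output
```

The specification is the weaker, more "declarative" artifact: many algorithms satisfy it, and it says nothing about recursion, partitioning, or order of evaluation.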
Here is a quick shot at a spectrum between "how" and "what". Each level has a quick summary of the "how" that is involved, and it also includes all the "hows" listed below it. I suspect that many of you might disagree with the placement or absence of various languages, so I cannot claim that this list is definitive.
 More "How"
How the machine works
How memory is managed
   C, C++, etc
Order of operations
   Java, Smalltalk, Python, Ruby, etc
How data flows (with issues like nontermination and cut)
   Haskell, Prolog, Lambda Calculus (in various forms)
----split between Programming and Specification Languages---
Restricted Specification Languages
   BNF, SQL, Excel, Statecharts
Logical specification languages
   VDM, Z, B, Alloy
 More "What"
The idea that a specification language (by definition) cannot be executed is widely held but false. I consider BNF to be a simple counter example. BNF is clearly a specification language, and it clearly has efficient execution (parsing) strategies.
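To make that point concrete, here is a hedged sketch (my example; the grammar and parser are my assumptions, not from the post) of how a BNF rule reads both as a specification and as an execution strategy. The rule `digits ::= digit digits | digit` specifies the language of non-empty digit strings, and a recursive-descent recognizer falls out of it almost mechanically:

```haskell
import Data.Char (isDigit)

-- Each BNF nonterminal becomes a function returning the unconsumed input.
--   digits ::= digit digits | digit
digits :: String -> Maybe String
digits (c:rest)
  | isDigit c = case digits rest of
      Just r  -> Just r     -- alternative: digit digits
      Nothing -> Just rest  -- alternative: single digit
digits _ = Nothing          -- no leading digit: the rule fails
```

The same text serves as a declarative description of a language and as the skeleton of an efficient parser.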
As for the objects/type distinction that sparked this discussion, I think the discussion was pretty reasonable, and I would like to thank Jonathan for presenting my views so articulately.
I agree with Uday that we need to get our own house in order. These kinds of discussions are a good start.


Anonymous said...

It's worth noting that it's been a long time since assembly has actually been "how the machine works". Assembly doesn't reflect anything to do with pipelining, branch prediction, hyperthreading, virtualization, multi-level caches, or any of the other 101 tricks that hardware folk use to make our machines work. For that matter, most C usage (including the standard library) is a _long_ way from how memory actually works in modern machines.

Sean McDirmid said...

Imperative + declarative is a conceivable combination; consider putting your imperative program in an infinite loop! The effects are applied continuously and time-based ordering is eliminated as a concern.

I'm of the opinion that objects naturally have identity and encapsulated state, and therefore support imperative forms of programming. They might support declarative programming also, but this is orthogonal.

SayCheese said...

The more a programming language interfaces with the machine, the more imperative it is.

The more it interfaces with algorithms, the more declarative it is.

Anonymous said...

I think that the most accurate definition of declarative is:
A language L has a declarative semantics iff there is an interpretation of L in terms of sets of individual terms (aka values).

I.e., there is a model theory for the language.

Kowalski's dual interpretation of logic programming was exactly that the same formula can have an interpretation as a program and as a formula over sets of individuals.
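A minimal sketch of such an interpretation, in Haskell (my illustration, not the commenter's): each term of a tiny expression language denotes a value directly, so the semantics is a mapping into a domain of individuals rather than a sequence of machine steps.

```haskell
data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

-- A model-theoretic reading: 'denote' maps each term to the value it
-- stands for, with no reference to evaluation order or machine state.
denote :: Expr -> Int
denote (Lit n)   = n
denote (Add a b) = denote a + denote b
denote (Mul a b) = denote a * denote b
```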
