- "Object-oriented programming is exciting if you have a statically-typed language without lexical closures or macros." Smalltalk, Ruby, C#, Scala, and Python all have lexical closures, and their use of objects is quite exciting (lexical closures are being added to Java real soon now). All Smalltalk control structures are user-defined as well, although it doesn't have full macros. It is true that objects can be used as a stand-in for closures, so there is a little truth to this comment. On the other hand, object-oriented programming is quite popular in dynamically typed languages, so I'm not sure why Paul thinks OO is tied to static typing.
- "Object-oriented programming is popular in big companies, because it suits the way they write software." This is ridiculous. Smalltalk, Ruby, PHP, Python, and Lua (to name a few) are all quite popular but are not tied to "big companies". Lots of people like C++ too, at big and small companies. I think that Paul is showing a surprising lack of awareness of reality here.
- "Object-oriented programming generates a lot of what looks like work." Object-oriented programs are often more verbose than other styles. Partly it's all the types, which is why Smalltalk, Ruby, Python, etc. are more concise than Java. But partly it is because OO languages encourage (require?) programmers to create modules and put in extensibility hooks everywhere, and these hooks take up space. The hooks are called classes and methods. Haskell programs are usually concise, but are often not very extensible or interoperable.
- "If a language is itself an object-oriented program, it can be extended by users." "Overloading"? This has nothing to do with objects! What are you thinking, Paul? Overloading is about selecting an appropriate method based on the static types of its arguments.
- "Object-oriented abstractions map neatly onto the domains of certain specific kinds of programs, like simulations and CAD systems." Yes, OO abstractions map very neatly into certain kinds of programs, like GUIs, operating systems, services, plugin architectures, etc. They are not good for everything, certainly, but they are good for lots of domains.
Here is a quick dictionary to translate OO names into Lispish descriptions.
- "Dynamic dispatch" is just calling a function value.
- "Polymorphism" is two different function values that have the same interface.
- "Objects" are just functional representations of data.
- "Classes" are just functions that create collections of first-class functions.
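All four entries of the dictionary can be seen in one small sketch (the names make-point, getx, and so on are made up for illustration; nothing here beyond standard Scheme is assumed):

```scheme
; A "class" is just a function that creates a collection of
; first-class functions, packaged behind a message dispatcher.
(define (make-point x y)
  (define (getx) x)
  (define (gety) y)
  (define (move dx dy) (make-point (+ x dx) (+ y dy)))
  (lambda (msg . args)          ; the "object"
    (case msg
      ((getx) (getx))
      ((gety) (gety))
      ((move) (apply move args))
      (else (error "unknown message" msg)))))

; "Dynamic dispatch" is just calling a function value:
(define p ((make-point 1 2) 'move 3 4))
(display (p 'getx))             ; prints 4
```

Here make-point plays the role of a class, the returned lambda is the object, and any other function answering the same messages would be "polymorphic" with it.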
It is interesting to note that OO programs make more use of higher-order first-class functions (because all objects are collections of first-class functions) than most functional programs. This is another reason that OO is hard to grok. But Paul shouldn't have a problem with that.
As a small example, which do you think is a better approach to files? Here is the conventional approach without objects:
(define (scan stream)
  (if (not (at-end? stream))
      (begin
        (print (read stream))
        (scan stream))))

(scan (open-input-file "testdata.txt"))

This is very limiting, because it requires a global read function that can understand how to read from every kind of stream! If I want to create my own kind of stream, I'm out of luck.
Now here is the OO version:
(define (scan stream)
  (if (not (stream 'at-end?))
      (begin
        (print (stream 'read))
        (scan stream))))

(scan (open-input-file "testdata.txt"))

This is very nice, because anyone can implement a function that understands the 'at-end? and 'read messages. It's immediately extensible!
Remember, Paul, that the lambda calculus was the first object-oriented language: all its data is represented behaviorally, as objects. Are you sure you aren't using objects?
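For instance, a Church pair transcribed into Scheme (pair, fst, and snd are illustrative names, not standard procedures) is pure behavior: the "data" is just a function waiting to be asked a question:

```scheme
; A Church pair is a function that applies a selector to its parts.
(define (pair a b) (lambda (sel) (sel a b)))
(define (fst p) (p (lambda (a b) a)))
(define (snd p) (p (lambda (a b) b)))

(display (fst (pair 1 2)))  ; prints 1
(display (snd (pair 1 2)))  ; prints 2
```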
7 comments:
Funny. I am busy writing a compiler and did think in the past about doing an OO style language lisp style. Very similar to the one you are showing here :)
"Haskell programs are usually concise, but are often not very extensible or interoperable."
Hi William,
I found your paper, "On Understanding Data Abstraction, Revisited," to be very clear and carefully expressed. In contrast, the statement above rings with the naive zealotry of a Slashdot post, and I know that doesn't describe your work. Could you please define extensible and interoperable in the context of that claim? Perhaps an example, too?
Hi Micheal, thanks for stopping by. I suppose I allow myself to make unsupported statements in blogs, which I would not make in a published paper. On the other hand, my sense is that Haskell programs tend to be more stand-alone. When was the last time you wrote a Haskell program and then reused part of it for another project, without copying the code? I have done this in Java, but not in Haskell. My comment is an observation/hypothesis, not a proven fact. It is a fact that Haskell types are not extensible, while OO classes are. That's the kind of thing I'm referring to.
Thanks for the reply, William!
"When was the last time you wrote a Haskell program and then reused part of it for another project, without copying the code?"
Recently, I wrote a personal and resource scheduling system in Haskell. It involved expressing business rules and manipulating queries in the relational algebra (RA), using GADTs and combinators. For the prototype, the RA code simply created a parse tree that could be walked to render sample output. For the application (and a current project), the RA code was extracted as a library and extended to generate SQL queries.
"It is a fact that Haskell types are not extensible, while OO classes are."
Are you excusing OO -- but not Haskell -- from Wadler's `Expression Problem'?
Cheers
Sorry, that's "personnel."
Cheers!
My original point is simply that object-oriented programming tends to organize problems in a way that promotes extensibility, at the cost of conciseness. It's a question of default behavior. By default, classes can be extended with new data variants and methods. In Haskell, doing the same thing usually requires more pre-planning. In other words, OO languages have extensibility hooks built into the standard abstraction mechanisms, while in FP those kinds of deep extensibility hooks have to be added by the programmer. But a program without extensibility hooks tends to be shorter than one with them. Does that make sense?
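To make that concrete, here is a sketch in the message-passing style from the post (make-list-stream and make-constant-stream are invented names): each new stream variant is a brand-new function, and no existing code has to change to accommodate it:

```scheme
; Any function answering 'at-end? and 'read is a stream.
(define (make-list-stream items)
  (lambda (msg)
    (case msg
      ((at-end?) (null? items))
      ((read) (let ((x (car items)))
                (set! items (cdr items))
                x)))))

; A new variant, defined without touching anything above:
(define (make-constant-stream x n)
  (lambda (msg)
    (case msg
      ((at-end?) (= n 0))
      ((read) (set! n (- n 1)) x))))
```

Both work with the scan function from the post; the extensibility hook is the dispatch on messages, and it comes for free with the representation.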
Thanks William. Yes, your point seems clearer to me now.
I admit that I sneer reflexively whenever I read an OOP vs. FP argument, but not as a fan of either. I use Smalltalk and Haskell in my daily work, and I derive greater understanding by comparing them directly rather than through the marred lenses of OOP and FP. I also use Erlang and OCaml which just stretches the continuum.
My concern is not with this point you've clarified, but with the old saw that OOP is extensible where FP is not. It's not that simple, and Wadler's Expression Problem just scratches the surface: Java generics and C++ templates mitigate the costs of adding an operation for typed OO languages; Haskell type classes mitigate costs in an FP language of adding a type that works with existing operations; but it's not simply the cost of writing the extension.
Preventing the extension from breaking existing or future consumers is just as problematic, and automatic hooks don't buy you a thing. You have to do the up front planning, whether in Java or Haskell. From that perspective, automatic hooks might cost you because they lead you to believe that extensibility will come easily, that you needn't plan. Even the oft lauded Smalltalk collection hierarchy suffers from this, as following the alternating implementations of #new as "self shouldNotImplement" vs. "self basicNew ..." will attest.