Hacker News

Very much true. During my education I was massively disappointed in Prolog, because the programming process was something like this:

1) write it declaratively

2) figure out that your program will take days to complete

3) add ugly procedural hacks (cuts) to make it more efficient
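For what it's worth, the cut's effect — committing to one answer and throwing away the remaining choice points — has a rough analogue in generator-based search. A minimal Python sketch (the predicate and data are invented for illustration, not taken from any real Prolog program):

```python
def solutions(xs, goal):
    """Enumerate every x satisfying goal, like a Prolog predicate
    that leaves choice points behind for backtracking."""
    for x in xs:
        if goal(x):
            yield x

# Exhaustive search: "backtracking" explores all answers.
all_evens = list(solutions(range(10), lambda x: x % 2 == 0))

# The "cut": commit to the first answer and discard the rest,
# trading completeness of the search for speed.
first_even = next(solutions(range(10), lambda x: x % 2 == 0))
```

The analogy only goes so far — a real cut also prunes clause alternatives, not just remaining answers — but it shows the same trade: you give up the declarative reading to control the search.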

To an extent, you have this process in any language (write->measure->optimize), but typically not in a way that forces you at gunpoint to violate the paradigm you're working in.



An excellent point - Prolog fans (usually those who never actually programmed anything) would describe it as a declarative language.

However, any "real" Prolog program that I ever saw was really using it as a slightly odd procedural language.


It doesn't fail in every case. Our research group wrote a robust natural-language parsing system in Prolog (with some C extensions). The (aptly named) unification grammar, the lexicon, and most of the productive lexicon are written in a declarative manner.

This really paid off in a new research project whose goal is to write a sentence realizer for the same system. With nearly no modifications, we could reuse the grammar, lexicon, and productive lexicon. The parser and the sentence realizer now use the same grammar and lexicon.

I understand that this may seem somewhat trivial, as if the lexicon and grammar were plain data. However, this is not the case:

- The grammar is written in a declarative manner, where goals are mostly operators that manipulate attribute-value structures. These rules are later compiled to plain Prolog terms via term expansion (DCG-like) for efficiency.

- You don't want to perform some unifications immediately, even when two terms are unified. Most Prolog implementations offer blocked goals, where a goal is suspended until a variable becomes instantiated.
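For readers who haven't seen a unification grammar: the core operation is unifying attribute-value structures, where missing features and unbound variables unify freely. A minimal Python sketch of the idea (illustrative only — no occurs check, no structure sharing, nothing like what a WAM does):

```python
class Var:
    """An unbound logic variable, as in a feature structure."""
    _n = 0
    def __init__(self):
        Var._n += 1
        self.id = Var._n

def walk(t, subst):
    """Follow variable bindings to the current value of t."""
    while isinstance(t, Var) and t.id in subst:
        t = subst[t.id]
    return t

def unify(a, b, subst):
    """Return an extended substitution if a and b unify, else None."""
    a, b = walk(a, subst), walk(b, subst)
    if isinstance(a, Var):
        return {**subst, a.id: b}
    if isinstance(b, Var):
        return {**subst, b.id: a}
    if isinstance(a, dict) and isinstance(b, dict):
        # Only shared features must agree; structures stay "open".
        for key in set(a) & set(b):
            subst = unify(a[key], b[key], subst)
            if subst is None:
                return None
        return subst
    return subst if a == b else None

num = Var()
# Unify an NP with unknown number against one marked singular.
s = unify({"cat": "np", "agr": {"num": num}},
          {"cat": "np", "agr": {"num": "sg"}}, {})
```

After unification, `walk(num, s)` resolves the variable to `"sg"`; unifying `{"num": "sg"}` with `{"num": "pl"}` fails. In Prolog all of this comes for free from the engine, which is exactly why the grammar rules can stay declarative.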

However, I am the first to admit that Prolog makes some classes of problems trivial (unification grammars, parsers). There are also many things that you do not want to do in Prolog, because it would be a waste of time or very inefficient. For instance, in our system the following components are implemented in C or C++:

* Finite state automata for quick lookup of subcategorization frames.

* Part-of-speech tagger for restricting the number of frames for each word before parsing.

* N-gram models that are used as a feature in fluency estimation.

* Tokenization transducer.

* Bit arrays (comparable to Bloom filters) for excluding useless paths in parsing.
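To illustrate the last point: the value of such a bit array is that a "no" answer is definitive while a "yes" may be a false positive, so it can cheaply rule out parse paths before any expensive work. A small Python sketch in the Bloom-filter spirit (sizes, hash scheme, and the category-pair keys are all made up for illustration, not taken from the system described above):

```python
import hashlib

class BitArrayFilter:
    """Bit array in the spirit of a Bloom filter: membership tests may
    give false positives but never false negatives, so a negative
    answer safely excludes a path."""

    def __init__(self, size=1 << 16, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = bytearray(size // 8)

    def _positions(self, key):
        # Derive several bit positions per key from salted hashes.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))

f = BitArrayFilter()
f.add(("np", "vp"))   # record an attested combination
```

A query like `f.might_contain(("np", "vp"))` returns True, while an unattested pair almost certainly returns False — and a False here means the parser can skip that path entirely. This kind of bit-twiddling is exactly the sort of thing that is cheap in C and painful in Prolog.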

Conclusion: use the right tool for the job. Unification, structure sharing, and (some) pattern matching are cheap and easy to use in (WAM) Prolog. Most other things are prohibitively expensive and clumsy in Prolog.


Most programmers have a hard time learning to think in the Prolog way. That's why there is so much bad Prolog code around. But the same can be said of Lisp code.




