I've already voted it up, and don't want to violate social conventions here, but consider this comment another vote for the LtU thread mentioned above. A fantastic and educational argument by people who know what they're talking about. If you're interested in these sorts of details it's good reading, no matter what side of the argument you happen to be on.
I think Wand's result is a straw man if used against fexprs - and Wand really didn't use it that way. Essentially, he says that with fexprs there is no source => source transformation that works in all contexts. OK - but I am not interested in such transformations, and very few programmers are. The whole practical point of these transformations is that they help implementers build optimizing compilers.
But - if someone wants source -> source transformations that work in ALL contexts, whatever the reason, they can limit their own use of fexprs - and hence allow various degrees of such transformations. For example, they can use fexprs under the same restrictions that apply to macros, in which case fexprs can be "compiled away" just as macros can. And they can do this on their own, in some or all of their projects, or even in only some parts of a project. There is no need to make a collective decision of that kind. Again, this is consistent with the idea of Lisp, in which the borderline between language designer and programmer is blurred - or at least it was imagined that way initially.
Shutt has an interesting and novel approach, and its value is independent of the severity of the variable-shadowing problem he addressed. And how severe is that problem? Not very. It is actually the same problem that exists with macros, and the techniques CL and Scheme use for macros (gensyms, hygiene) can work just as well with fexprs.
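To make the gensym point concrete, here is a minimal sketch in Python (all names here are hypothetical, not from any real Lisp): a macro-style expander over nested-tuple syntax that binds its temporary to a freshly generated symbol, so a user variable that happens to be named `tmp` cannot be captured. The same discipline carries over to an fexpr that constructs code.

```python
import itertools

_counter = itertools.count()

def gensym(prefix="tmp"):
    """Return a fresh symbol name that cannot collide with user code
    (the '#:' prefix is not a legal identifier in the toy syntax)."""
    return f"#:{prefix}{next(_counter)}"

def expand_swap(a, b):
    """Expand (swap! a b) into a let form using a gensym'd temporary.
    Without gensym, swapping a variable literally named 'tmp' would
    silently capture it; with gensym, the binding is always fresh."""
    tmp = gensym()
    return ("let", ((tmp, a),),
            ("set!", a, b),
            ("set!", b, tmp))

expand_swap("x", "tmp")
# the bound temporary is a fresh '#:tmp...' name, never the user's 'tmp'
```

The point of the sketch is only that hygiene is a property of how code is constructed, not of whether the constructor is a macro or an fexpr.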
There's a similarity in that neither fexprs nor lazy functions evaluate their arguments up front, but lazy functions will eventually, if they run and access that argument at all, get an evaluated version of it. Fexprs can get the actual raw unevaluated argument and poke at it, so if you call (foo (+ 1 2)), foo gets the syntax-tree (+ 1 2) if it's an fexpr, while if it's a function, lazy or otherwise, it will only ever see the value 3.
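The distinction above can be sketched in a few lines of Python (a toy evaluator over nested-tuple syntax, with hypothetical names): an "fexpr" receives the raw form `(+ 1 2)`, while an ordinary function only ever sees the value `3`.

```python
def evaluate(expr, env):
    """Evaluate a form: numbers are self-evaluating, strings are symbols
    looked up in env, and a compound form dispatches on its operator."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, *args = expr
    operator = env[op]
    if getattr(operator, "is_fexpr", False):
        # Fexpr: pass the argument forms through unevaluated.
        return operator(*args)
    # Ordinary function: evaluate the arguments first.
    return operator(*(evaluate(a, env) for a in args))

def show_raw(form):
    """An fexpr: it sees the syntax tree itself."""
    return form
show_raw.is_fexpr = True

def show_value(value):
    """An ordinary function: it sees only the evaluated result."""
    return value

env = {
    "+": lambda *xs: sum(xs),
    "show-raw": show_raw,
    "show-value": show_value,
}

print(evaluate(("show-raw", ("+", 1, 2)), env))    # ('+', 1, 2)
print(evaluate(("show-value", ("+", 1, 2)), env))  # 3
```

A lazy function would sit between the two only in *when* the evaluation happens, not in *whether* it happens: forcing the argument still yields `3`, never the tree.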
...so, basically it gives the same ease of understanding program flow that lazy evaluation offers, but without the distracting complexity of referential transparency? Okay, then.
So the value of your words depended on evaluating them in a sarcastic context. This lack of referential transparency made it difficult to reason about them at compile-time.
In this way, they are similar to C#'s expression trees: when a lambda is passed as an argument whose parameter is of type Expression&lt;T&gt;, the method receives an expression tree describing the lambda rather than an actual callable object.
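A rough Python analogue (hedged: Python exposes this through the `ast` module rather than through a parameter's declared type): parsing source yields an object that *describes* the code, not a callable, much as an Expression&lt;T&gt; parameter receives a tree instead of a delegate.

```python
import ast

# Parsing yields a tree describing the lambda, not something invokable.
tree = ast.parse("lambda x: x + 1", mode="eval")

print(isinstance(tree.body, ast.Lambda))      # True: a description of code
print(isinstance(tree.body.body, ast.BinOp))  # True: the 'x + 1' node

# By contrast, compiling and eval-ing it produces an ordinary callable,
# which can be invoked but no longer inspected as syntax.
fn = eval(compile(tree, "<expr>", "eval"))
print(fn(2))  # 3
```

The difference from fexprs is that in C# (and in this sketch) the tree/value split is fixed at the call site's type, whereas an fexpr makes it a property of the operator itself.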
(GNU Guile maintainer speaking. No, we don't have fexprs any more. Yes, we are finally getting back to the level of other Schemes.)