
Call-by-Name Specialization of Functional Logic Programs
Maria Alpuente Frasnedo
Partial evaluation is a method for program specialization based
on fold/unfold transformations \cite{JGS93}. Partial evaluation of functional
programs uses only static values of given data to specialize the program.
In logic programming, the so-called static/dynamic distinction is hardly
present, whereas considerations of determinacy and choice points are far more
important for control \cite{Gal93}. In this paper, we formalize a two-phase
specialization method for a non-strict functional logic language which makes
use of normalizing lazy narrowing to specialize the program w.r.t. a goal.
The basic algorithm (first phase) is formalized as an instance of the general
framework for the partial evaluation of functional logic programs of
\cite{AFV96}, using lazy narrowing. A post-processing renaming scheme (second
phase) for obtaining independence is then described and illustrated on the
well-known matching example. We show that our method preserves the lazy
narrowing semantics and is able to achieve the same transformational effect as
the driving mechanism used in supercompilation. We also show how the inclusion
of simplification steps in narrowing derivations can greatly improve control
during specialization, which we illustrate with some examples.
Keywords: integration of functional and logic programming, term rewriting
systems, lazy narrowing, partial evaluation.

Tabulation of Functions in Logic Programs
Jonas Barklund
In addition to improving termination properties of deductive
database queries, `memoing' or `tabulation' allows many problems
to be solved by simpler and more elegant programs without
compromising efficiency.
We argue that by focusing on tabulation of those predicates that
compute functions, we only need a very simple machinery. In
addition it is easy to reclaim memory by deleting individual rows
of tables. Furthermore, there is a large range of data structures
that can be used to represent tabulated functions, e.g., linked
lists, AVL or red/black trees, hash tables, arrays or `sliding
arrays'. This means that a programmer can control the efficiency
of the tabulated program quite precisely. We have an
implementation of these ideas, added to SICStus Prolog and formally
based on OLDT resolution, as presented in a separately available
report.
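The core `memoing' idea is simple enough to sketch in a few lines. The following Python analogue (illustrative only; the actual system extends SICStus Prolog, and all names here are made up) shows a table with one row per argument value, a hit check before computing, and deletion of an individual row to reclaim memory:

```python
# Tabulation of a function: one table row per argument value.
table = {}

def fib(n):
    # On a table hit, reuse the stored row instead of recomputing.
    if n in table:
        return table[n]
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    table[n] = result  # record the new row
    return result

fib(30)           # fills rows 0..30
del table[30]     # reclaim memory by deleting an individual row;
                  # the remaining rows still speed up later calls
```

The choice of dictionary here is incidental; as the abstract notes, linked lists, AVL or red/black trees, hash tables, arrays or sliding arrays would serve equally well as row stores, with different efficiency trade-offs.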

Constraints in functional logic programming
Alexander Bockmayr
The two fundamental concepts in functional logic programming are
functions and relations (or predicates). Constraints are relations over
some specific domain of interest, which occur in many applications and
therefore require special support.
Constraints can be defined over symbolic domains like trees or lists or
over numerical domains like real, rational or integer numbers. From the
viewpoint of applications, one of the most important constraint systems
are finite domain constraints, where the variables take their values in
finite sets of natural numbers.
Finite domain constraints are usually solved by local consistency
techniques from artificial intelligence, combined with enumeration.
We show how to get stronger propagation by solving relaxations of a finite
domain problem and by tightening these relaxations in a branch-and-cut
framework.

Functional-Logic Knowledge Bases for Engineering
Harold Boley
Experience with the functional-logic language RELFUN has been
accumulated by specifying knowledge bases for engineering
applications. Three of these are ELEMENTS (physical/chemical
properties of the atoms), RTPLAST (engineering properties of
recyclable thermoplastics), and muCAD2NC (CAD-to-NC transformation
for production planning). The ELEMENTS knowledge base, discussed
here, represents each atom by a nested relational fact and provides
access functions for computing atom groups and tables.
RELFUN's functions, like relations, can be nonground and nondeterministic;
its relations, like functions, can (eagerly) evaluate their arguments and
permit a higher-order notation.
Both kinds of operators are defined by valued clauses, merging
1. Horn clauses and 2. unconditional equations to
3. conditional equations
(PROLOG's syntax is augmented by the `returns' symbol ``&'',
preceding value premises):
1. pattern :- condition1, ..., conditionN.
   (for N=0: pattern.)
2. pattern :-& value.
3. pattern :- condition1, ..., conditionN & value.
   (for N=0: 2.)
Constructor applications are explicitly distinguished from
defined-operator applications by square brackets vs. parentheses.
For example, RELFUN permits both relational and functional
representations of knowledge about the maximal numbers of electrons
used by atoms' shells:
Relational specification:           Functional specification:
maxelec(1,2).                       maxelec(1) :-& 2.
maxelec(2,8).                       maxelec(2) :-& 8.
maxelec(3,18).                      maxelec(3) :-& 18.
maxelec(Sh,32) :- <(3,Sh,6).        maxelec(Sh) :- <(3,Sh,6) & 32.
While the relation suggests symmetric use, with higher
risk of inversion errors (through free ``<'' calls), as in
maxelec(Res,32), the function suggests directed use,
with risky inversions made more obvious
(via the ``is'' primitive),
as in 32 is maxelec(Res).
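The directed reading of the functional specification corresponds to an ordinary function. A Python transcription of the table above (illustrative only: Python has no counterpart to RELFUN's free-variable inversions, so only the directed use survives):

```python
def maxelec(shell):
    # Maximal number of electrons in shell Sh: 2, 8, 18, then 32 for 3 < Sh < 6.
    if shell == 1:
        return 2
    if shell == 2:
        return 8
    if shell == 3:
        return 18
    if 3 < shell < 6:          # the <(3,Sh,6) condition of the last clause
        return 32
    raise ValueError("maxelec undefined for shell %r" % shell)
```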

Directional Types for Integrating Functions in Logic Programs
Johan Boye and
Jan Maluszynski
Directional types provide a simple way of reasoning about
(functional) logic programs. The idea of directional type
is to associate an input assertion and an output assertion
with each predicate of the program. The directional type
is interpreted as an implication: provided that the input
assertion is true at call, then the output assertion holds
for the call instantiated with any computed answer substitution.
The programs considered in this presentation are definite
programs where some function symbols represent externally
defined functions. The declarative semantics of such a
functional logic program can be given by standard techniques
used in equational logic programming. However, the computational
techniques based on narrowing are not applicable, since the
functions are black boxes which can only be used for reduction
of sufficiently instantiated functional calls to the respective
normal forms. The operational model of the language is then
defined as an extension of SLD-resolution with residuation
of insufficiently instantiated functional calls. The question to
be addressed is for which programs and goals the operational
model is complete. A sufficient condition for this can be
formulated in terms of directional types.

Towards a Logical Foundation for the Parallel Implementation of
Declarative Languages
Manuel M. T. Chakravarty
The calculus $\mathcal D$ provides a formal basis for the
implementation of declarative languages distinguishing between
coordination and computation. It includes support for deep guards,
committed choice, and parallel search. Furthermore, $\mathcal D$
explicitly distinguishes between boxed and unboxed values as well as
between eager and lazy evaluation. A natural mapping of $\mathcal D$
into formulae of linear logic provides a logical semantics that can
be shown to induce an algebraic and a parallel operational
semantics. The latter constitutes a substantial step towards a
parallel abstract machine implementing $\mathcal D$.

Proofs as Programs
John N. Crossley
Since Gentzen's work in the 1940s it has been possible to extract algorithms
from proofs in formal logic. The Curry-Howard isomorphism provides a convenient
but long-winded linear representation of such proofs. In this talk we introduce
reduced Curry-Howard terms which
1. look (almost) like LISP programs,
2. are reusable, and
3. can be obtained directly from ordinary (constructive) mathematical proofs, and in the same manner.
This last point means that instead of having to write out the mathematical proof in full detail in formal logic, we are now, we believe, able to construct programs from (new and pre-existing) proofs as quickly as constructing the proof.
We gave, as an example, a derivation of a program for the predecessor function, and we have also extended the technique to the Euclidean algorithm. (The resulting programs can be written in one normal-length line.) The techniques seem clear and simple but need further exploration.
Finally we remark that since proofs are the most reliable objects we possess, this technique guarantees that the programs we write are correct in the sense that they meet their specifications.

Parallel Narrowing with Simplification
Rachid Echahed
We have presented a new narrowing strategy for
constructor-based weakly orthogonal rewrite systems. The main idea of
our narrowing strategy is the parallel evaluation of necessary sets of
redexes. This leads to a generalization of Sekar and Ramakrishnan's
work on rewriting to narrowing. Our narrowing strategy is also a
conservative extension of an optimal narrowing strategy, needed
narrowing, to weakly orthogonal rewrite systems. Furthermore, our
strategy is the only known narrowing strategy for possibly
nonterminating and overlapping rewrite rules which evaluates ground
terms without any nondeterministic choice.
(This is joint work with Sergio Antoy and Michael Hanus.)

Narrowing-driven Partial Evaluation of Functional Logic Programs
Moreno Falaschi
Languages that integrate functional and logic programming with a
complete operational semantics are based on narrowing, a unification-based
goal-solving mechanism which subsumes the reduction principle of functional
languages and the resolution principle of logic languages. Formal methods of
transformation of functional logic programs can be based on this
well-established operational semantics. In this paper, we present a partial
evaluation scheme for functional logic languages based on an automatic
unfolding algorithm which builds narrowing trees. We study the semantic
properties of the transformation and the conditions under which the technique
terminates, is sound and complete, and is also generally applicable to a wide
class of programs. We illustrate our method with several examples and discuss
the relation with Supercompilation and Partial Evaluation.
To the best of our knowledge, this is the first formal approach to
partial evaluation of functional logic programs.
Keywords: integration of functional and logic programming, narrowing,
partial evaluation, term rewriting systems, equational unification.
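Narrowing, the goal-solving mechanism this abstract and several others build on, can be illustrated with a toy interpreter. The sketch below (my own illustrative code, not the paper's partial evaluation algorithm; all names are made up) solves an equation over Peano naturals by unifying a function call with a rewrite-rule head and continuing on the rule's right-hand side, thereby instantiating logical variables while reducing:

```python
# Toy narrowing over Peano naturals: terms are atoms/variables (strings;
# variables start uppercase) or tuples (functor, arg1, ...).
import itertools

fresh = itertools.count()

def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    # Syntactic unification (occurs check omitted for brevity).
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and \
            a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def rename(t, k):
    # Give rule variables a fresh suffix before each use.
    if is_var(t):
        return t + '_' + str(k)
    if isinstance(t, tuple):
        return (t[0],) + tuple(rename(x, k) for x in t[1:])
    return t

# add(z, Y) -> Y ;  add(s(X), Y) -> s(add(X, Y))
RULES = [(('add', 'z', 'Y'), 'Y'),
         (('add', ('s', 'X'), 'Y'), ('s', ('add', 'X', 'Y')))]

def solve(l, r, s, depth):
    # Enumerate substitutions making l == r, narrowing 'add' calls.
    l, r = walk(l, s), walk(r, s)
    if isinstance(l, tuple) and l[0] == 'add':
        if depth == 0:
            return
        for lhs, rhs in RULES:
            k = next(fresh)
            s2 = unify(l, rename(lhs, k), s)  # narrowing step: unify with rule head
            if s2 is not None:
                yield from solve(rename(rhs, k), r, s2, depth - 1)
    elif isinstance(l, tuple) and isinstance(r, tuple) and l[0] == r[0]:
        yield from solve(l[1], r[1], s, depth)  # decompose (unary) constructors
    else:
        s2 = unify(l, r, s)
        if s2 is not None:
            yield s2

def resolve(t, s):
    t = walk(t, s)
    if isinstance(t, tuple):
        return (t[0],) + tuple(resolve(x, s) for x in t[1:])
    return t

# Solve add(N, s(z)) == s(s(z)): narrowing instantiates N = s(z), i.e. 1 + 1 = 2.
answers = [resolve('N', s)
           for s in solve(('add', 'N', ('s', 'z')), ('s', ('s', 'z')), {}, 6)]
```

The depth bound stands in for the control problem the abstract studies: without a strategy, the narrowing search space grows quickly, which is precisely what partial evaluation of the narrowing trees aims to tame.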

An Abstract Machine for Functional Logic Programming
Michael Gollner
Functional logic programming based on the calculus of normalizing
narrowing unifies the mechanisms of logic and functional
programming paradigms. A crucial point for the success of this
integration is, however, the availability of an efficient
realization. Indeed the efficiency of the Warren Abstract Machine
was the main reason for the breakthrough of logic programming.
Until now, a comparably efficient abstract machine has not been
available for functional logic programming based on the calculus
of normalizing narrowing.
We introduce such an abstract machine and its principles.
The heart of this machine is an incremental normalization
strategy called "normalization in situ".
This strategy, together with an incremental reject rule and a
deterministic verification of conditions (with possible extra
variables) during rewrite steps, makes an efficient
operationalization of normalizing narrowing possible.
An evaluation of a prototype shows that normalizing
narrowing can even be faster than Prolog with cuts.

Declarative Order Sorted Programming
Juan Carlos Gonza'lez Moreno
"Decalrative Order Sorted Programming" is the english
translation of a Spanish project in which we try to build Object
Orientation inside the Functional paradigm. To do this, we began
including in that framework the main static aspects of traditional OOP
(class, inheritance and state). This has been done by means of a purely
algebraic framework based on the idea of representing records as
order-sorted feature terms. In this framework we can express a novel
mechanism called genetic inheritance which allows us to deal with multiple
inheritance in a more flexible way than other proposals; also we are
able to represent abstract data types with free constructors, simulating
them with the attributes we have and using a particular kind of class to
represent constants.
With respect to the dynamic aspects (methods, messages, ...), our
intention is to represent the behaviour as a set of transformation rules
involving not only the expression that usually represents the goal in
functional programming, but also what we call the Object Soup where we
are going to capture the states and their modifications. This is an
ongoing work that is also related to a more general approach in which we
want to integrate declarative programming (not only functional but also
logic) with object-oriented programming.

Concurrency Is Logical - A RISC Approach Towards Calculus Design
Yike Guo
Declarative programming systems are logic programming
systems with varying underlying logics that
determine the semantics and computational model of the
system. One of the important feature of declarative programming is
that the computational model of a language is based on the
deduction model of the logic, which provides the inherent concurrency.
Designing a new declarative programming language is therefore always
accompanied by designing a process calculus abstracting the
properties of the computational model. This causes an ``inflation''
of process calculi which makes it hard to rationalise the
various models: comparing their properties, combining their features
or predicting their performance. It is therefore becoming
increasingly important to develop a uniform formalism which can
specify features or behaviour of a wide class of declarative
computation models, can reason about the properties and
can help the development of efficient implementation of these models.
An obvious choice is to adopt some existing process calculus such as
the $\pi$-calculus. However, one of the important drawbacks of this
approach is that the lack of a uniform meta-theory of process calculi
again obstructs a proper abstraction.
We argue that the goal of a uniform formalism for abstracting
declarative computation model can be partially achieved by
adopting a Logical Process Calculus, which is a fragment of
intuitionistic linear logic.
We show that a wide range of declarative computational models
can be modelled uniformly by linear deduction presented in a Gentzen style
sequent calculus. Formulas in a sequent are regarded as concurrent processes.
Logic operators (connectives and quantifiers) become combinators of these
processes. The operational meaning of these combinators is given by the deduction rules of the logic.
We can therefore present a natural way to code the essential behaviour of
concurrent computation, such as process creation, parallel composition, choice,
local information hiding, synchronisation and message passing into
logic. In the talk, we also present the specifications of
various concurrent computation models including data flow,
graph reduction, residuation, actors and the Oz computation model
in this logical process calculus.

Operational Semantics of Concurrent Functional Logic Programming
Michael Hanus
We specify an operational semantics which combines the most important
operational principles of functional logic languages, namely
residuation and narrowing. Narrowing combines efficient evaluation
principles of functional languages with the problem-solving
capabilities of logic programming. Residuation restricts this
combination but adds the possibility of concurrent computations. We
show that it is possible to combine both principles in a coherent way
such that this combination offers features of all three programming
paradigms.

Higher-Order Lazy Narrowing Calculus
Koji Nakagawa, Tetsuo Ida, Taro Suzuki
A computation model for a higher-order functional logic programming
language is presented. It is defined as an inference system called HLNC
(Higher-order Lazy Narrowing Calculus). HLNC is defined for a
restricted class of rewrite systems, called lambda-TRSs, regarded
as abstract higher-order functional logic programs.
The language of a lambda-TRS allows writing programs with lambda-terms
of long eta-normal form using the higher-order constructs of the
lambda calculus. HLNC is based on a first-order lazy narrowing
calculus LNC of
Okui, Middeldorp and Ida, and hence inherits the property of lazy narrowing
of LNC.
Together with HLNC we show the following basic properties.
The combined system of a left-linear beta-free confluent lambda-TRS
and beta-reduction is confluent. A beta-free lambda-TRS is a lambda-TRS
in which the left-hand side of every rewrite rule contains neither
a beta-redex nor a term of the form (X t_{1} ... t_{n})
with X a free variable.
The completeness of HLNC with respect to normalized strict
solutions holds for left-linear beta-free constructor lambda-TRSs.

Declarative languages for parallel computing
Martin Köhler
The seminar presents research activities of the Imperial College /
Fujitsu Parallel Computing Research Centre, with contributions from
John Darlington, Yike Guo, H. Wing To and Jin Yang.
We present the Structured Coordination Language (SCL), which
expresses our view of parallel computing as computation plus
coordination. SCL combines the notion of distributed data structures
with a finite but extensible set of combinators, which are defined by
functions or axioms. Each combinator is associated with a parallel
behaviour when applied to the distributed data. Together they define a
highlevel layer of coordination for parallel computation, into which
procedures from conventional sequential languages are embedded.
The basics of an existing compiler for the concrete instantiation
of SCL using Fortran as base computation language are briefly
introduced. The combinators are compiled to MPI with Fortran77.
The declarative foundation naturally allows the definition of
applicationspecific parallel abstract datatypes, and we present an
extended example of distributed irregular meshes.
Finally, the extension of our methodology by distributed shared
objects is outlined: it should enable the flexible integration of
large-grain tasks as they can be found in multidisciplinary
optimisation.

The Operational Semantics of Escher
John W. Lloyd
Escher is a declarative, generalpurpose programming language which integrates
the best features of both functional and logic programming languages.
It has types and modules, higher-order and meta-programming facilities,
and declarative input/output.
The main design aim is to combine in a practical and comprehensive way
the best ideas of existing functional and logic languages, such as Gödel,
Haskell, and $\lambda$Prolog.
In this talk, I describe the operational semantics of Escher, which is
based on a rewriting computational model.

Eden - A "Declarative" Language for Parallel Programming
Rita Loogen
Eden is a functional language with explicit process definitions and implicit
communication, suitable for parallel as well as for concurrent programming in
general. It is tailored for parallel systems with distributed memory.
More precisely, Eden extends the lazy functional language
Haskell by new syntactic constructs for explicitly
defining processes. These processes communicate by the exchange of values via
communication channels modelled by lazy lists. Communication is asynchronous
and one-to-one. The kernel part of the language is based on the well-known
approach of stream-based computation and communication.
In addition Eden incorporates special concepts for the efficient treatment of
general reactive systems, i.e. systems which maintain some interaction with
the environment and which may be time-dependent.
The dynamic creation of reply channels simplifies the generation of
complex communication topologies and increases the flexibility of the language.
Predefined nondeterministic processes MERGE and SPLIT are used to model
many-to-one and one-to-many communication in process systems.

Denotational Abstract Interpretation of Functional-Logic Programs
Julio Mariño-Carballo
Due to the interaction between the functional and logic features, abstract
interpretation of FL programs is more complex than for functional or
logic programs alone. For instance, eager narrowing of strict redexes can
speed up narrowing by avoiding recomputation, but in order to do this safely,
additional requirements regarding deterministic execution have to be met.
In turn, deterministic execution depends on sharing and groundness
information. But even more interesting is the fact that with lazy narrowing
those two properties depend on strictness.
This suggests a two-stage process where strictness is used to specialize the
source program (strictification) and then groundness and sharing are
approximated for this new program.
We overview our workplan for the abstract interpretation of FL programs
via abstraction of their denotational semantics. Some of the advantages of the
approach are that we can abstract away from operational detail, i.e. the
results apply to every operational semantics compatible with the denotational
one, and that when the semantics is coded using a standard metalanguage,
safety proofs become much easier.
With all this in mind we have defined a correct analysis for approximating
several degrees of strictness by approximating Babel's standard lazy
denotational semantics. Our approach extends similar work for functional
programs, providing a treatment of unification consistent with the
interpretation of constructors.
More operational properties, like groundness, need a denotational
semantics based on computed answers, whose details we are working out at the
moment. Essentially, FL programs denote functions from constraints to sets of
constraints, where the domain of constraints is naturally ordered by
entailment.

Call by Need Revisited
Aart Middeldorp
The following theorem of Huet and Levy forms the basis of all
results on call-by-need computations in orthogonal term rewriting
systems: every term not in normal form contains a needed redex and
repeated contraction of needed redexes results in the normal form,
if the term under consideration has one. Here a redex is called
needed if it has to be evaluated in order to reach the normal form.
In the talk I generalized this theorem to computations to root-stable
form and I argued that the resulting notion of root-neededness is
more fundamental than (other variants of) neededness when it comes
to infinitary normalization. Unfortunately I didn't have time to
explain our results on decidable approximations to root-stability
and (root-)neededness. The talk was based on joint work with Irene
Durand of the University of Bordeaux.

Higher-Order Narrowing with Definitional Trees
Christian Prehofer
(Joint work with Michael Hanus)
Functional logic languages with a sound and complete operational semantics are mainly based on
narrowing. Due to the huge search space of simple narrowing, steadily improved narrowing
strategies have been developed in the past. Needed narrowing is currently the best narrowing
strategy for first-order functional logic programs due to its optimality properties w.r.t. the length
of derivations and the number of computed solutions. In this paper, we extend the needed
narrowing strategy to higher-order functions and lambda-terms as data structures. By the use
of definitional trees, our strategy computes only incomparable solutions. Thus, it is the first
calculus for higherorder functional logic programming which provides for such an optimality
result. Since we allow higher-order logical variables denoting lambda-terms, applications go
beyond current functional and logic programming languages.

The Logic of VESPER
J. A. Robinson and
Jonas Barklund
VESPER combines a typeless functional programming language
(modelled on pure LISP) with a version of Horn clause
resolution. Its logic is captured in a single simplification
function S which is designed to be applied in parallel to almost
every subexpression of a given expression. VESPER computations
consist of sequences E0, E1, ..., En of expressions in which
Ei+1 is S(Ei) for each i = 0, ..., n-1, and S(En) = En.
We plan an implementation of VESPER in which expressions are
(represented as) graphs whose nodes are processors of identical
design. Every clock cycle each processor simultaneously
applies the function S to (the subexpression represented by)
itself, allocating unused processors from a heap thereof, if
necessary, to represent new subexpressions required by the S
transformation.
Examples computed by a simulation of this massively parallel
VESPER machine show that there is a very large amount of
parallelism inherent in this hybrid type of logic computation.

Declarative Semantics for Functional Logic Languages
Mario RodriguezArtalejo
The talk addresses the problem of defining a semantics for functional logic
programs, given as sets of constructor based conditional rewrite rules. Our
main aims are:

Declarativeness: the meaning of a program $\cal R$ should reflect
what statements can be deduced from $\cal R$ in a suitable logic.

Compatibility: the semantics should be compatible with established
semantics for relational logic languages and functional languages.

Applicability: the semantics should be helpful to reason about
programs.
In particular, it should help to prove soundness and completeness results for
narrowing calculi.

Adaptability: the semantics should be adaptable to extensions, in
order to account for different computation observables (such as computed answer
substitutions) and/or language enrichments (such as higher-order functions,
type systems and constraint systems).

The Oz Programming Model
Gert Smolka
The Oz Programming Model (OPM) is a concurrent
programming model that subsumes functional and
object-oriented programming as facets of a general
model. This is particularly interesting for
concurrent object-oriented programming, for which
no comprehensive and formal model existed until
now. There is a conservative extension of OPM
providing the problem-solving capabilities of
constraint logic programming. OPM has been
developed together with a concomitant programming
language Oz designed for applications that require
complex symbolic representations, organization
into multiple agents, and soft real-time control.
An efficient, robust, and interactive
implementation of Oz is freely available.

Transparent Distribution in (the functional-logic language) Oz
Seif Haridi and
Peter Van Roy
Distribution is a major unsolved problem in computer science today.
Computers are highly networked, yet programming applications that run
across the network is difficult or impossible. We propose a model for
distributed programming that will let us program networks of computers
as easily as single computers. The model provides transparent
distribution, which means that the same program can run on a
centralized or a distributed system, differing only in performance.
For network efficiency, the model is careful to present a programmer
view that allows full control over communication patterns. The model
is based on a conservative extension to Oz, a concurrent constraint
language developed at DFKI in collaboration with SICS. Oz has three
key ideas that are crucial for the model's success. First, Oz is an
inherently concurrent language, which is necessary since
distributed systems are inherently parallel. Second, Oz is a
higher-order language, which gives mobile procedures that behave in
the same way no matter where they are executed. Third, Oz makes the
distinction between stateful and stateless computation.
Stateful computation is the basis of an object system that models
stationary servers and mobile agents as facets of the same concept.
Stateless computation models the replication and caching of data
between sites.
In this talk we present the basic concepts of Distributed Oz, how
it is built, and how to program in it. We show how functional-language
features (lexical scoping and higher-orderness) and logic-language
features (logic variables, constraints) work together to give a
powerful platform for distributed programming.

Narrowing for Quasi-Reductive Rewrite Systems
Andreas Werner
Narrowing allows us to solve equations in equational theories defined
by rewriting systems. It is also the operational semantics of various
logic and functional programming languages. We present completeness
results for narrowing strategies for quasi-reductive and confluent
conditional term rewrite systems. Rules of such systems may even have
extra variables in the right-hand sides of the rules. Furthermore,
criteria are given that allow us to exclude narrowing steps in the
right-hand side of an equation.

Mode Analysis for Functional Logic Programs
Frank Zartmann
Strictness analysis is a well-known technique used in compilers
for optimisations of lazy functional languages. Almost all
strictness analyses are based on a denotational semantics.
We give an overview of these denotational semantics and show
by an example that they are not suitable for an
analysis of lazy functional logic languages, especially when
we are considering sequential implementations. In these semantics,
a term whose evaluation does not terminate and a term
whose evaluation stops with a failure have the same semantic
value. We propose a semantics which maps those terms to
different semantic objects. We discuss the use of this semantics
for an analysis.