E. Ellen Teske, U. Waterloo, Canada.

An important problem in public-key cryptography is the discrete logarithm problem: given a finite cyclic group generated by a group element g, and a group element h, find x such that g^x = h. An efficient method to solve this equation is Shanks' baby-step giant-step method. In our talk we discuss the algorithm and recent improvements to its worst-case performance. Moreover, we solve the problem of minimizing its average running time when the probability distribution of the unknown x is known.
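As an illustration of the method under discussion, here is a minimal baby-step giant-step sketch for the multiplicative group of a prime field (Python; the group, modulus and example values are illustrative choices, not taken from the talk):

```python
from math import isqrt

def bsgs(g, h, p, n):
    """Baby-step giant-step: find x in [0, n) with g^x = h (mod p),
    where g has order n modulo the prime p.  O(sqrt(n)) time and space."""
    m = isqrt(n) + 1
    # baby steps: store g^j for j = 0 .. m-1
    table = {}
    e = 1
    for j in range(m):
        table.setdefault(e, j)
        e = e * g % p
    # giant steps: test h * (g^-m)^i against the table; then x = i*m + j
    factor = pow(g, -m, p)        # modular inverse (Python 3.8+)
    gamma = h
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * factor % p
    return None

# example: 2 generates (Z/101Z)^*, a cyclic group of order 100
assert bsgs(2, pow(2, 37, 101), 101, 100) == 37
```

The table of baby steps is what the worst-case improvements in the talk trade off against the giant-step work.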

S. Galbraith, U. Bristol.

The XTR public key cryptosystem is a recent invention of Arjen Lenstra and Eric Verheul. Rather than being a new cryptosystem, it is a new representation of a certain subgroup of the multiplicative group of a finite field. The cryptosystem itself is just the usual digital signature standard (DSS). The XTR representation gives improvements in computation time and storage requirements compared with more traditional representations of subgroups of finite fields.

Principles and Standardization into JPEG-LS

G. Seroussi, HP Labs Palo Alto.

LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of JPEG-LS, the new ISO/ITU standard for lossless and near-lossless compression of continuous-tone images. It is conceived as a "low complexity projection" of the universal context modeling paradigm, matching its modeling unit to a simple coding unit. By combining simplicity with the compression potential of context models, the algorithm "enjoys the best of both worlds." It is based on a simple fixed context model, which approaches the capability of the more complex universal techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with an extended family of Golomb-type codes, which are adaptively chosen, and an embedded alphabet extension for coding of low-entropy image regions. LOCO-I attains compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. Moreover, it is within a few percentage points of the best available compression ratios, at a much lower complexity level. We discuss the principles underlying the design of LOCO-I, and its standardization into JPEG-LS.

(LOCO-I is joint work with Marcelo Weinberger, HP Labs, and Guillermo Sapiro, U. of Minnesota, formerly with HP Labs.)
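To illustrate the coding unit mentioned above, here is a minimal sketch of the power-of-two (Golomb-Rice) subfamily of Golomb codes used in JPEG-LS: a unary quotient followed by a k-bit binary remainder. This sketches the code family only, not the JPEG-LS context-modeling or parameter-selection logic:

```python
def rice_encode(n, k):
    """Golomb code with parameter m = 2^k (a Golomb-Rice code):
    the quotient n >> k in unary, a '0' terminator, then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    code = '1' * q + '0'
    if k:
        code += format(r, '0%db' % k)
    return code

def rice_decode(bits, k):
    q = 0
    while bits[q] == '1':          # count the unary quotient
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

# e.g. n = 9 with k = 2: quotient 2 in unary, remainder 01
assert rice_encode(9, 2) == '11001'
```

Small k suits the low-entropy regions the abstract mentions; flatter distributions call for larger k.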

W. Mao, HP Labs Bristol.

A convertible undeniable signature is issued as an undeniable signature, which provides good privacy in its early lifecycle. Later, when necessary, it can be converted into an ordinary signature by a designated party, and thereafter assures good accountability just as an ordinary digital signature does. We propose an RSA-based convertible undeniable signature which, in its undeniable signature verification stage, operates modulo the square of a standard RSA modulus and thereafter becomes a standard RSA-based (convertible) undeniable signature. This solves an open problem in designing efficient RSA-based undeniable signatures using the general form of composite modulus, and achieves an efficiency suitable for practical applications.

(This is joint work with Kenny Paterson, HP Labs)

S.L. Ng, RHBNC.

A secret sharing scheme is a method of dividing a secret among a set of participants in such a way that only certain subsets of participants, called the authorised subsets, can reconstruct the secret. The collection of authorised subsets is called the access structure. In an ideal secret sharing scheme, the access structure is uniquely determined by its minimal sets. In this talk we discuss some recent results in the characterisation of these sets. The relationship between matroids and ideal access structures is well-known and we will be discussing most of the results in terms of matroids.

(This is joint work with Mike Walker, Vodafone)
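The canonical example of an ideal scheme is Shamir's threshold scheme, in which the minimal authorised subsets are exactly the k-element subsets of participants. A minimal sketch (Python; the prime and parameters are illustrative):

```python
import random

P = 2**127 - 1   # a Mersenne prime; all arithmetic is over GF(P)

def share(secret, k, n):
    """Split `secret` into n shares so that any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # share i is (i, f(i)) for the random degree-(k-1) polynomial f
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Each share is a single field element, the defining property of an ideal scheme.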

T. Garefalakis, U. Toronto, Canada

We present a generalization of the Weil pairing that, in special cases, is equivalent to the Tate pairing. The applications of the pairing to cryptography are both constructive and destructive.

P. Nguyen, ENS, France

The hidden number problem was introduced in 1996 by Boneh and Venkatesan to study the bit-security of the Diffie-Hellman key exchange in prime fields. The problem is the following: recover a hidden number \alpha in a prime field F_p such that, for many known random t_i in F_p, approximations of \alpha t_i are known.

Recently, variants of the hidden number problem have been studied to obtain stronger security results, but also attacks on DSA-type signature schemes under the assumption that the random nonce (used at each signature generation) is partially known. In this talk, we will survey results on the hidden number problem and its variants.

L.C. Paulson, U. Cambridge

Secure Electronic Transactions (SET) is possibly the most complex security protocol in use, with over 1000 pages of official documentation. Lessons will be reported from work on its first phase: Cardholder Registration.

(This is joint work with Giampaolo Bella, Fabio Massacci and Piero Tramontano)

R. Jozsa, U. Bristol

Amongst the most remarkable successes of quantum computation are Shor's efficient quantum algorithms for the computational tasks of integer factorisation and the evaluation of discrete logarithms. In this talk we review the essential ingredients of these algorithms and draw out the unifying generalization of the so-called abelian hidden subgroup problem. This involves an unexpectedly harmonious alignment of the formalism of quantum physics with the elegant mathematical theory of group representations and Fourier transforms on finite groups. Finally we consider the non-abelian hidden subgroup problem, mentioning some open questions where future quantum algorithms may be expected to have a substantial impact.

H. Barnum, U. Bristol

I give an introduction to the best-known class of quantum error-correcting and error-detecting codes, stabilizer codes. I then describe a protocol which assures the recipient of a quantum state (or classical message) that it has come from a sender with whom he has previously shared secret key. The protocol works by encoding the state in a secret quantum error-detecting code randomly chosen from a special class associated to an algebraic curve over a finite field. Its security is "information-theoretic" or "unconditional", that is, it is based on shared secret key and not on any assumed limitations on the adversary's computational power. The protocol is more secure than classically possible when used to authenticate classical messages, and as with the best information-theoretically secure classical protocols (such as the Carter-Wegman universal hash-function protocols), security has very weak dependence on message length so that the key used may be much shorter than the message.

S. Galbraith, U. Bristol

This talk will give a brief introduction to the ideas behind elliptic curve cryptography (ECC). ECC is a public key cryptographic technique which can provide increased security for reduced bandwidth, power consumption or computing time. As such it is becoming increasingly important in constrained environments such as PDAs.

In this talk the elliptic curve group law will be discussed. Algorithms for the EC-DLP such as Baby-Step/Giant-Step, Pollard rho, MOV/FR and SmartASS will also be described. A brief introduction to how systems can actually be constructed via Schoof's algorithm will also be given.
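As a flavour of the generic algorithms mentioned above, here is a sketch of Pollard rho for discrete logarithms, written for a prime-order subgroup of a prime field; the elliptic-curve version replaces the group operation accordingly. The three-way walk and the example group are illustrative choices, not from the talk:

```python
def pollard_rho_dlog(g, h, p, n):
    """Pollard rho for g^x = h (mod p), with g of prime order n.
    Expected O(sqrt(n)) group operations and negligible storage."""
    def step(y, a, b):
        # partition the group into three classes to define a 'random' walk,
        # tracking exponents so that y = g^a * h^b throughout
        c = y % 3
        if c == 0:
            return y * y % p, a * 2 % n, b * 2 % n
        if c == 1:
            return y * g % p, (a + 1) % n, b
        return y * h % p, a, (b + 1) % n

    y, a, b = 1, 0, 0          # tortoise
    Y, A, B = 1, 0, 0          # hare moves twice as fast (Floyd cycle-finding)
    for _ in range(n):
        y, a, b = step(y, a, b)
        Y, A, B = step(*step(Y, A, B))
        if y == Y:
            # g^a h^b = g^A h^B  =>  x * (B - b) = a - A  (mod n)
            r = (B - b) % n
            if r == 0:
                return None     # unlucky walk; restart from a random point
            return (a - A) * pow(r, -1, n) % n
    return None
```

Unlike baby-step giant-step, no table is stored, which is why rho is preferred for large EC-DLP instances.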

This talk is a repeat of that given at the LMS Durham Symposium on Computational Number Theory.

N. Smart, U. Bristol

This talk will expand on the previous one and go into more detail from a cryptographic perspective. ECC is a public key cryptographic technique which can provide increased security for reduced bandwidth, power consumption or computing time. As such it is becoming increasingly important in constrained environments such as PDAs.

This talk will look at a variety of protocols which are associated with ECC, focusing on those which are more widely known and deployed. In particular the talk will discuss EC-DH, EC-DSA, MQV and Implicit Certificates. The talk will then go on to discuss implementation issues and standardization efforts.

This talk is a repeat of that given at the LMS Durham Symposium on Computational Number Theory.

J. Jurjens, U. Oxford

We present a notion of secrecy (following the standard approach of Dolev and Yao (1983)) in a formal model which is preserved under refinement and is composable. Both properties are considered useful and are notoriously difficult to achieve for security properties (the ``refinement paradox''). As an example of a use of this notion, we analyse a variant of the Internet security protocol TLS (the current successor of SSL) proposed at INFOCOM'99, uncovering a previously unpublished flaw.

N. Howgrave-Graham, IBM T.J. Watson Research Labs

The talk will start by recapping the history of achieving provable semantic security with RSA-type cryptography in the standard (rather than random oracle) model, i.e. not OAEP. This begins with the notion of probabilistic encryption introduced by Goldwasser and Micali, and ends with the recent cryptosystem proposed by Paillier. We then show two different ways of extending Paillier's work. Firstly, we show that semantic security can be based on a computational rather than a decisional assumption; secondly, we show that a related cryptosystem can be made considerably more efficient under a slightly different (but plausible) assumption. This is joint work with Dario Catalano, Rosario Gennaro and Phong Nguyen.
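For context, Paillier's scheme has the additively homomorphic property that multiplying ciphertexts adds plaintexts. A toy sketch with deliberately small, illustrative primes (real keys are hundreds of digits long):

```python
import random
from math import gcd

def L(u, n):
    return (u - 1) // n

def keygen():
    p, q = 4999, 5003                # toy primes, for illustration only
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    g = n + 1                        # a standard, simple choice of generator
    mu = pow(L(pow(g, lam, n * n), n), -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while gcd(r, n) != 1:            # the blinding factor must be invertible
        r = random.randrange(1, n)
    return pow(g, m, n * n) * pow(r, n, n * n) % (n * n)

def decrypt(pk, sk, c):
    (n, _), (lam, mu) = pk, sk
    return L(pow(c, lam, n * n), n) * mu % n

# the homomorphic property: multiplying ciphertexts adds plaintexts
pk, sk = keygen()
c = encrypt(pk, 1234) * encrypt(pk, 4321) % (pk[0] ** 2)
assert decrypt(pk, sk, c) == 5555
```

The random blinding factor r is what makes the scheme probabilistic in the Goldwasser-Micali sense.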

C. Cocks, Cheltenham

Cryptographic systems in which a user can choose his public key to equal his identity (email address for example) were first proposed by Shamir in 1984. One advantage of such systems is that they can permit secure communication without the need for directories of public keys. The talk will discuss progress in the development of identity based cryptosystems.

D. Gollman, Microsoft Research Cambridge

In the attempt to capture the concept of entity authentication in a precise fashion, most researchers have settled on correspondence properties. This holds for the cryptographic world, see e.g. the work inspired by Bellare and Rogaway, and for protocol analysis using formal methods. I will trace the history of correspondence properties, query whether they properly capture entity authentication at all, and use the example of two International Standards on entity authentication to show that this term does not even have a single 'intuitive' interpretation. I conclude that some correspondence properties could be viewed as the formalisation of one of those intuitive aspects of entity authentication, whilst others do not have any familiar interpretation and only emerged for the sake of verification.

C. Mitchell, RHBNC

This talk is concerned with security issues which arise when a mobile agent is required to sign transactions on behalf of a user. The simplest solution would be for the mobile agent to be given the user's private signature key. However, this is undesirable since the mobile agent may run on platforms over which the user has no direct control. A number of authors have previously considered this problem, resulting in the notion of an undetachable signature. Such a scheme gives the agent only the ability to sign certain types of transaction on behalf of the user, reducing the risk to the user's signature capability. We consider a 'pragmatic' alternative to the use of undetachable signatures, and show that this alternative has potential practical advantages.

The rest of the talk is concerned with a related problem, namely where the user distributes signing capability over a number of agents. This addresses a threat not properly covered by the undetachable signatures approach, since even if an agent has a very restricted signing capability, this capability might still be misused, e.g. to authorise sub-optimal transactions. This situation provides an application for a threshold signature, a means of computing a signature which requires the co-operation of a collection of agents of a specified size (from amongst the set of all agents). We consider a simple alternative to the use of threshold signatures which, in certain cases, may have practical advantages. Finally, we consider how the notions of threshold signatures and undetachable signatures can be combined to give an 'undetachable threshold signature', which allows a specifically restricted signing power to be distributed across multiple agents. We present a practical example of such a scheme.

The work reported in the talk has formed part of the Software Based Systems area of the Core 2 Research Programme of the Virtual Centre of Excellence in Mobile and Personal Communications, Mobile VCE, www.mobilevce.co.uk, whose funding support, including that of the EPSRC, is gratefully acknowledged. More detailed technical reports on this research are available to Industrial Members of Mobile VCE.

The content of the talk is joint work of Niklas Borselius, Aaron Wilson and Chris Mitchell.

P. Gaudry, LIX Ecole Polytechnique

Counting points is a difficult task when one builds a cryptosystem based on the discrete logarithm problem in elliptic curves. Until two years ago, the only algorithms for this were variants of Schoof's algorithm. In 1999, Satoh showed that p-adic methods could be used in this context, improving the theoretical complexity of the problem when the characteristic of the base field is small. Two years later, his method has been improved in several ways and has become of practical use in cryptography.

Firstly, in this talk, we will recall Satoh's algorithm as it was originally designed. Then we will explain how Vercauteren modified it to save memory, and the similarities and differences between his method and another one due to Mestre. We will then give some hints on how Mestre's algorithm can be extended to curves of genus 2. And finally, we will say a few words about a completely different technique, due to Kedlaya, which seems to be well designed for high genus curves.

M. Robshaw, RHBNC

By reflecting on proposals for the Advanced Encryption Standard, we illustrate a wide variety of approaches and techniques in the design of modern algorithms.

J. Yan, U. Cambridge

In Crypto'99, Boneh and Franklin proposed an efficient public key traitor tracing scheme, which was believed to catch all traitors without accusing any innocent users as long as the number of traitors is at most a threshold k. Assuming that the Decision Diffie-Hellman problem is unsolvable in $G_{q}$, Boneh and Franklin proved that a decoder cannot distinguish invalid ciphertexts from valid ones. However, our novel pirate $P_{3}$ manages to make some invalid ciphertexts distinguishable without violating the assumption, and it can also frame innocent user coalitions to fool the tracer. Neither Boneh and Franklin's single-key nor their arbitrary-pirate tracing algorithm presented in the Crypto'99 paper can identify all keys used by $P_{3}$ as claimed; instead, it is possible for both algorithms to catch NONE of the traitors who contribute keys to $P_{3}$.

"Traffic analysis" is suggested as a simple way to defeat some (black-box) traitor tracing schemes in general: when a tracing algorithm is running, it must present certain "traffic pattern" distinguishable from normal traffic, so an intelligent pirate box may detect and exploit the same pattern to disturb/defeat the tracing algorithm.

F. Vercauteren, U. Leuven.

Cryptosystems based on hyperelliptic curves have advantages similar to those of elliptic curve systems. However, until recently it was practically impossible to determine the number of points on a random hyperelliptic curve suitable for cryptography. Thanks to Kedlaya's algorithm it is now possible to generate secure random hyperelliptic curves in polynomial time. In this talk we will explore the vast background of this algorithm and discuss the results obtained with our implementation.

N. Ferguson, Counterpane

Niels, one of the Twofish designers, will present the best known attacks on AES. We will also discuss recent results showing that AES has a very simple algebraic representation. This implies that AES depends on a new and untested computational complexity assumption.

R. Brent, U. Oxford.

In this talk, which describes joint work with Samuli Larvala and Paul Zimmermann, we consider the problem of testing trinomials over GF(2) for irreducibility or primitivity. In particular, we consider trinomials whose degree r is the exponent of a Mersenne prime. We describe a new algorithm for testing such trinomials. The algorithm is significantly faster than the standard algorithm, and has been used to find primitive trinomials of degree 3021277. Previously, the highest degree known was 859433.

For certain r, primitive trinomials of degree r do not exist. We show how to overcome this difficulty by finding trinomials with a primitive factor of degree r and overall degree slightly greater than r.

One of the applications is to pseudo-random number generators. Using the new primitive trinomials, we can obtain uniform random number generators with extremely long period and good statistical properties.
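For background, the standard irreducibility test that such algorithms accelerate can be sketched directly: for prime degree r, the trinomial x^r + x^s + 1 has no roots in GF(2), so it is irreducible iff x^(2^r) = x modulo the trinomial (otherwise some factor would have degree strictly between 1 and r). When r is a Mersenne exponent, 2^r - 1 is prime, so irreducible implies primitive. A naive Python sketch of this classical criterion (the talk's algorithm is substantially faster):

```python
def gf2_mulmod(a, b, f):
    """Multiply two polynomials over GF(2) (bits = coefficients), reduce mod f."""
    deg = f.bit_length() - 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> deg) & 1:   # reduce whenever the degree reaches deg(f)
            a ^= f
    return r

def trinomial_is_irreducible(r, s):
    """Test x^r + x^s + 1 over GF(2) for prime r, via x^(2^r) == x (mod f)."""
    f = (1 << r) | (1 << s) | 1
    t = 2                         # the polynomial x
    for _ in range(r):
        t = gf2_mulmod(t, t, f)   # repeated squaring: t becomes x^(2^r)
    return t == 2
```

Each squaring costs one polynomial multiplication mod f; the speed-ups in the talk come from exploiting the trinomial's sparsity in this reduction.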

A. Lauder, U. Oxford

I will present some results recently obtained with Daqing Wan on counting the number of solutions to equations over finite fields, or in fancy language, computing zeta functions. I will begin by introducing the problem in simple and intuitive language, before presenting our first result: that one may efficiently count the number of solutions to any equation, or system of equations, over a finite field of small characteristic. This general result is not of much direct practical use. Next, I will describe a much better algorithm for counting solutions to ``Artin-Schreier equations''. This yields a practical method of computing the order of the Jacobian of a hyperelliptic curve in characteristic 2, and as such is of interest in cryptography. The talk will not go into too much detail, except towards the end, but rather will concentrate on conveying the basic methods used.

F. Hess, U. Bristol

The divisor class group of curves over finite fields can be used to build cryptosystems based on the discrete logarithm problem. We discuss some constructive and destructive aspects of this. Among the constructive aspects we mention how to actually represent elements of the divisor class group and how to perform the group laws. The destructive aspects will focus on some ways to attack the discrete logarithm problem.

P. Howard, Vodafone

The standardised security features for 3G mobile networks will be reviewed. It will be shown how GSM security features have been enhanced for use in 3G and how new security features have been introduced to combat new threats.

Vincent Rijmen, Cryptomathic

We give an overview of the highlights of the AES selection process and discuss the main design features of the 5 finalists. Subsequently, we discuss in some more detail the design principles of Rijndael.

K. Paterson, RHUL

We consider constructive applications of the Weil and Tate pairings in cryptography. We will review the identity-based encryption scheme of Boneh and Franklin from Crypto 2001. We next look at Joux's tri-partite Diffie-Hellman key exchange protocol from ANTS in 2000. Then we construct several new cryptographic primitives based on the Weil/Tate pairing. We will not assume the audience has any specialist knowledge in number theory.

H. Ashman, U. Nottingham

In this talk we will introduce and discuss the use of non-symbolic fragmentation of data for secure communications. Non-symbolic fragmentation, or NSF, relies on breaking up data into non-symbolic fragments, which are (usually irregularly-sized) chunks whose boundaries do not necessarily coincide with the boundaries of the symbols making up the data. For example, ASCII data is broken up into fragments which may include 8-bit fragments but also include many other sized fragments. Fragments are then separated with a form of path diversity. The secrecy of the transmission relies on the secrecy of one or more of a number of things: the ordering of the fragments, the sizes of the fragments, and the use of path diversity. Once NSF is in place, it can secure many forms of communication, and is particularly useful for exchanging sensitive information, and for commercial transactions. A sample implementation is described with an evaluation of the technology.
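A toy sketch of the fragmentation idea (illustrative only, not Amino's or the speaker's actual scheme; the seed stands in for whatever shared secret fixes the fragment boundaries and ordering):

```python
import random

def fragment(data: bytes, seed: int):
    """Split the bitstream of `data` at irregular, non-byte-aligned
    boundaries.  Fragment sizes are derived from `seed`, which plays
    the role of a shared secret between sender and receiver."""
    bits = ''.join(format(b, '08b') for b in data)
    rng = random.Random(seed)
    frags, pos = [], 0
    while pos < len(bits):
        n = rng.randint(1, 13)          # irregular fragment bit-lengths
        frags.append(bits[pos:pos + n])
        pos += n
    return frags

def reassemble(frags):
    """Concatenate the fragments in order and regroup into 8-bit symbols."""
    bits = ''.join(frags)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

msg = b"secret"
assert reassemble(fragment(msg, seed=42)) == msg
```

An eavesdropper seeing individual fragments (e.g. on diverse paths) cannot align them with symbol boundaries without knowing the sizes and order.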

This talk gives an overview of work done for Amino Communications, Cambridge, UK.

S. Galbraith, RHUL

We describe the modern art of the Tate. Reference will be made to key questions such as identity and the artist, the authenticity of signed works, and more.

D. Soldera, HP Labs Bristol

We review the various methods that pad an asymmetric encryption scheme that is weakly secure into an encryption scheme that is strongly secure. Terms such as "weakly" and "strongly secure" will be explained to give the audience an intuitive understanding.

A. Kent, HP Labs Bristol

Quantum cryptography introduces distinctions between protocols that are classically counter-intuitive. I review recent work on physics-based protocols for bit commitment and related tasks, and explain why, if unconditional security is required:

- quantum bit commitment (without using relativity) is impossible
- relativistic bit commitment is possible
- non-relativistic quantum commitment of a bit string, with some bits guaranteed to remain unreadable, is also possible.

J. Malone-Lee, U. Bristol

We review standard techniques for proving the security of digital signature algorithms in the random oracle model. We show why these techniques cannot be applied to ECDSA. Finally we consider how minor modifications can produce provably secure signature schemes.

M. Sadler, HP Labs Bristol

When we design and implement secure systems and protocols we often have to deal with strongly conflicting requirements. These requirements often arise from different fundamental views about the ownership of, and rights associated with, electronic information - views that the stakeholders will often seek to protect by legal means. We look at these tensions with some illustrative areas:

- privacy and data protection: Europe and the US,
- identity: consumers and corporations,
- key management and escrow: individuals and law enforcement,
- intrusion detection and surveillance: employees and employers,
- strong cryptography and export control: businesses and spies,
- digital rights and copyright: "everyone under 25" and the record industry

J. McKee, RHUL

About thirty years ago, Shanks showed how the theory of binary quadratic forms with integer coefficients and positive (but not square) discriminants relates to the theory of integer factorisation. The most familiar (least unfamiliar?) factoring method based on this theory is his SQUFOF (SQUare FOrm Factorisation) method. This involves travelling around a cycle of forms until a `special' form is found. This talk will recall the key ideas in Shanks' method (no prior knowledge expected), and will discuss how one might attempt to travel around the cycle at greater speed.

C. Walter, Comodo

Recent attacks using differential power analysis (DPA) have shown how good equipment and poor implementation might be applied to break a single use of RSA on a smart card. The attacks are based on recognising the re-use of operands in the standard square-and-multiply, m-ary or sliding windows exponentiation schemes. A new algorithm is presented which avoids such operand re-use and consequently provides much greater resistance to DPA. It is based on generating random addition chains. Unlike the easier process of generating addition/subtraction chains (which have been applied to ECC), the algorithm does not require the computation of an inverse, and so is also applicable to RSA.

The talk will concentrate on two aspects of the algorithm, namely its efficiency and its security against side channel leakage. The former establishes performance akin to that of 4-ary exponentiation. The latter will assume the attacker can distinguish between squares and multiplies, and perhaps recognise re-use of operands. Under such attacks, it still appears to be computationally infeasible to recover the secret exponent.
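For reference, the operand re-use that the new algorithm avoids is visible in plain left-to-right square-and-multiply (an illustrative Python sketch, not the randomised addition-chain algorithm itself):

```python
def modexp_square_and_multiply(m, d, n):
    """Left-to-right square-and-multiply: for each exponent bit, square,
    and multiply by the fixed operand m when the bit is 1.  The repeated
    re-use of the single operand m in every 'multiply' step is exactly
    the pattern that DPA-style attacks can recognise."""
    r = 1
    for bit in bin(d)[2:]:
        r = r * r % n               # square (performed for every bit)
        if bit == '1':
            r = r * m % n           # multiply by the same operand m
    return r

assert modexp_square_and_multiply(5, 117, 1019) == pow(5, 117, 1019)
```

A random addition chain replaces this fixed square/multiply-by-m pattern with a varying sequence of operations on varying operands.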

S. Murphy, RHUL

One difficulty in the cryptanalysis of the AES (Advanced Encryption Standard) is the interplay between operations in the two fields GF(2^8) and GF(2). This talk outlines a new approach that avoids this conflict by showing how AES can be expressed using only simple algebraic operations in GF(2^8). One consequence is that AES encryption can be described by an extremely sparse overdetermined multivariate quadratic system over GF(2^8), whose solution would recover an AES key.
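For background, arithmetic in the field GF(2^8) underlying AES can be sketched as follows; the 0x57 * 0x83 = 0xC1 check is the worked example from FIPS 197:

```python
def gf256_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) as used by AES: polynomials over GF(2),
    reduced modulo x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a               # conditionally add (XOR) the current multiple
        carry = a & 0x80
        a = (a << 1) & 0xFF      # multiply a by x ...
        if carry:
            a ^= 0x1B            # ... reducing modulo the AES polynomial
        b >>= 1
    return p

assert gf256_mul(0x57, 0x83) == 0xC1   # worked example from FIPS 197
```

The XORs here are GF(2) additions; it is this mix of GF(2^8) and bit-level GF(2) structure that the talk's single-field description removes.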

A. Charlesworth, U. Bristol

The legal battles between the Recording Industry Association of America (RIAA) and Napster, and the Motion Picture Association of America (MPAA) and 2600, based around the US Digital Millennium Copyright Act (DMCA), suggest that rightholders are willing to use a combination of DRM and the law to exercise their rights over works in the digital environment. But is this merely a justified response to the problems of managing rights on-line, or does this joint use of new technical and legal innovations hide the destruction of user rights, the stifling of technical innovation, and the elimination of competition?

S. Zaba, HP Labs Bristol

Regulations surrounding the general use of cryptographic technologies have ranged from the unremarkable, through the bureaucratic, and on to the bizarre. With illustrations and anecdotes, we will trip lightly through the history and current practice surrounding the regulation of encryption, digital signatures, and privacy rights, in the UK, US, EU, and other jurisdictions. The seminar will be light on both legal theory and conspiracy theories.

N. Canagarajah, U. Bristol

Digital images such as CCTV footage are increasingly being used as evidence in high-profile court cases. However, manipulation and tampering of images and video have become very easy, and we need tools to detect such manipulations. This talk will outline the use of digital watermarking for image and video authentication. In particular, we will introduce the use of fragile watermarking techniques as a means of establishing whether or not information has been manipulated. We have developed a number of techniques for image and video authentication. I will discuss some of our recent work based on a Bayesian approach for attack detection and classification which offers significant promise in this area. In particular, correlation-based watermark detection algorithms have been developed which allow the degradation of the watermark to be measured after different attacks. I will describe our patented double watermarking approach, in which the difference between the two watermark correlations is used as the core input to a Bayesian classifier. Finally, I will also describe video watermarking systems suitable for MPEG2- and MPEG4-based compression systems which can detect frame reordering, skipping and dropping.

A. Gordon, Microsoft Research Cambridge

(Based on joint work with Riccardo Pucella, Cornell University. Paper to appear at the 2002 ACM Workshop on XML Security; see http://research.microsoft.com/~adg.)

An XML web service is, to a first approximation, an RPC service in which requests and responses are encoded in XML as SOAP envelopes, and transported over HTTP. We consider the problem of authenticating requests and responses at the SOAP-level, rather than relying on transport-level security. We propose a security abstraction, inspired by earlier work on secure RPC, in which the methods exported by a web service are annotated with one of three security levels: none, authenticated, or both authenticated and encrypted. We model our abstraction as an object calculus with primitives for defining and calling web services. We describe the semantics of our object calculus by translating to a lower-level language with primitives for message passing and cryptography. To validate our semantics, we embed correspondence assertions that specify the correct authentication of requests and responses. By appeal to the type theory for cryptographic protocols of Gordon and Jeffrey's Cryptyc, we verify the correspondence assertions simply by typing. Finally, we describe an implementation of our semantics via custom SOAP headers.

C. Boyd, QUT

Provable security for key establishment protocols was first provided by Bellare-Rogaway in 1993 for a simple two-party protocol. Since then many more protocols have been proven secure in the same model. Recently, Bresson-Chevassut-Pointcheval have proven group Diffie-Hellman protocols secure in the Bellare-Rogaway model. One measure of the efficiency of a key establishment protocol is the number of parallel rounds in which it can complete. The protocols proven secure by Bresson et al. require a number of rounds which increases linearly with the number of protocol participants.

In this talk I will aim to achieve the following:

- explain the way that the Bellare-Rogaway model works.
- outline the achievements of Bresson et al. in their proofs.
- describe a new group key agreement protocol with proven security that completes in one round.

J. Thong, DTI

The talk will cover the following points:

- How the export control list works with respect to Category 5 Part 2 (Information Security), covering how the parts of the legislation work together. Understanding this will give people more confidence in trying to understand the rest of the control list.
- Guidance on the interpretation of the General Software Note and the Cryptography Note.
- How the 5P2 control is applied.
- How Open Source fits in, or does not.
- What decontrol notes are and how they are applied.

S. Blackburn, RHUL

I will survey some of the schemes that have been proposed to protect against piracy (of smartcards or decoder boxes in a Pay TV setting; of digital copies of a film) by enabling pirates to be traced (from the smartcards, decoder boxes or copies of a film they produce). I will concentrate on combinatorial schemes with a flavour of error correcting codes about them, such as IPP codes, traceability codes and frameproof codes.

J. Crampton, RHUL

Separation of duty has been an important theoretical area of research in access control for many years. However, it has rarely been implemented in commercial operating systems and applications. We will review the history of and concepts behind separation of duty constraints and role-based access control. We then introduce a simple method for specifying constraints and discuss a potential method of implementation. The implementation model is based on history-based access control techniques that have been used to implement certain kinds of access control policies for mobile code.

I. Shparlinski, Macquarie

A survey will be given of recent results on the hidden number problem introduced by Boneh and Venkatesan in 1996 and its numerous generalizations. We describe several cryptographic and computer science applications and outline some possible directions for further research.

A. Winter, U. Bristol

(joint work with A Nascimento and J Mueller-Quade)

Recently, it was shown that a noisy channel N of fixed parameters can be used to implement information theoretically secure bit commitment between two parties. (Subsequently this was extended to situations where the channel characteristics are only partially known or under partial control of a cheating party.) In this work we demonstrate how n uses of the channel can be used to implement bit-string commitment, and indeed determine the optimal rate BC(N) for this task for every channel. In the talk the ideas of the proof will be explained, and some open questions presented.

W. van Dam, UC Berkeley, MSRI, HP Palo Alto

(joint work with Sean Hallgren and Lawrence Ip)

Homomorphic encryption concerns the situation where one party (Alice) has a secret input x, and another party (Bob) has a secret function F. Alice wants Bob to evaluate the function value F(x), but she does not want Bob to know x or F(x). Bob, on his part, is only willing to tell Alice the specific value F(x), not the general description of his function F. A solution to this problem is possible with the help of a 'homomorphic' encryption scheme E that allows Bob to calculate the encrypted answer E(F(x)), given the encrypted input E(x).

We say that an encryption scheme E is 'algebraically homomorphic' when it is possible for Bob to calculate from the encrypted numbers E(x) and E(y) the sum E(x+y) and the product E(x*y), again without discovering the secret values x and y.
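As a small concrete illustration of the homomorphic property (toy parameters, and only the multiplicative half of the algebraic requirement), textbook RSA already satisfies E(x)·E(y) = E(x·y), so Bob could multiply encrypted values without seeing them:

```python
# Toy textbook-RSA parameters (far too small to be secure):
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def E(x):
    return pow(x, e, n)             # encrypt

def D(c):
    return pow(c, d, n)             # decrypt

x, y = 7, 11
# Multiplying ciphertexts multiplies the hidden plaintexts:
assert D(E(x) * E(y) % n) == (x * y) % n
```

An *algebraically* homomorphic scheme must additionally support addition of hidden values; schemes with both operations over a finite field are what the quantum algorithm in the talk breaks.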

In this talk I will describe an efficient quantum algorithm that breaks any algebraically homomorphic encryption scheme that uses addition and multiplication over finite fields. The best known classical algorithm for the same problem is by Boneh and Lipton, and has sub-exponential time complexity.

T. Takagi, Darmstadt

The elliptic curve cryptosystem (ECC) is well suited to implementation in memory-constrained environments due to its small key size. However, side channel attacks (SCA) can recover the secret key of ECC on such devices if the implementation method is not carefully considered. A cryptographic algorithm on these devices should be efficient while using little memory --- we have to make efforts to optimize the trade-off between efficiency and memory. In this talk, we present several efficient SCA-resistant scalar multiplications for ECC based on the window method.

K. Paterson, RHUL

We introduce the concept of "certificateless public key cryptography" (CL-PKC). In contrast to traditional public key cryptographic systems, CL-PKC does not require the use of certificates to guarantee the authenticity of public keys. It does rely on the use of a trusted third party (TTP) who is in possession of a master key. In these respects, CL-PKC is similar to identity-based public key cryptography (ID-PKC). On the other hand, CL-PKC does not suffer from the key escrow property that seems to be inherent in ID-PKC. Thus CL-PKC can be seen as a model for the use of public key cryptography that is intermediate between traditional certificated PKC and ID-PKC.

We make concrete the concept of CL-PKC by introducing certificateless public key encryption (CL-PKE), signature and key exchange schemes. The schemes are all derived from pairings on elliptic curves. The lack of certificates and the desire to prove the schemes secure in the presence of an adversary who has access to the master key requires the careful development of new security models.

Joint work with Sattam Al-Riyami

M. Stam, Uni Bristol

(joint work with Ronald Cramer)

A black box secret sharing scheme for a threshold access structure is a linear secret sharing scheme that works over any finite abelian group. If the secret is a single group element, the number of elements in a participant's share is the expansion factor. Frankel and Desmedt introduced the problem and presented black box secret sharing schemes with expansion factor $O(n)$, where $n$ is the number of participants. Cramer and Fehr recently showed that the optimal expansion factor lies between $\lceil \log_2 n \rceil$ and $\lfloor \log_2 n \rfloor+2$, allowing only a very small gap. We attempt to close this gap by constructing schemes that meet the aforementioned lower bound. This involves finding polynomials with certain desirable properties. For $n \le 128$ these polynomials can be found; for $n > 128$ their existence still seems open.
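To see why "black box" is natural, note that the easy case, n-out-of-n additive sharing, already uses nothing but the group operation and works over any finite abelian group (modelled here as the integers mod a hypothetical order; the hard case the talk addresses is general t-out-of-n thresholds):

```python
import secrets

def share(secret, n, order):
    """n-out-of-n additive sharing over Z_order.  Only the group
    operation and sampling of random elements are used, which is
    what makes the scheme 'black box'."""
    shares = [secrets.randbelow(order) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % order)  # force shares to sum to secret
    return shares

def reconstruct(shares, order):
    return sum(shares) % order

order = 1009          # illustrative group order
s = 123
sh = share(s, 5, order)
assert reconstruct(sh, order) == s     # all 5 shares recover the secret
assert len(sh) == 5                    # expansion factor 1 in this easy case
```

Each share is a single group element here (expansion factor 1); achieving a threshold structure over an *arbitrary* abelian group is what forces the logarithmic expansion factor discussed above.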

M. Bond, Uni Cambridge

For the last twenty years, computer APIs have been designed that try to enforce security policies on users and the programs they run.

First, the military developed Multi-level Secure (MLS) operating systems to enforce data flow policies on users, whilst corrupt or unwitting virus-infected users tried to break the policy and obtain secret data. APIs for communications equipment were designed to restrict their usage should they fall into the wrong hands.

Second, in the commercial world, banks used cryptography and developed security APIs to counter the threat from corrupt insiders trying to steal critical secrets such as customer PINs. Security APIs now govern access to many more types of critical commercial data: signing keys at certification authorities, SSL master keys and credit-dispensing keys for prepayment electricity meters and mobile phone top-up.

So far, these battlefields have been largely out of public view: military force versus spies, banks versus corrupt employees. However, the third era of security APIs will put one in every desktop PC. Initiatives such as Microsoft's Palladium, 'Digital Rights Management', and 'Trusted Computing' all promise more control over who can do what on a PC.

Competition between companies for control of your PC and its associated revenue streams is now heating up. Soon, rival companies may have to defeat each other's security APIs in order to allow customers to switch. Some products will have to defeat restrictive API policies just to do their job. Users may find that software on their own computers enforces policies on them that they disagree with.

There are so many colliding interests between different legitimate corporations trying to make money and between corporations and users, that a war is brewing - and your desktop PC will be the new digital battlefield.

This talk examines the origins of security APIs in the commercial setting: corporations versus corrupt employees. It describes what has been achieved, what has gone wrong, and then speculates as to the role of security APIs on the new digital battlefield of the home PC.

G. Price, RHUL

(Joint work with Kenneth Paterson)

With the recent acceleration in research into Identity-based Public Key Cryptography (ID-PKC), we consider this an opportune moment to compare and contrast ID-PKC with more traditional Public Key Infrastructures (PKI). Because of the similarity in the nature of both approaches, we aim to identify the distinguishing features of each. In doing so, we highlight the important questions to be asked when weighing up the benefits and drawbacks of each.

F. Stajano, Uni. Cambridge

Ubiquitous computing, over a decade in the making, is now no longer just a fashionable research buzzword but a revolutionary change in the way computing affects our society---a change of similar magnitude and scope as those brought about by the World Wide Web. Think of throw-away embedded computers inside shoes, drink cans and postage stamps.

Security engineers will face specific technical challenges such as how to provide the required cryptographic functionality within the smallest possible gate count and the smallest possible power budget: the chips to be embedded in postage stamps will be the size of a grain of sand and will be powered by the energy radiated by an external scanning device.

The more significant security challenges, however, will be the systemic ones. Authentication, authorization, and even concepts as fundamental as ownership require thorough rethinking. The security challenges of the architecture are much greater than those of the mechanisms. At a higher level still, even goals and policies must be revised. One question we should keep asking is simply "Security for whom?" The owner of a device, for example, is no longer necessarily the party whose interests the device will attempt to safeguard.

Ubiquitous computing is happening and will affect everyone. By itself it will never be "secure" (whatever this means) if not for the dedicated efforts of people like us. We are the ones who can make the difference. So, before focusing on the implementation details, let's have a serious look at the big picture.

M. Parker, Uni. Bergen

The challenge to construct mathematical objects which achieve spectral flatness 'everywhere' is at the forefront of many scientific disciplines, whether explicitly or implicitly. The well-known 'Littlewood Conjecture' is a challenge of this type, as is the construction of boolean functions which simultaneously satisfy a range of cryptographic criteria such as resilience and resistance to linear, differential and/or higher-order attacks. In this talk we focus primarily on the 'L_4 Norm' and related 'Aperiodic Merit Factor' as a measure of spectral flatness. We present an infinite construction based on Legendre sequences which achieves a Merit Factor > 6.34, thereby disproving a longstanding conjecture stating 6.0 as the upper bound. We then present infinite constructions for boolean functions whose multidimensional Aperiodic Merit Factor can be exactly computed via simple recursion. Potential spectral weaknesses in the S-boxes of DES and AES are also highlighted. We then present an infinite construction for multipartite quantum states whose (partial) Quantum Merit Factor can also be exactly computed via simple recursion. Moreover we show that the 'best' Quantum Error Correcting Codes achieve the highest (partial) Quantum Merit Factor. Those (graphical) quantum constructions which achieve a non-vanishing asymptotic Merit Factor also indicate the possibility of quantum macro-structures with non-vanishing Quantum Entanglement.
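For concreteness, the aperiodic merit factor mentioned above is $F = N^2 / (2\sum_{k \ge 1} c_k^2)$, where the $c_k$ are the aperiodic autocorrelations of a $\pm 1$ sequence of length $N$; it can be computed directly, and the length-13 Barker sequence attains the famously high value $169/12 \approx 14.08$ (a standalone illustration of the measure, not of the Legendre-based construction in the talk):

```python
def merit_factor(a):
    """Aperiodic merit factor F = N^2 / (2 * sidelobe energy),
    where the sidelobe energy is the sum of squared out-of-phase
    aperiodic autocorrelations c_k = sum_i a_i * a_{i+k}."""
    N = len(a)
    sidelobe_energy = sum(
        sum(a[i] * a[i + k] for i in range(N - k)) ** 2
        for k in range(1, N)
    )
    return N * N / (2 * sidelobe_energy)

# The length-13 Barker sequence: all sidelobes have magnitude <= 1.
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
assert 14.0 < merit_factor(barker13) < 14.1
```

A high merit factor corresponds to a small $L_4$ norm of the sequence's Fourier transform, i.e. to spectral flatness, which links this measure to the cryptographic criteria in the abstract.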

D. Page, Uni. Bristol

The quest for high-performance software has resulted in languages such as C omitting strict semantic checks on pointer use. This fact has been harnessed by malign users to attack computer systems that run such software. Most popularly characterised by the infamous "buffer overflow", the general method of corrupting or altering intra-program pointer values produces a large number of security vulnerabilities that allow a remote attacker to gain privileged access to a target device. Indeed, such attacks currently account for around a third to a half of all reported security risks.

In this talk we overview a range of pointer manipulation attacks and describe some of the measures, in both hardware and software, used to combat their use in the wild.

L. Chen, HP Labs Bristol

This talk will give an introduction to a signature scheme called DAA (direct anonymous attestation) which is concerned with both security and privacy issues. A DAA signature convinces a verifier that the corresponding message was signed by a qualified signer. It does this without revealing the identity of the signer. The scheme provides a variety of balances between security and privacy. Users are allowed to choose whether or not a particular verifier is able to link different signatures from the same signer for this verifier. The scheme has a security proof in the random oracle model based on the strong RSA assumption and the decision Diffie-Hellman assumption.

The scheme was designed for the Trusted Computing Group (TCG), formerly known as the Trusted Computing Platform Alliance (TCPA). Each TCG platform has a Trusted Platform Module (TPM). The TPM is a tamper-resistant piece of hardware. The scheme offers assurance to an external partner that an attestation came from a genuine TPM without identifying the TPM. The scheme has been used in TCG TPM specification version 1.2. This is available at https://www.trustedcomputinggroup.org.

The content of the talk is joint work of Ernie Brickell and Jan Camenisch.

T. Kindberg, HP Labs Bristol

A key problem in mobile and ubiquitous computing is that of setting up associations between pairs (or larger groups) of devices so that they may communicate securely over a wireless network. For example, suppose I send a photograph over a local wireless connection from my phone or PDA to a printer when visiting my friend's house. How can I conveniently ensure that the neighbours cannot see it? It is particularly important to be able to solve this problem for spontaneous associations, which must not depend on preexisting security values such as certificates, and when the only reliable means of identifying the target device is physical. This talk concerns protocols for constructing and validating secure spontaneous associations. I shall present our first protocols, which were predicated on special hardware, and more recent work to eliminate that requirement. The problem description and many of the techniques should be broadly understandable by a general audience, but the talk will contain technical material.

A. Dent, RHUL

Hybrid encryption has been used for many years, in many different products, to send long messages via public key encryption algorithms. Whilst this simple technique is very intuitive, it is not without a certain level of subtlety and requires thought to be implemented correctly. It also has famously messy security proofs!

This talk will explore some of the ideas behind hybrid encryption, talk about Shoup's KEM-DEM construction for hybrid encryption schemes and suggest some open avenues for research.
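The KEM-DEM split can be sketched in a few lines: a KEM encapsulates a fresh session key under the public key, and a DEM encrypts the long message under that key. The sketch below uses toy RSA parameters and an ad hoc hash-counter keystream as the DEM; everything here is illustrative and wildly insecure, not Shoup's actual construction:

```python
import hashlib, secrets

# Toy RSA key pair for the KEM (illustration only):
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

def kem_encap():
    """Encapsulate: pick a random value, send it under RSA,
    derive the session key by hashing it."""
    r = secrets.randbelow(n - 2) + 2
    return pow(r, e, n), hashlib.sha256(str(r).encode()).digest()

def kem_decap(c):
    r = pow(c, d, n)
    return hashlib.sha256(str(r).encode()).digest()

def dem(key, msg):
    """Toy DEM: XOR against a hash-counter keystream.  XOR makes
    encryption and decryption the same operation."""
    out = bytearray()
    for i, b in enumerate(msg):
        ks = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.append(b ^ ks[0])
    return bytes(out)

c_kem, k = kem_encap()
ct = dem(k, b"a long message")
assert dem(kem_decap(c_kem), ct) == b"a long message"
```

The subtlety the talk alludes to lies precisely in what security notions the KEM and DEM must each satisfy for the composition to be IND-CCA secure.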

A. Clark, Inforenze

So-called "high-tech" crime is becoming increasingly prevalent in today's technology-rich society. Information systems and their content are frequently included in the evidential capture and analysis phase of the investigation of crime. These systems' architectures are changing from small clusters of stand-alone and networked machines to web-based schemes. It is becoming increasingly difficult to find the "smoking gun" in these systems, and we have to cast our investigative net further than before.

This talk will discuss the typical forensic processes used in the investigation of computer assisted crime and the challenges that the forensic investigators face.

S. Moore, Uni. Cambridge

Consumer security devices like smartcards (e.g. for ID cards) may have their secret key information deduced using invasive and noninvasive techniques. Invasive techniques include reverse engineering, circuit modification and physical probing. These techniques are typically a prelude to more rapid noninvasive attacks which are a bigger worry. Noninvasive attacks include power analysis, electromagnetic analysis (EMA) and laser fault induction.

The talk will review work being undertaken in Cambridge to build more secure and yet inexpensive hardware security devices. Design time security evaluation is at the core of our approach since these security metrics allow us to rapidly explore a wide range of architectural and circuit techniques.

C. Kudla, RHUL

This talk will introduce the concept of Concurrent Signatures and concurrent signature protocols. These allow two entities A and B to sign (possibly different) messages M_A and M_B in such a way that, from the point of view of any third party, both signatures are ambiguous with respect to the identity of the signing party until an extra piece of information (the keystone) is released by one of the parties. Upon release of the keystone, all ambiguity is removed, and both signatures become binding to their true signers concurrently.

Concurrent signatures fall just short of providing a full solution to the problem of fair exchange of signatures, but we discuss some applications in which concurrent signatures suffice. In contrast to all previous approaches, our ``near solution'' using concurrent signatures is highly efficient and requires neither a trusted arbitrator nor a high degree of interaction between the parties. We also provide a formal model of security for a concurrent signature scheme, and give a concrete, computationally efficient example of such a signature scheme which is provably secure in our model.

The content of this talk is joint work with Liqun Chen and Kenneth Paterson, and will appear in Eurocrypt 2004.

H. Shacham, U. Stanford

Joint work with Dan Boneh and Xavier Boyen

Group signatures, introduced by Chaum and van Heyst, provide anonymity for signers. Any member of the group can sign messages, but the resulting signature keeps the identity of the signer secret. In some systems there is a third party that can undo the signature anonymity (trace) using a special trapdoor.

Within the last year, three developments have given a new life to group signatures: a new, simple security model; real-world applications; and new short and efficient constructions.

The new security model, introduced by Bellare et al. at Eurocrypt 2003, reduces group signature security to three properties: correctness, full-anonymity, and full-traceability.

New applications for group signatures include the trusted computing initiative (TCPA) and vehicle safety ad-hoc networks (DSRC). In each case, group signatures provide privacy guarantees for tamper-resistant embedded devices.

We describe a new short group signature scheme. Signatures in our scheme are approximately the size of a standard RSA signature with the same security. Security of our group signature (in the random oracle model) is based on the Strong Diffie-Hellman assumption recently used by Boneh and Boyen. We also require the Decision Linear assumption in bilinear groups. We prove security of our system using the security definition for group signatures recently given by Bellare et al.

S. Fehr, CWI Amsterdam

We propose a new adaptively secure version of Feldman's famous (statically secure) VSS scheme. Although Feldman had already addressed adaptive security in the original paper, our solution is advantageous in several ways. Based on our adaptively secure Feldman VSS scheme, we construct the first discrete-log key generation protocol (in the non-erasure model) which is adaptively secure and for which the simulator does not need to rewind the adversary. Due to its non-rewinding feature, it can be proven secure in (a slightly relaxed variant of) Canetti's Universal-Composability framework. As an application, we get (fully) universally-composable threshold Schnorr and DSS signature schemes. These are the first threshold signature schemes that can be proven secure in the Universal-Composability framework.

N. Smart, U. Bristol

Joint work with S.S. Al-Riyami and J. Malone-Lee

Cryptographic workflow is a term coined by Paterson for the use of ID based cryptography in a system where the recipient encrypts to keys whose private components may not yet exist. The idea is that the recipient then has to engage in a ``workflow'' to obtain the keys, by performing some acts, or showing possession of data, to the relevant authorization authorities. The basic idea was introduced by Chen, Harrison, Soldera and Smart (and later generalized by Smart); a related concept was introduced by Li, Du and Boneh in the context of trust negotiation for PKIs.

In this talk we present a full security model for such schemes for arbitrary policies and present a provably secure scheme within the model. In addition our scheme is escrow free, by which we mean that the authorization authorities can determine no information about the actual message being encrypted, a property not holding for prior work in this area.

S.M.Furnell, University of Plymouth

The presentation will discuss the potential of user authentication using the behavioural biometric of keystroke dynamics. This technique has the potential to offer a non-intrusive measure, which can operate transparently in conjunction with the user's normal activities. This not only offers the opportunity to strengthen authentication beyond the traditional password or PIN-based methods, but also to enable it to go beyond a one-off judgement at the start of a user's session.

The talk will explain the principles of user authentication based upon keystroke dynamics, and discuss approaches to the work that have been taken at the University of Plymouth, considering application on both desktop PCs and mobile phone handsets. Particular focus will be given to a long-term user trial in which the effectiveness of the technique was determined from over 5 million keystroke samples, collected from 35 participants (with an optimal false acceptance rate of 4.9% observed at a 0% false rejection rate).
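In the simplest flavour of such a verifier (a sketch with made-up timings, not the Plymouth classifiers), a profile stores the mean and spread of a user's inter-key timings, and a login attempt is accepted when its average deviation from the profile is small:

```python
from statistics import mean, stdev

def build_profile(samples):
    """Per-digraph (mean, stdev) of inter-key timings in ms,
    from several enrolment samples of the same passphrase."""
    cols = list(zip(*samples))
    return [(mean(c), stdev(c)) for c in cols]

def accept(profile, attempt, threshold=1.5):
    """Accept if the mean z-score across digraphs is small."""
    score = mean(abs(t - m) / s for t, (m, s) in zip(attempt, profile))
    return score < threshold

# Hypothetical enrolment data: 4 typings of a 4-key passphrase
# (3 inter-key intervals each).
enrol = [[110, 95, 130], [105, 100, 125], [115, 90, 135], [108, 97, 128]]
profile = build_profile(enrol)

assert accept(profile, [109, 96, 129])       # matches the user's rhythm
assert not accept(profile, [200, 180, 60])   # a very different rhythm
```

Trading off the threshold moves the operating point between false acceptance and false rejection, which is exactly the trade-off the trial figures above quantify.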

C. Cid, RHUL

Since being officially selected as the Advanced Encryption Standard (AES) in 2000, Rijndael has continued to receive great attention and has had its security continuously evaluated by the cryptographic community. The cipher has a simple and elegant structure, and given its careful design criteria, it seems unlikely that its security can be affected by conventional methods of cryptanalysis.

On the other hand, Rijndael also has a highly algebraic structure: the cipher round transformations are based on simple operations on the finite field GF(256). Its selection as the AES has led to a growing interest in the study of algebraic aspects of block ciphers and related analytic techniques. These techniques are algebraic rather than statistical, and involve areas such as computational algebraic geometry, commutative algebra and computer algebra. We will consider a number of algebraic aspects of the AES, and discuss some of the computational and algebraic techniques that can be used in the analysis of the cipher. In particular, we will consider the representation of the cipher as a large, though surprisingly simple, system of multivariate quadratic equations. We examine some of the approaches that have been proposed for solving such systems, and discuss a few experimental results on small versions of the cipher.

K. Martin, RHUL

Cryptographic key assignment schemes can be used to model information flow policies that control access to sensitive data. In such a scheme each user is assigned some secret information and a secret key. They can then use their secret information to deduce the secret keys of any other user "beneath" them with respect to the access control policy.
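The simplest instance of this flavour of scheme (a sketch for a totally ordered chain of levels, not any particular proposal reviewed in the talk) derives each level's key by hashing the key above it, so a user can compute every key beneath them but cannot invert the hash to move upwards:

```python
import hashlib

def derive_chain(top_key, depth):
    """Key for level i+1 is the hash of the key for level i.
    Holding the key for level i therefore yields all keys for
    levels j >= i, while recovering a higher key would require
    inverting SHA-256."""
    keys = [top_key]
    for _ in range(depth - 1):
        keys.append(hashlib.sha256(keys[-1]).digest())
    return keys

keys = derive_chain(b"hypothetical-master-key", 4)

# A level-2 user recomputes the level-3 key from their own key...
assert hashlib.sha256(keys[2]).digest() == keys[3]
# ...and all derived keys are distinct.
assert len(set(keys)) == 4
```

Generalising from chains to partial orders and arbitrary directed graphs, without blowing up the secret information each user must store, is where the design questions discussed below arise.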

We will review a number of proposed cryptographic key assignment schemes, both for partially ordered hierarchies and more general directed graphs. We will look at the factors that determine a "good" key assignment scheme and discuss some open questions regarding optimisation of these schemes.

W. Mao, HP Labs

We present a stateful signcryption scheme which is the first deterministic public-key cryptographic system to achieve semantic security and stronger security properties. Our scheme, having avoided using a random input, is suitable for low-end devices (such as handheld or smartcard-based devices) which lack quality entropy for seeding the generation of secure random numbers.

K. Morozov, Uni Aarhus

Joint work with C. Crepeau and S. Wolf

Oblivious transfer (OT) is a two-party cryptographic primitive of central importance, in particular in two- and multi-party computation. We show that if noise (which is inherently present in any physical communication channel) is taken into account, then OT can be realised in an unconditionally secure way for both parties, i.e., even against dishonest players with unlimited computing power. We give the exact condition under which a general discrete memoryless noisy channel allows for realising OT and show that only ``trivial'' channels, for which OT is obviously impossible to achieve, have to be excluded.

A. Nascimento, U. Tokyo

To all of you who think noise is a bad thing!

We will give an introduction to crypto protocols based on noisy channels. Basic results dealing with secret key agreement, commitment schemes and oblivious transfer will be reviewed, some open problems stated and future research directions outlined.

The talk is intended for a general audience.

P. Tuyls, Philips Research

Based on semantically secure encryption, we introduce robust, fully private and secure biometric key distillation and verification. Our model incorporates an adversary with side information who has access to a database with reference information. Even though our schemes are based on a master key, no master key needs to be stored in biometric sensors. In our scheme it is possible to derive a polynomial number of keys from a single biometric, and we show how to renew keys in a secure and private way without additional interaction with the user. Previous work considers unconditionally secure key distillation, which can at most reach partial privacy and can only yield a small number of keys for each biometric.

J. Liu, Chinese University of Hong Kong

We introduce a new notion called Spontaneous Anonymous Group (SAG) Cryptography. It allows one to spontaneously form a group of users without any collaboration from other parties. It does not require any Trusted Third Party (TTP). Another great advantage is the setup-freeness property. Without any over-powered manager and setup stage, the spontaneous formation of a group is essentially done by specifying a list of public keys associated with the identities of group members. Ring signatures are one kind of cryptographic protocol in SAG cryptography. Our research focuses on SAG cryptography. We have proposed various ring signature schemes with additional properties, such as separability and linkability. Besides these, a new verifiable encryption scheme for a group of users is proposed. For each of these protocols, we further suggest some practical applications or scenarios that they can facilitate.

N. Zhang, Uni Manchester

In this talk, I am going to introduce the ideas and outcomes of some of the security research projects undertaken in the School of Computer Science, the University of Manchester. These projects include the Fair Integrated Data Exchange Services (FIDES) project, aimed at designing fair exchange and non-repudiation security properties to support secure e-commerce transactions; the mobile agent security project, focusing on the design of proxy signature delegation methods; and the FAME-Permis (Flexible Access Middleware Extensions to Permis) project, aimed at achieving authentication assurance level linked fine-grained access control in Grid environments.

K. O'Brien, Uni Bristol.

The ideas of generalised Hamming weights and the weight hierarchy of an error correcting code are motivated by applications in cryptography. The performance of a linear code on the type II wire-tap channel can be completely characterised by its weight hierarchy. Due to the relationship between generalised Hamming weights and the trellis complexity of a code, these concepts are important in coding theory for decoding purposes. We consider generalised weights for lexicographic codes. These codes are generated by applying a greedy algorithm to a lexicographically ordered vector space.

C. Diem, Uni Essen.

The purpose of the talk is to present the following heuristic result.

Let $a, b \in \mathbb{R}$ with $0 < a < b$. Then there exists a randomized algorithm which computes discrete logarithms in $E(\mathbb{F}_{q^n})$, where $q$ is a prime power, $a \log_2(q) \leq n \leq b \log_2(q)$ and $E/\mathbb{F}_{q^n}$ is any elliptic curve over $\mathbb{F}_{q^n}$, in a subexponential time of $L[\frac{3}{4}]$.
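Here $L[\alpha]$ denotes the usual subexponential function in the size of the group; writing $Q = q^n$, one common normalisation (the constant inside the exponent is left implicit, and this reading of the notation is an assumption) is

\[
L_Q[\alpha] = \exp\Bigl( O\bigl( (\log Q)^{\alpha} \, (\log \log Q)^{1-\alpha} \bigr) \Bigr),
\]

so that $\alpha = 0$ corresponds to polynomial time, $\alpha = 1$ to fully exponential time, and $L[\frac{3}{4}]$ lies strictly in between.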

The algorithm is a variant of a recent new index calculus algorithm by Gaudry. The main difference is that we increase the factor base.

P. Grabher, Graz and Bristol

Although identity based cryptography offers many functional advantages over conventional public key alternatives, the computational costs are significantly greater. The main computational task is the computation of a bilinear map, or pairing, over elliptic curves. We present a hardware accelerator for the Tate pairing over fields of characteristic three. We disregard the mathematical background of the pairing, and concentrate on the hardware aspects of our implementation. We discuss how FPGAs can be used for developing our own IP modules and we give a full description of our hardware architecture. Finally we compare the polynomial basis and the normal basis implementation in terms of performance and size.

M. Ryan, U. Birmingham

Electronic voting promises the possibility of a convenient, efficient and secure facility for recording and tallying votes in an election. Recently highlighted inadequacies of implemented systems have demonstrated the importance of formally verifying the underlying voting protocols. The applied pi calculus is a formalism for modelling such protocols, and allows us to verify properties by using automatic tools, and to rely on manual proof techniques for cases that automatic tools are unable to handle. We model a known protocol for elections known as FOO~92 in the applied pi calculus, and we formalise three of its expected properties, namely fairness, eligibility, and privacy. We use the ProVerif tool to prove that the first two properties are satisfied. In the case of the third property, ProVerif is unable to prove it directly, because its ability to prove observational equivalence between processes is not complete. We provide a manual proof of the required equivalence.

M. Benaissa, U. Sheffield

This seminar presentation addresses the hardware implementation issues pertinent to GF(2^m) arithmetic for cryptography both for FPGA and ASIC. Results of an investigation into design for flexibility in cryptography are also presented. Finally, recommendations for possible useful extensions and alternative avenues are discussed in the context of arising technological and security challenges.

P. Ryan, U. Newcastle

In this talk I will describe the Pret a Voter voter-verifiable scheme. This scheme, based on an earlier scheme due to Chaum, has the surprising property that voters can confirm that their vote is accurately included in the tally, whilst at the same time preserving ballot secrecy.

I then describe some extensions such as the use of re-encryption mixes in place of the decryption mixes and adaptations to the remote voting context. I will then discuss some of the weak spots in the current scheme, including:

- The need to trust 'The Authority' that generated the ballot forms to keep the ballot form information secret.
- Vulnerability to chain-voting attacks
- The need to ensure the destruction of the LH strips (candidate list) after the voter selection has been made.

J. McKee, RHUL

This talk presents joint work with Steven Galbraith.

The Weil and Tate pairings are defined for elliptic curves over fields, including finite fields. The definitions extend to elliptic curves over finite commutative rings, such as Z / NZ . We explore some obstacles in adapting the main ideas in pairing-based cryptography to elliptic curves over such rings. In particular, we show that an oracle that computes reduced Tate pairings over Z / NZ can be used to factorise integers.

C. Mitchell, RHUL

The impact of recently proposed padding oracle attacks and other related attacks on CBC mode encryption is considered. For applications where unauthenticated encryption is required, the use of CBC mode is compared with its major rival, namely the stream cipher. It is argued that, where possible, authenticated encryption should be used, and, where this is not possible, a stream cipher would appear to be a superior choice. This raises a major question mark over the future use of CBC, except as part of a more complex mode designed to provide authenticated encryption.
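As a minimal illustration of why padding checks leak (a sketch of the general mechanism, not of the specific attacks discussed), PKCS#7-style padding is valid only for very particular final bytes, so any error behaviour that distinguishes "bad padding" from other failures hands an attacker a one-bit oracle per trial ciphertext:

```python
def pad(msg, block=16):
    """PKCS#7-style padding: append k copies of the byte k."""
    k = block - len(msg) % block
    return msg + bytes([k]) * k

def unpad(data, block=16):
    """Strict padding check.  If the verifier's error response
    reveals whether *this* exception fired, it acts as the
    'padding oracle' exploited against CBC mode."""
    k = data[-1]
    if not 1 <= k <= block or data[-k:] != bytes([k]) * k:
        raise ValueError("bad padding")   # the observable oracle event
    return data[:-k]

assert unpad(pad(b"attack at dawn")) == b"attack at dawn"
try:
    unpad(pad(b"attack at dawn")[:-1] + b"\x00")  # tampered final byte
    assert False, "tampering should have been detected"
except ValueError:
    pass
```

Because CBC decryption lets an attacker flip chosen plaintext bytes by modifying the previous ciphertext block, this one-bit oracle is enough to decrypt byte by byte, which is why the abstract argues for authenticated encryption instead.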

D. Hofheinz, CWI

The idea of simulatable security is introduced, with a view towards protocol composition operations. In particular, a new result is exhibited which states that the standard flavour of simulatable security in the framework of Reactive Simulatability does *not* allow for secure concurrent composition of protocols.

A. Dent, RHUL

Plaintext awareness is a technique used to prove the security of public-key encryption schemes. Loosely speaking, it means that it is impossible for an attacker to produce a valid ciphertext for which he does not know the corresponding decryption. This means that a decryption oracle for a scheme that is plaintext aware is of no help to an attacker, because the ciphertexts that the attacker submits to that oracle are either invalid or the attacker already knows the decryption of the ciphertext. However, the first known formalisation of this technique (by Bellare, Desai, Pointcheval and Rogaway) could only guarantee security in the random oracle model. A second formulation (by Bellare and Palacio) appeared in Asiacrypt 2004, but the authors could not prove that any scheme met the level of plaintext awareness necessary to imply IND-CCA2 security.

This talk will review the Bellare-Palacio notion of plaintext awareness and demonstrate that it is possible for a scheme (in this case, the Cramer-Shoup scheme) to meet the level of plaintext awareness that implies IND-CCA2 security. While this is not a practical result, the Cramer-Shoup scheme being famously secure in the standard model anyway, it does demonstrate that it may be possible to produce standard model proofs using this new technique.

D. Catalano, ENS Paris

(Non-interactive) *Trapdoor Mercurial Commitments* (TMCs) were
introduced by Chase et al. [CHL05] and form a key building block
for constructing zero-knowledge sets (introduced by Micali, Rabin and
Kilian [MRK03]). TMCs are quite similar to, and certainly imply,
ordinary (non-interactive) *trapdoor commitments* (TCs).

Unlike TCs, however, they allow for some additional freedom in the
way the message is opened: informally, by allowing one to claim that
``if this commitment can be opened at all, then it would open to this
message''. Prior to this work, it was not clear whether this addition
is critical, since all the constructions of TMCs presented in
[CHL05] and [MRK03] used strictly stronger assumptions than
TCs. We resolve this question by providing
simple constructions of TMCs from any trapdoor bit commitment
scheme. Moreover, by plugging in various trapdoor bit commitment
schemes, we get, in the trusted parameters (TP) model, all the
efficient constructions from [MRK03] and [CHL05], as well as
several immediate new (either generic or efficient) constructions. In
particular, we get a construction of TMCs *from any one-way
function* in the TP model, and, by using a special flavor of TCs,
called *hybrid* TCs [CV05], even in the (weaker) shared
random string (SRS) model.

Our results imply that

- mercurial commitments can be viewed as surprisingly simple variations of trapdoor commitments
- the existence of non-interactive zero-knowledge sets is equivalent to the existence of collision-resistant hash functions.
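For readers unfamiliar with ordinary trapdoor commitments, the classical example is Pedersen's scheme, sketched below with toy parameters (the tiny primes, generator and trapdoor value are illustrative assumptions; a real instantiation uses cryptographically large groups). The trapdoor holder can reopen any commitment to any message, which is exactly the equivocation ability that mercurial commitments refine:

```python
# Tiny Pedersen-style trapdoor commitment. Parameters are toy-sized:
# p = 2q + 1, and g generates the order-q subgroup of squares mod p.
p, q = 23, 11
g = 4
t = 3                  # trapdoor: the discrete log of h to base g
h = pow(g, t, p)

def commit(m, r):
    # Commitment to message m with randomness r (both mod q).
    return (pow(g, m, p) * pow(h, r, p)) % p

def equivocate(m, r, m_new):
    # With the trapdoor t, compute randomness that reopens the same
    # commitment to any other message m_new.
    return (r + (m - m_new) * pow(t, -1, q)) % q
```

Here `commit(5, 7)` and `commit(9, equivocate(5, 7, 9))` yield the same group element, so the trapdoor holder can claim either opening.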

W. Mao, HP Labs China

Grid computing is a new direction in distributed computing, featuring large-scale, dynamic and service-oriented sharing of resources. Grid security is a key component of grid computing. However, in all grid architectures, which mainly follow Globus, a grid middleware standard, grid security is treated as a plain, conventional application of the standard PKI authentication framework, and makes little impact on enhancing the grid's distinctive feature of advanced resource sharing.

A useful security service from the Trusted Computing Group (TCG) is behaviour conformation, which can confine the users of a computing platform, including its owner, to behaviour desired by a remote user. This security service is practically achievable because a platform has a tamper-resistant hardware module, the Trusted Platform Module (TPM), to set the desired behaviour of the platform users. We consider that a TPM, even sparsely deployed in an initial stage, can be a shared security resource enabling strong grid security solutions. In this talk I will report the work of Project Daonity to advance grid security with the TPM as a shared security resource. This work leads a grid security standardisation track under the Global Grid Forum, the grid standards organisation.

D. Chadwick, University of Kent

This talk will describe the PERMIS authorisation system, which has implemented a role based access control system, using an application independent policy decision point (PDP) and an X.509 credential validation service (CVS). Both components are driven by XML policies that specify the trust relationships between the resource owner and the attribute authorities, and the rules for which roles are required to perform which actions. The system also includes a Delegation Service that allows role holders to dynamically issue attribute certificates to other users in accordance with a delegation policy. Because of the flexibility of the design, the same CVS and PDP can be used for the delegation service, as is used for protecting grid resources, Shibboleth resources and in fact any type of resource.

M.-J. O. Saarinen, RHUL

After a brief introduction to hash function concepts and basic collision-finding techniques, I will discuss attacks against the recently circulated preliminary version of the "Bristol" LASH hash function. My best attack has O(2^(n/4)) complexity, compared to the standard O(2^(n/2)) birthday attack. I will also discuss its relationship to the spectacular 2005 breaks of MD5 and SHA-1 (Xiaoyun Wang, Hongbo Yu, and Yiqun Lisa Yin) and what conclusions can be drawn from these.
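The O(2^(n/2)) baseline mentioned above is the generic birthday attack, which needs no structural weakness at all. A self-contained illustration on a toy 16-bit hash (SHA-256 truncated to two bytes, an assumption made solely to keep the search fast) finds a collision after roughly 2^8 trials:

```python
import hashlib

def truncated_hash(msg, nbytes=2):
    # Toy 16-bit hash: SHA-256 truncated to nbytes bytes.
    return hashlib.sha256(msg).digest()[:nbytes]

def birthday_collision(nbytes=2):
    # Generic birthday search: hash distinct messages until two of them
    # collide. Expected cost is about 2^(8 * nbytes / 2) evaluations.
    seen = {}
    i = 0
    while True:
        m = str(i).encode()
        h = truncated_hash(m, nbytes)
        if h in seen:
            return seen[h], m
        seen[h] = m
        i += 1
```

An attack of complexity O(2^(n/4)), as described in the talk, must exploit the structure of the function, since no generic attack can beat the birthday bound.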

M. Cheng, U. Middlesex

This talk will give a snapshot of identity-based two-party key agreement from pairings. Since 2000, the use of bilinear pairings on elliptic curves in cryptography has been an intensively studied area. As an important primitive, many identity-based key agreements have been proposed. In this talk, a few two-party key agreements, including the Smart, CK, MB and Wang protocols, together with some of the latest proposals, will be discussed.

S. Wehner, CWI

Non-local boxes are hypothetical ``machines'' that give rise to superstrong non-local correlations, leading to a stronger violation of Bell/CHSH inequalities than is possible within the framework of quantum mechanics. We show how non-local boxes can be used to perform any two-party secure computation. We first construct a protocol for bit commitment and then show how to achieve oblivious transfer using non-local boxes. Both have been shown to be impossible using quantum mechanics alone.

Joint work with H. Buhrman, M. Christandl, F. Unger and A. Winter.
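The defining behaviour of a non-local (Popescu-Rohrlich) box is easy to state: on inputs x and y, the two parties receive individually uniform bits a and b satisfying a XOR b = x AND y. A minimal sketch of the ideal functionality, modelled with a trusted sampler since no local physical device can realise it:

```python
import random

def pr_box(x, y):
    # Ideal non-local box functionality: each output is individually
    # uniform, but the XOR of the outputs equals the AND of the inputs.
    a = random.randint(0, 1)
    b = a ^ (x & y)
    return a, b
```

This correlation wins the CHSH game with probability 1, beyond the quantum bound of roughly 0.85, which is what makes the primitive "superstrong".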

D.H. Phan, UCL

In Eurocrypt 2005, with Chabanne and Pointcheval, we introduced a property for traitor tracing schemes, called *public traceability*, which effectively makes tracing a public operation, allowing anyone with access to the public key to execute the tracing algorithm and identify a traitor. Our proposed scheme works well for two users. However, the multi-user system achieves only local public traceability.

In this talk, we give a solution to this problem by proposing a
*generic construction for a hybrid traitor tracing scheme achieving
full public traceability*. The scheme has blackbox tracing and
*always* correctly identifies a traitor. Security of the scheme is in
a related CCA model, which will be shown to be a natural choice in the
multi-receiver case. The hybrid structure makes the system efficient:
the ciphertext length is the plaintext length plus a header, and
computation efficiency is effectively that of the symmetric key
encryption system used for the encryption of content. Headers in
ciphertexts can be made logarithmic in the size of the user group by
using codes with the identifiable parent property, which preserves the
important property of full public traceability.

T. Shrimpton, Portland State University

We will take a closer look at one of cryptography's least flashy, yet most widely used objects, the hash function. The cryptographic community has been rocked over the last two years by a series of attacks on MD5 (broken), SHA-0 (broken), SHA-1 (teetering) and others. Now seems like a good time to revisit exactly what these objects are, what they are supposed to do for us, and how we go about constructing them. We will do this, giving special attention to blockcipher-based hash functions, their security models and their performance.

E. Kiltz, CWI

One of the celebrated applications of Identity-Based Encryption (IBE) is the Canetti, Halevi, and Katz (CHK) transformation from any (selective-identity secure) IBE scheme into a full chosen-ciphertext secure encryption scheme. Since such IBE schemes in the standard model are known from previous work this immediately provides new chosen-ciphertext secure encryption schemes in the standard model. This paper revisits the notion of Tag-Based Encryption (TBE) and provides security definitions for the selective-tag case. Even though TBE schemes belong to a more general class of cryptographic schemes than IBE, we observe that (selective-tag secure) TBE is a sufficient primitive for the CHK transformation and therefore implies chosen-ciphertext secure encryption.

We exemplify the usefulness of our TBE approach by constructing a TBE scheme based on the Hashed Diffie-Hellman assumption. In contrast to all known IBE schemes our TBE construction does not deploy pairings. The resulting scheme, when viewed as a key-encapsulation mechanism, leads to the most efficient chosen-ciphertext secure encapsulation scheme in the standard model.

C. Paar, Ruhr Univ Bochum

For many years, the cryptographic engineering community has worked on the problem of implementing symmetric and asymmetric ciphers as fast as possible. Typical problems were RSA accelerators or high-speed DES engines. However, the advent of ubiquitous computing has led to many pervasive devices which are extremely cost-constrained. An extreme example is RFID tags, but there are numerous other pervasive applications -- ranging from automotive parts through sensor networks to consumer products such as computer gadgets -- which are extremely cost-sensitive and for which cryptographic solutions have to be provided with an optimised cost-performance ratio.

In this talk, I will present our research over the last few years in the area of light-weight cryptography. For asymmetric solutions, efficient hardware implementations of elliptic curve cryptosystems will be presented. It will be shown that a small instruction set extension can enable a one-euro, 8-bit microcontroller to perform full-size ECC operations at very competitive performance levels. We will also describe a tiny ECC hardware engine which can provide elliptic curve operations in as few as 10,000 gates. In the symmetric case, I will present the smallest known DES hardware implementation. Our light-weight DES variant, DESL, further reduces the area requirements to 1800 gates. For higher security levels, both implementations can be equipped with key whitening at little extra cost.

G. Neven, K.U. Leuven

Nearly all modern hash functions are constructed by iterating a compression function. These compression functions are assumed to be secure relative to some security notion, like collision resistance, and we say the notion is preserved by the construction if the iterated hash is also secure relative to the notion. At FSE'04, Rogaway and Shrimpton formalized seven security notions for hash functions: collision resistance (Coll), three variants of second-preimage resistance (Sec, aSec, eSec), and three variants of preimage resistance (Pre, aPre, ePre).

After investigation of a large number of existing constructions (including the widely-used strengthened Merkle-Damgard construction), we found that none of them preserves all seven notions. In particular, all fail to preserve (at least) aSec and aPre. We propose a new iterated hash function, the Randomized Mask-then-Compress (RMC) construction, and prove that it is the first to preserve all of the notions. It does this using a salt, and (quite controversially) a small-input random oracle that is called a logarithmic number of times in the number of message blocks.
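For reference, the strengthened Merkle-Damgard template that the abstract refers to iterates a compression function over padded message blocks, appending the message length. The compression function below is a stand-in built from SHA-256, an assumption made purely to keep the sketch runnable:

```python
import hashlib
import struct

BLOCK = 64

def compress(chain, block):
    # Stand-in compression function (illustrative assumption): a real
    # construction would use e.g. the SHA-256 compression function directly.
    return hashlib.sha256(chain + block).digest()

def strengthened_md(message, iv=b"\x00" * 32):
    # Strengthened Merkle-Damgard: append a 1-bit, zero-pad, append the
    # bit length of the message, then iterate the compression function.
    msg = message + b"\x80"
    msg += b"\x00" * ((-len(msg) - 8) % BLOCK)
    msg += struct.pack(">Q", 8 * len(message))
    chain = iv
    for i in range(0, len(msg), BLOCK):
        chain = compress(chain, msg[i:i + BLOCK])
    return chain
```

The RMC construction modifies this deterministic template with a salt and a small-input random oracle in order to preserve all seven notions.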

S. Kunz-Jacques, ENS Paris

MQV is a well-known authenticated key-exchange algorithm which is now a de facto standard. Its lack of security proof has been subject to controversy, especially after a proven variant of MQV, HMQV, was presented at Crypto'05 by H. Krawczyk. After a presentation of the Canetti-Krawczyk security model, we will show that a protocol closer to MQV than HMQV can be proven under reasonable hypotheses; however the algorithmic assumption underlying the proof has to be tailored to MQV. We will also review the security implications of ephemeral data leakage for both MQV and HMQV, and present new variants of MQV that have better properties in this area, which can ease actual implementations of these protocols. The results exposed were originally presented at SCN'06.

H. Lipmaa, UCL

The primary task of data-mining is to develop models about aggregated data, for example about the habits of Internet users, about loyal customers, etc. The main question of privacy-preserving data-mining (PPDM) is, can we develop accurate models without access to precise information in individual data records? The latter question has proven to be difficult to solve. One can divide the techniques of PPDM in two large groups: the randomization approach (related to the well-known statistical Randomized Response Technique), and the cryptographic approach.

In this tutorial, we will give an overview of the cryptographic approach that has many benefits but also several drawbacks. Most importantly, data mining tasks routinely deal with huge amounts of data. Mining this data, even without any privacy concerns, is non-trivial. Cryptography, even on much smaller amounts of data, is also non-trivial. Mixing the two usually results in PPDM solutions that are very complicated, and --- most importantly --- unlikely to be practical. Given this, it is important, in a collaboration between two communities, to come up with security definitions for PPDM that potentially have efficient cryptographic implementations.

D. Bernstein, U. Illinois at Chicago

Variants of the Rabin-Williams public-key signature system have, for twenty-five years, held the speed records for signature verification. Are these systems secure?

There are many other signature systems of RSA/Rabin type. One can break each system by factoring the signer's public key $n$ or by breaking the system's hash function $H$. Are there other attacks? This is not an idle concern: some RSA-type systems have been broken by embarrassing attacks that (1) are much faster than known methods to factor $n$ and (2) work for every function $H$ (or a large fraction of choices of $H$), given oracle access to $H$.

Some systems have been proven immune to embarrassing attacks. For example, in the 1993 paper that popularized this line of work (along with the terminology ``secure in the random-oracle model,'' which seems to have engendered considerable confusion), Bellare and Rogaway proved the following security property for the traditional ``FDH'' form of exponent-$e$ RSA: every $H$-generic attack on RSA-FDH can be converted (without serious loss of efficiency) into an algorithm to compute $e$th roots modulo $n$.

Unfortunately, a closer look reveals that most of these proofs merely limit the embarrassment, without actually ruling it out. For example, the Bellare-Rogaway root-finding algorithm has only a $1/h$ chance of success, where $h$ is the number of hash values seen by the FDH attack. Coron introduced a better algorithm having a $1/s$ chance of success, where $s$ is the number of signatures seen by the FDH attack; but $s$ can still be quite large.

Randomized signatures, in which long random strings are prepended to messages before the messages are signed, allow much tighter proofs. For example, every $H$-generic attack on randomized exponent-$e$ RSA (or Rabin's 1979 signature system) can be converted into an algorithm to compute $e$th roots modulo $n$ (or to factor $n$) with a good chance of success. But generating random strings takes time, and transmitting the strings consumes bandwidth. Can we do better?

A 2002 theorem of Coron is widely interpreted as saying that FDH is stuck at $1/s$: tight proofs require randomization. The randomized ``RSA-PSS'' and ``PRab'' systems have tight security proofs and work around the bandwidth problem, but they still take time to generate long random strings, and they are incompatible with the fastest known verification techniques. A tight security proof by Katz and Wang allows much shorter random strings for some RSA variants but breaks down for Rabin-Williams.

In my talk I'll explain three state-of-the-art variants of the Rabin-Williams public-key signature system. All three variants have tight security proofs, and all three provide extremely fast signature verification. What's most surprising is the FDH variant, which has a tight security proof despite hashing unrandomized messages; in the Rabin-Williams context, a minor technical assumption in Coron's theorem turns out to be a major loophole.

O. Obi, U. Sussex

The talk will begin with a description of the Paillier cryptosystem and its attractions, and then move on to the generalisations developed by Catalano et al., and by Damgard and Jurik.

We then present a generalised Paillier cryptosystem with a flexible randomiser exponent. We show that this extension retains the very desirable homomorphic property which was lost in the Catalano et al. generalisation. We achieve this by employing a generator which is more natural than those used in other works. A decryption expression extending the one introduced by Paillier is also presented.

This is joint work with Falah Ali and Elias Stipidis.
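As background, here is the textbook Paillier cryptosystem with the natural generator g = 1 + n, in toy parameters (p = 5, q = 7 are illustrative assumptions; real keys use large primes). Multiplying ciphertexts adds plaintexts, the homomorphic property the talk is concerned with preserving:

```python
from math import gcd

# Toy Paillier key (illustration only; real keys use large primes).
p_, q_ = 5, 7
n = p_ * q_
n2 = n * n
g = n + 1                                          # the natural generator 1 + n
lam = (p_ - 1) * (q_ - 1) // gcd(p_ - 1, q_ - 1)   # lcm(p - 1, q - 1)

def L(u):
    # Paillier's L function: L(u) = (u - 1) / n on the subgroup where it divides.
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m, r):
    # Enc(m; r) = g^m * r^n mod n^2, with r coprime to n.
    assert gcd(r, n) == 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    # Dec(c) = L(c^lam mod n^2) * mu mod n.
    return (L(pow(c, lam, n2)) * mu) % n
```

For example, `dec((enc(3, 2) * enc(4, 3)) % n2)` recovers 3 + 4 = 7, since multiplying ciphertexts adds the underlying plaintexts modulo n.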

J. Wullschleger, ETH Zurich

Oblivious transfer (OT) is an important primitive in cryptography, or more precisely in two- and multi-party computation, due to its universality. Unfortunately, OT cannot be achieved in an unconditionally secure way for both parties from scratch. Therefore, it is a natural question which information-theoretic primitives or computational assumptions OT can be based on.

First, we show how to realize unconditionally secure OT from a weak variant of OT called universal OT, for which a malicious receiver can virtually obtain any possible information he wants, as long as he does not get all the information. Our result, which improves the efficiency of the reduction by a factor of two, is based on a novel distributed leftover hash lemma which is of independent interest. We then carry over our results to the quantum setting and show how they can be applied in the bounded quantum storage model.

Second, we give conditions for when OT can be obtained from a faulty variant of OT called weak OT, for which it can occur that either party obtains too much information, or the result is incorrect. These bounds and protocols, which correct previous results by Damgard et al., are of central interest since in most known realizations of OT from weak primitives, such as noisy channels, a weak OT is constructed first. Here, we carry over our results to the computational setting and show how a computationally secure weak OT can be strengthened.

U. Aickelin and J. Twycross, U. Nottingham

Artificial Immune Systems (AIS) are a collection of algorithms inspired by aspects of the human immune system. Arguably the most obvious application of AIS was to detect intrusions in computer networks. If we can fight viruses with our immune systems, then surely we can fight computer viruses with a computer immune system? Early AIS approaches used a technique called Negative Selection, which did not provide a sufficient level of protection against intrusions. Problems were found with scalability, and large numbers of false positives (false alarms) were generated. In 2003, Aickelin and his colleagues proposed that the reason for the poor performance of AIS is that Negative Selection is based on outdated concepts in immunology. It was proposed that the incorporation of the Danger Theory, a modern principle of immunology, could improve the performance of AIS when applied to intrusion detection. The result of the proposal is the Danger Project - an interdisciplinary collaboration between computer security experts, computer scientists, and 'wet-lab' immunologists. Recently, Uwe has investigated how OR models such as set covering problems can help direct future research. A summary of the above is the focus of this part of the seminar.

Jamie will then present a novel biologically-inspired algorithm called tlr which has been developed as part of the Danger Project. tlr is based on aspects of the interaction between the biological innate and adaptive immune systems. It models two types of immune system cells, dendritic cells and T cells, and is implemented as a multiagent system. When applied to a computer security problem, process anomaly detection, tlr is able to detect anomalies in application behaviour with a high true positive rate and a low false positive rate. It shows how the incorporation of innate immune system mechanisms into traditional artificial immune system algorithms can result in significant performance increases.

K. Paterson, RHUL

We describe new attacks which break any RFC-compliant implementation of IPsec making use of encryption-only ESP in tunnel mode. The new attacks are both efficient and realistic: they are ciphertext-only and need only the capability to eavesdrop on ESP-encrypted traffic and to inject traffic into the network. We report on our experiences in applying the attacks to a variety of implementations of IPsec.

Joint work with Jean Paul Degabriele (currently with Hewlett-Packard Labs).

T. Ristenpart, UCSD

Cryptographic hash functions are a fundamental primitive, utilized in numerous security applications. With the recent attacks against many widely deployed hash functions (e.g., MD5, SHA-1), the search is on for new, secure hash functions to re-establish security of all the various protocols that rely on them. At the same time, hash functions like SHA-1 have seen expanded usage into applications that demand security properties not handled by traditional design approaches. In this talk, I'll discuss the goal of building a single hash function simultaneously secure for multiple applications, the challenges and pitfalls of this goal, and two new approaches we've recently proposed for building hash functions secure in a broad sense.

(Joint work with Mihir Bellare, Thomas Shrimpton)

D. Hofheinz, CWI

We put forward a new paradigm for building hybrid encryption schemes from *constrained chosen-ciphertext secure* (CCCA) key-encapsulation mechanisms (KEMs) plus authenticated symmetric encryption. Constrained chosen-ciphertext security is a new security notion for KEMs that we propose. It has less demanding security requirements than standard CCA security (since it requires the adversary to have a certain plaintext knowledge when making a decapsulation query), yet we can prove that it is sufficient for secure hybrid encryption.

Our notion is not only useful to express the Kurosawa-Desmedt public-key encryption scheme and its generalizations to hash-proof systems in an abstract KEM/DEM security framework. It also has a very constructive appeal, which we demonstrate with a new encryption scheme whose security relies on a class of intractability assumptions that we show (in the generic group model) to be strictly weaker than the Decision Diffie-Hellman (DDH) assumption. This appears to be the first practical public-key encryption scheme in the literature from an algebraic assumption strictly weaker than DDH.

(Joint work with Eike Kiltz)
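The KEM/DEM composition pattern in question can be sketched generically. Below, the KEM is a trivially insecure placeholder (the encapsulation is the key itself, purely to keep the sketch self-contained); the DEM is an encrypt-then-MAC style authenticated symmetric encryption. The point is the shape of the composition, in which the DEM's rejection of forged ciphertexts does part of the chosen-ciphertext security work:

```python
import hashlib
import hmac
import os

def toy_kem():
    # Placeholder KEM (illustrative assumption): returns (key, encapsulation).
    # In a real hybrid scheme this is the public-key component.
    k = os.urandom(32)
    return k, k                 # identity "encapsulation" -- NOT secure

def toy_decap(encap):
    return encap

def dem_enc(key, msg):
    # Encrypt-then-MAC DEM: hash-derived one-time pad, then an HMAC tag.
    assert len(msg) <= 32       # keep the sketch to one pad block
    pad = hashlib.sha256(key + b"enc").digest()
    ct = bytes(m ^ p for m, p in zip(msg, pad))
    tag = hmac.new(key, ct, hashlib.sha256).digest()
    return ct, tag

def dem_dec(key, ct, tag):
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
        raise ValueError("authentication failure")  # CCA-style rejection
    pad = hashlib.sha256(key + b"enc").digest()
    return bytes(c ^ p for c, p in zip(ct, pad))

def hybrid_enc(msg):
    key, encap = toy_kem()
    return encap, dem_enc(key, msg)

def hybrid_dec(encap, ct, tag):
    return dem_dec(toy_decap(encap), ct, tag)
```

In the Kurosawa-Desmedt scheme the placeholder KEM is replaced by a hash-proof-system-based encapsulation that is CCCA but not fully CCA secure, which this style of composition tolerates.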

N. Courtois, UCL

In this talk we will give an overview of algebraic attacks on LFSR-based stream ciphers, present several different formulations of the attack, and explain which is the most powerful. Notions of algebraic immunity for Boolean functions will be discussed. Finally, we will explain some more recent algebraic attacks.
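The root cause these attacks exploit is linearity: every keystream bit of a bare LFSR is a linear (GF(2)) function of the initial state, so observed bits yield linear, and after nonlinear filtering still low-degree, equations in the state. A minimal Fibonacci-style LFSR sketch (the tap positions and states are illustrative assumptions):

```python
def lfsr_stream(state, taps, nbits):
    # state: list of 0/1 bits; taps: stage indices XORed to form the feedback.
    out = []
    s = list(state)
    for _ in range(nbits):
        out.append(s[-1])          # output the last stage
        fb = 0
        for t in taps:
            fb ^= s[t]             # linear feedback over GF(2)
        s = [fb] + s[:-1]          # shift, inserting the feedback at the front
    return out
```

Because the state-update and output maps are linear, the keystream of the XOR of two states is the XOR of their keystreams; real designs add nonlinear filtering or combining, and algebraic attacks work by driving the resulting equations back down to low degree.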

L. Nguyen, Oxford

There has recently been a lot of research in the area of Pervasive Computing. One of the main challenges is how to establish secure communication over an untrusted high-bandwidth network (the Internet) without any initial knowledge or a Public Key Infrastructure. An approach studied by a number of researchers is to build security through human work, creating a low-bandwidth empirical (or authentication) channel where the transmitted information is authentic and cannot be faked or modified. An example is a conversation between the users of the systems. In this talk, we give a brief analytical survey of authentication protocols of this type, and concentrate on our contribution to this area: our proposed group protocols and a new cryptographic primitive termed a Digest function, which uniformly and efficiently digests/compresses many kilobytes of information into a short string output (perhaps 16 bits).

We start with non-interactive schemes, for example the one proposed by Gehrmann, Mitchell and Nyberg, and point out that this scheme does not optimise the human work, because of the short length of the key input to the Digest function, which also leads to an off-line attack. We then move on to analyse a number of strategies used to build interactive pair-wise and group protocols that minimise the human work relative to the amount of security obtained, as well as optimising the computational processing. Many of the protocols are based on the human comparison of a single short authentication string, transmitted over the empirical channel, that is the output of the Digest function. We finish the talk with our suggested implementation of the Digest function, based on matrix multiplication and pseudo-random number generation, as well as some theoretical results about the digest.
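The talk's actual Digest construction is not reproduced here; the sketch below is only a generic illustration of the matrix-multiplication idea, with 16-bit "matrix rows" derived pseudo-randomly from a key (the use of SHA-256 as the row generator is an assumption of this sketch). The output is the GF(2) matrix-vector product of the key-derived rows with the message bits:

```python
import hashlib

def digest16(key, message):
    # 16-bit matrix-style digest: XOR together a pseudo-random 16-bit row
    # for every message bit that is set. Illustration only.
    bits = []
    for byte in message:
        for i in range(8):
            bits.append((byte >> i) & 1)
    out = 0
    for j, b in enumerate(bits):
        if b:
            row = hashlib.sha256(key + j.to_bytes(4, "big")).digest()
            out ^= int.from_bytes(row[:2], "big")   # XOR in a 16-bit row
    return out
```

As with any matrix-based hash, for a fixed key the output is linear in the message bits, a property worth keeping in mind when analysing such constructions.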

J. Wullschleger, U. Bristol

As shown by Mayers, and by Lo and Chau, even in the quantum setting it is impossible to achieve bit commitment and oblivious transfer in an unconditionally secure way. Damgaard et al. (FOCS '05, CRYPTO '07) showed how to securely implement bit commitment and oblivious transfer in a weaker setting called the bounded-quantum-storage model, where the adversary is only allowed to store a limited number of qubits. However, their security definitions only applied to the standalone setting, and it was not clear whether their protocols could be composed.

We first give a simple attack that shows that these protocols (without any refinements to the model) are not composable. Then, we present a new, simulation-based definition for security in the bounded-quantum-storage model, and show that this definition allows for sequential composition of protocols. Finally, we prove the security of the randomized oblivious transfer protocol in our refined model. Secure implementations of oblivious transfer and bit commitment then follow easily by a (classical) reduction to randomized oblivious transfer.

This is joint work with Stephanie Wehner.

A. Lysyanskaya, Brown U.

The main idea of electronic cash (ecash) is that, even though the same party (a Bank) is responsible for giving out electronic coins, and for later accepting them for deposit, the withdrawal and the spending protocols are designed in such a way that it is impossible to identify when a particular coin was spent. That is, the withdrawal protocol does not reveal any information to the Bank that would later enable it to trace how a coin was spent. Since a coin is represented by data, and it is easy to duplicate data, an electronic cash scheme requires a mechanism that prevents a user from spending the same coin twice (double-spending), for example by identifying double-spenders and tracing all transactions that they have carried out. Therefore, ecash is an example of balancing anonymity with accountability: a user remains anonymous until she violates a particular condition (spends a coin more than once).

In this talk, I will give several examples of balancing anonymity with accountability along these lines. First, I will present a scheme that allows a user to withdraw a wallet such that the user can spend N coins anonymously, but will be identified should she spend N + 1 times. (The complexity of each operation here has only a logarithmic dependence on N.) Then I will show that this can be extended to, for example, allow a user to spend at most M coins anonymously with a particular merchant, or, as another example, spend at most L coins anonymously in one day.

Based on joint papers with Jan Camenisch, Susan Hohenberger, Markulf Kohlweiss, and Mira Meyerovich.

R. Phan, Loughborough U.

We revisit the security of block ciphers against adversarial distinguishers, triggered by recent results of Knudsen and Rijmen which support the approach of viewing block ciphers outside the conventional cryptanalytic context of standalone encryption primitives. We discuss some notions that capture security in this sense, and give some results applied to reduced-round versions of the AES.

Part of this work is jointly with Marine Minier.

W. Castryck, K.U. Leuven

Since the introduction of ECC, a number of alternatives to the Weierstrass form have been proposed to speed up elliptic curve arithmetic: Hessian forms, Jacobian forms, Jacobi quartic forms, Montgomery forms, Edwards forms, ... We will show that all these models naturally belong to the family of so-called toric forms. These split up into sixteen equivalence classes, five of which are represented by the examples given above. Investigating the nine other classes should either yield new useful forms or provide evidence for the optimality of Edwards arithmetic.
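As a concrete instance of one of these models (standard background, not a result of the talk), an Edwards curve over a field k is given by

```latex
x^2 + y^2 = 1 + d\,x^2 y^2, \qquad d \in k \setminus \{0, 1\},
```

with the remarkably uniform addition law

```latex
(x_1, y_1) + (x_2, y_2) =
\left( \frac{x_1 y_2 + y_1 x_2}{1 + d\,x_1 x_2 y_1 y_2},\;
       \frac{y_1 y_2 - x_1 x_2}{1 - d\,x_1 x_2 y_1 y_2} \right),
```

whose completeness (for non-square d) and speed set the benchmark that the other toric classes would have to beat.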

S. Galbraith, RHUL

I will discuss elliptic curve point compression in the context of pairing based cryptography. I will also present a method to speed up point multiplication in pairing friendly groups.

F.-X. Standaert, U.C. Louvain

The fair evaluation and comparison of side-channel attacks and countermeasures has been a long standing open question, limiting further developments in the field. Motivated by this challenge, this work proposes a framework for the analysis of cryptographic implementations that includes a theoretical model and an application methodology. The model is based on weak and commonly accepted hypotheses about side-channels that computations give rise to. It allows quantifying the effect of practically relevant leakage functions with a combination of security and information theoretic metrics, respectively measuring the quality of an implementation and the strength of an adversary. From a theoretical point of view, we demonstrate formal connections between these metrics and discuss their intuitive meaning. From a practical point of view, the model implies a unified methodology for the analysis of side-channel key recovery. The proposed solution allows getting rid of most of the subjective parameters that were limiting previous specialized and often ad hoc approaches in the evaluation of physically observable devices. It typically determines the extent to which basic (but practically essential) questions such as ``How to compare two implementations?" or ``How to compare two side-channel adversaries?" can be fairly answered. In order to justify the interest and relevance of these theoretical results, we will cover a few applications of the model, based upon both simulations and real measurements of cryptographic hardware devices.

Joint work with Tal Malkin and Moti Yung.

M. Ryan, U. Birmingham

We investigate the composition of protocols that share a common secret. This situation arises when users employ the same password on different services. More precisely, we study whether resistance against guessing attacks composes when the same password is used. We model guessing attacks using a definition based on static equivalence in a cryptographic process calculus close to the applied pi calculus. We show that resistance against guessing attacks composes in the presence of a passive attacker. However, composition does not preserve resistance against guessing attacks for an active attacker. We therefore propose a simple syntactic criterion under which we show this composition to hold. Finally, we present a protocol transformation that ensures this syntactic criterion and preserves resistance against guessing attacks.

Joint work with Stephanie Delaune and Steve Kremer

J. Groth, U.C.L.

Non-interactive zero-knowledge proofs make it possible to prove the truth of a statement without revealing any other information. Non-interactive zero-knowledge proofs have been used widely in the theory of cryptography, but due to efficiency problems have not yet found many practical applications. In recent joint works with Rafail Ostrovsky and Amit Sahai we have found new techniques from pairing-based cryptography that yield efficient non-interactive zero-knowledge proofs. In this talk, we will go over some of these new and efficient non-interactive zero-knowledge proofs.

A. O'Neill, Georgia Tech.

A public-key encryption scheme is said to be deterministic if its encryption algorithm does not use random coins. Deterministic encryption has applications to database and file system encryption, electronic voting, and more. However, until recently, the only known security definition achievable by a deterministic encryption scheme was one-wayness, which is a rather weak notion from a modern cryptographic standpoint. A series of recent works has addressed this issue and proposed new and stronger security definitions for deterministic encryption, shown some interesting relations between them, and given constructions in both the random oracle and standard models. In this talk I will try to give an overview of these results and highlight what I feel are interesting directions for future research.

S. Drimer, Cambridge

We can divide "FPGA design security" into two problems. The first is protecting data stored and processed inside the FPGA; the second is protecting configuration data as it is distributed from system-local storage or remote sources. I will introduce the FPGA security framework and threat model, and discuss possible solutions for some of the open issues in distributing FPGA designs, especially when cores from multiple distrusting sources are integrated into a single design. In addition, I will use a recent AES implementation to discuss how to better report FPGA design results for the purposes of comparison, validation, and re-use.

This talk is based on the following papers:

- http://www.cl.cam.ac.uk/~sd410/papers/fpga_security.pdf
- http://www.cl.cam.ac.uk/~sd410/papers/protect_many_cores.pdf
- http://www.cl.cam.ac.uk/~sd410/papers/aes_dsp.pdf

M. Fischlin, T.U. Darmstadt

Sanitizable signature schemes, as defined by Ateniese et al. (ESORICS 2005), allow a signer to partly delegate signing rights to another party, called the sanitizer. That is, the sanitizer is able to modify a predetermined part of the original message such that the integrity and authenticity of the unchanged part are still verifiable. Ateniese et al. identify five security requirements for such schemes (unforgeability, immutability, privacy, transparency and accountability) but do not provide formal specifications for these properties. They also present a scheme allegedly satisfying these requirements.

Here we revisit the security requirements for sanitizable signatures and, for the first time, present a comprehensive formal treatment. Besides a full characterization of the requirements, we also investigate the relationships among the properties, showing for example that, contrary to a claim by Ateniese et al., unforgeability follows from accountability. We then provide a full security proof for a modification of the original scheme according to our model.

This is joint work with Christina Brzuska, Tobias Freudenreich, Anja Lehmann, Marcus Page, Jakob Schelbert, Dominique Schroeder and Florian Volk (TU Darmstadt).

Y. Desmedt, UCL

The problem of a mouse traveling through a maze is well known; the maze can be represented as a planar graph. We present a variant of the maze. We consider a vertex-colored planar grid graph in which an adversary can choose up to t colors and remove all vertices that have these colors, together with their adjacent edges. We call the grid from which these vertices and adjacent edges have been removed a reduced grid. The requirement is that a mouse must be able to move in the reduced grid from the first row to the last row, and from the first column to the last column, and this for all possible reductions. We present three types of solutions for constructing such grids. The efficiency of these solutions is discussed.
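The property itself is easy to prototype: brute-force every removal of up to t colors and run a breadth-first search on each reduced grid. The 4x4 example grid below is my own illustration of a grid that survives any single-color removal, not a construction from the talk:

```python
from itertools import combinations
from collections import deque

def reachable(grid, removed, sources, targets):
    """BFS over the grid cells whose color survives the reduction."""
    n = len(grid)
    alive = lambda r, c: grid[r][c] not in removed
    queue = deque((r, c) for (r, c) in sources if alive(r, c))
    seen = set(queue)
    while queue:
        r, c = queue.popleft()
        if (r, c) in targets:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and (nr, nc) not in seen and alive(nr, nc):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def survives_all_reductions(grid, colors, t):
    """Row and column crossings must exist for every removal of up to t colors."""
    n = len(grid)
    top = {(0, c) for c in range(n)}; bottom = {(n - 1, c) for c in range(n)}
    left = {(r, 0) for r in range(n)}; right = {(r, n - 1) for r in range(n)}
    for k in range(t + 1):
        for removed in combinations(colors, k):
            rem = set(removed)
            if not (reachable(grid, rem, top, bottom) and
                    reachable(grid, rem, left, right)):
                return False
    return True

# Interior colored 0; the border mixes colors 1 and 2 so that removing
# any single color still leaves both crossings intact.
grid = [[1, 1, 2, 2],
        [1, 0, 0, 2],
        [2, 0, 0, 1],
        [2, 1, 2, 1]]
print(survives_all_reductions(grid, {0, 1, 2}, t=1))  # True
print(survives_all_reductions(grid, {0, 1, 2}, t=2))  # False
```

The brute force is exponential in t; the talk's contribution is constructing such grids systematically and analysing their efficiency.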

The problem originates in secure multiparty computation. Imagine going to a medical doctor who needs to prescribe some medication, which might be contraindicated. The typical solution is to disclose all medical records to the doctor. If secure multiparty computation were used, the medical doctor would only learn from the distributed medical databases whether the medication is, or is not, contraindicated. We consider the problem of parties each having a secret belonging to a non-abelian group. The parties want to compute the product of these secrets without leaking anything that does not follow trivially from the product. Our solution is black box, i.e., independent of the non-abelian group. This has applications to threshold block ciphers and post-quantum cryptography.

This is joint work with Josef Pieprzyk, Ron Steinfeld and Huaxiong Wang. Parts of it were presented at Crypto 2007 and appeared in the proceedings.

G. Fuchsbauer, ENS

We define a general model for consecutive delegations of signing rights with the following properties: The delegatee actually signing and all intermediate delegators remain anonymous. As for group signatures, in case of misuse, a special authority can open signatures to reveal the chain of delegations and the signer's identity. The scheme satisfies a strong notion of non-frameability generalizing the one for dynamic group signatures. We give formal definitions of security and show them to be satisfiable by constructing an instantiation proven secure under general assumptions in the standard model. Our primitive is a proper generalization of both group signatures and proxy signatures and can be regarded as non-frameable dynamic hierarchical group signatures.

C. Fournet, Microsoft

Joint work with Tamara Rezk and Gurvan le Guernic (MSR-INRIA Joint Centre http://msr-inria.inria.fr/projects/sec)

We relate two notions of security: one simple and abstract, based on information flows in programs, the other more concrete, based on cryptography.

In language-based security, confidentiality and integrity policies specify the permitted flows of information between parts of a system with different levels of trust. These policies enable a simple treatment of security, but their enforcement is delicate.

We consider cryptographic enforcement mechanisms for distributed programs with untrusted components. Such programs may represent, for instance, distributed systems connected by some untrusted network. We develop a compiler from a small imperative language with locality and security annotations down to cryptographic implementations in F#. In source programs, security depends on a policy for reading and writing the shared variables. In their implementations, shared memory is unprotected, and security depends instead on encryption and signing.

We rely on standard primitives and hypotheses for cryptography, stated in terms of probabilistic polynomial-time algorithms and games. Relying on a new type system, we show that our compiler preserves all information-flow properties: an adversary that interacts with the trusted components of our code and entirely controls its untrusted components gains illegal information only with negligible probability.

S. Balfe, RHUL

In both the commercial and defence sectors a compelling need is emerging for the rapid, yet secure, dissemination of information across traditional organisational boundaries. In this talk we present a novel trust management paradigm for securing pan-organisational information flows that aims to address the threat of information disclosure. Our trust management system is built around an economic model and a trust-based encryption primitive wherein: (i) entities purchase a key from a Trust Authority (TA) which is bound to a voluntarily reported trust score r, (ii) information flows are encrypted such that a flow tagged with a recipient trust score R can be decrypted by the recipient only if it possesses the key corresponding to a voluntarily reported score r \le R, (iii) the economic model (the price of keys) is set such that a dishonest entity wishing to maximise information leakage is incentivised to report an honest trust score r to the TA.

This talk covers two important contributions. First, we quantify fundamental tradeoffs on information flow rate, information leakage rate and error in estimating recipient trust score R. Second, we present a suite of encryption schemes that realise our trust-based encryption primitive and identify computation and communication tradeoffs between them.

E. Gallery, RHUL

This talk will explore the design and operation of a trusted mobile platform, as defined within the Trusted Computing Group’s Mobile Phone Working Group.

T. Rezk, INRIA Sophia-Antipolis

The area of provable (or computational) cryptography, which has developed since the early 1990s, holds that the only justifiable assumptions about an adversary concern his or her computational abilities. Reasoning about arbitrary adversaries involves handling polynomial and probabilistic hypotheses on complex programs. Can we let the programmer specify security policies at an abstract level and let the compiler implement them using provable cryptography? This presentation will describe work in progress with Cedric Fournet and Gurvan Le Guernic on programming language abstractions for provable cryptography, and a compiler that implements them.

D. Catalano, Universita di Catania

We propose a methodology to construct verifiable random functions from a class of identity-based key encapsulation mechanisms (IB-KEM) that we call VRF suitable. Informally, an IB-KEM is VRF suitable if it provides what we call unique decryption (i.e. given a ciphertext C produced with respect to an identity ID, all the secret keys corresponding to an identity ID' decrypt to the same value, even if ID \neq ID') and it satisfies an additional property that we call pseudorandom decapsulation. In a nutshell, pseudorandom decapsulation means that if one decrypts a ciphertext C, produced with respect to an identity ID, using the decryption key corresponding to any other identity ID', the resulting value looks random to a polynomially bounded observer. Interestingly, we show that most known IB-KEMs already achieve pseudorandom decapsulation. Our construction is of interest from both a theoretical and a practical perspective. Indeed, apart from establishing a connection between two seemingly unrelated primitives, our methodology is direct in the sense that, in contrast to most previous constructions, it avoids the inefficient Goldreich-Levin hardcore bit transformation.

C. Mitchell, RHUL

The value of authentication protocols which minimise (or even eliminate) the need for stored state in addressing DoS attacks is well-established - note in particular the 1997 paper of Aura and Nikander. However, although there is now a substantial literature on this topic, it would seem much remains to be explored. In this talk we explore the design of a novel stateless protocol, which has certain implementation advantages. Specifically, neither party needs to maintain significant stored state.

B. Brumley, TKK

We show how to steal private keys in OpenSSL's implementation of elliptic curve cryptography. We accomplish this using a cache attack that exploits memory access patterns. This is automated by a new framework for cache-timing data analysis making use of vector quantization and hidden Markov model cryptanalysis. In general, it facilitates recovery of critical algorithm state such as key material.

D. Naccache, ENS

In addition to its usual complexity assumptions, cryptography silently assumes that information can be physically protected in a single location. As we now know, real-life devices are not ideal and confidential information leaks through different physical channels. Whilst most aspects of side channel leakage (cryptophthora) are now well understood, no attacks on totally unknown algorithms are known to date. This talk describes such an attack. By totally unknown we mean that no information on the algorithm's mathematical description (including the plaintext size), the microprocessor or the chip's power consumption model is available to the attacker.

D. Fiore, U. Catania

Onion routing protocols allow users to establish anonymous channels to preserve their privacy over a public network. Several protocols implementing this primitive have been proposed in recent years, and Tor, a real-life implementation, provides an onion routing service to thousands of users over the Internet.

This paper presents

N. Hanley, U. Cork

In power analysis, template attacks can be used to recover secret key information from a device with very few power traces. They work on the premise that the attacker has a device identical to the one under attack and, using this, builds up a set of "templates" to profile the power consumption of the device. These templates can then be used to recover secret information from the attacked device. Generally, templates are combined with known plaintext or ciphertext information to recover the secret key. In this talk it is demonstrated how the secret key can be recovered using just the power consumption traces, and practical results on an ARM7 microprocessor are shown.
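A minimal sketch of the template idea follows, under a toy leakage model of my own in which each 4-bit key induces a distinct mean leakage (real attacks build multivariate templates over many points of measured traces). As in the talk's setting, no plaintext is used, only the traces:

```python
import math
import random

rng = random.Random(42)
SIGMA = 1.0

def leak(key, rng):
    """Toy leakage: each 4-bit key induces a distinct mean power value.
    (An illustrative assumption; real devices leak e.g. Hamming weights.)"""
    return 2 * key + rng.gauss(0.0, SIGMA)

def build_templates(rng, n_profiling=3000):
    """Profile an identical device: estimate (mean, variance) per key value."""
    templates = {}
    for k in range(16):
        s = [leak(k, rng) for _ in range(n_profiling)]
        mu = sum(s) / len(s)
        var = sum((x - mu) ** 2 for x in s) / (len(s) - 1)
        templates[k] = (mu, var)
    return templates

def recover_key(traces, templates):
    """Maximum-likelihood classification of the attack traces."""
    def loglik(k):
        mu, var = templates[k]
        return sum(-(t - mu) ** 2 / (2 * var) - 0.5 * math.log(var)
                   for t in traces)
    return max(range(16), key=loglik)

templates = build_templates(rng)          # profiling phase on our own device
secret = 11
attack_traces = [leak(secret, rng) for _ in range(100)]  # traces only, no plaintext
print(recover_key(attack_traces, templates))
```

The profiling phase is done once on the attacker's own device; the attack phase then needs only a handful of traces from the target.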

G. Steel, INRIA LSV-Cachan

International standards dictate that all processing of customer Personal Identification Numbers (PINs) in the international cash machine network must take place inside special tamper resistant Hardware Security Modules (HSMs). These HSMs have a restricted API designed so that even if an attacker is able to make arbitrary command calls to the device, no customer PINs can be obtained. However, in recent years, a number of attacks have been found on these APIs. Some of them are so-called differential attacks, whereby an attacker makes repeated calls to the API with slightly different parameters, and from the pattern of error messages received, he is able to deduce the value of a PIN. In this talk, I will present some of these attacks, and talk about efforts to analyse them formally. This will include techniques for proving the absence of such attacks in patched APIs, and a practical proposal for improving the security of the network without making large-scale changes to the current infrastructure.

T. Burrell, Microsoft.

The Security Science team within Microsoft's Trustworthy Computing (TwC) organization aims to find ways to do security smarter, and then enable others to leverage that work. This talk is a view into the exploit mitigation work the Security Science team does, i.e., how we minimize the security impact of any residual vulnerabilities in both Microsoft and non-Microsoft software. I'll focus on what we've done, why we've done it and how we systematically think about mitigation coverage. I'll describe in more detail an example of such a mitigation in the recently released Windows 7, and how we've updated stack buffer overflow protection in the upcoming Visual Studio 2010 release.

B. Thomas, SiVenture.

A criticism often levelled at security certification schemes is that economic incentives are misaligned and create a “race to the bottom” where the market favours evaluators who are less thorough. This seminar will examine this claim from the perspective of applying the Common Criteria scheme to smartcards and other tamper-resistant hardware, and will give:

- A brief introduction to the Common Criteria, and its application to smartcards
- An outline of the commonly heard criticisms of security certification
- Arguments of why some of these criticisms are less applicable to hardware certifications
- Discussion of factors that encourage stringent evaluations (i.e. what prevents the race to the bottom)
- Discussion of the factors that limit the assurance that can be achieved in practice (i.e. why racing to the top is sometimes infeasible)
- Debate of how / where the current arrangements might be improved.

S. Tillich, U. Bristol.

Resistance against side-channel analysis (SCA) attacks is an important requirement for many secure embedded systems. Microprocessors and microcontrollers which include suitable countermeasures can be a vital building block for systems like secure PDAs or smart phones. In recent years, two promising concepts have been proposed for adding SCA protection to general-purpose processors: randomized execution and hardware-protected execution. The first concept has been investigated in connection with the NONDET processor. The second concept was first proposed in the context of securing cryptographic instruction set extensions and has been extended in the POWER-TRUST project. We present the principles of both concepts, giving specific details for hardware-protected execution and highlighting features like multi-tasking support and process separation. Furthermore, opportunities for integrating both concepts, as well as combinations with other SCA countermeasures, are presented.

A. Kiayias, U. Athens.

Nowadays it is widely accepted to formulate the security of a protocol carrying out a given task via the "trusted-party paradigm," where the protocol execution is compared with an ideal process in which the outputs are computed by a trusted party that sees all the inputs. A protocol is said to securely carry out a given task if running the protocol with a realistic adversary amounts to "emulating" the ideal process with the appropriate trusted party. While this simulation-based security formulation provides strong security guarantees, its usefulness is contingent on the properties and correct specification of the trusted party's program (also known as an ideal functionality); as demonstrated in recent years by the coexistence of multiple complex functionalities for the same task, as well as by their "unstable" nature, this specification does not seem to be an easy task.

In this talk we address this problem and describe a recent general methodology for the sound specification of ideal functionalities for cryptographic tasks. First, we introduce the class of canonical ideal functionalities for a cryptographic task, which unifies the syntactic specification of a large class of cryptographic tasks under the same basic template functionality. By endowing the class of canonical functionalities with an algebraic structure, we are able to perform operations among functionalities that amount to combinations of security and consistency properties. The constituent functionalities of a task can be derived either directly or, following a translation strategy that will be introduced, from existing "game-based" definitions; such definitions have in many cases captured desired individual properties of cryptographic tasks, albeit in less adversarial settings. We will showcase the methodology by commenting on how it applies to a variety of basic cryptographic tasks, including commitments, digital signatures, zero-knowledge proofs, and oblivious transfer.

M. Ruckert, TU Darmstadt.

Encryption and signature schemes based on worst-case lattice problems are promising candidates for the post-quantum era, where classic number-theoretic assumptions are rendered false. Although there have been many important results and breakthroughs in lattice cryptography, the question of how to systematically choose secure parameters in practice is still open. This is mainly due to the fact that most security proofs are essentially asymptotic statements. In addition, the hardness of the underlying complexity assumption is controlled by several interdependent parameters rather than just a simple bit length as in classic schemes. With our work, we close this gap by providing a handy framework for estimating secure parameter sets by relating the hardness of practical lattice basis reduction to symmetric bit security for the first time. Our approach takes various security levels, or attacker types, into account. Moreover, we use it to predict long-term security in a similar fashion as the results that are collected on www.keylength.com. We apply our framework to essentially all published encryption and signature schemes that are based on the learning with errors problem (LWE) or the small integer solution problem (SIS), respectively. Moreover, our results are easily applicable to all modern lattice-based cryptography, such as identity-based encryption or collision-resistant hash functions.

P. Farshim, RHUL.

Bellare and Kohno introduced a formal framework for the study of related-key attacks against block ciphers. They established sufficient conditions (output-unpredictability and collision-resistance) on the set of related-key derivation (RKD) functions under which the ideal cipher is secure against related-key attacks, and suggested this could be used to derive security goals for real block ciphers. However, to do so requires the reinterpretation of results proven in the ideal cipher model for the standard model (in which a block cipher is modelled as, say, a pseudorandom permutation family). As we show here, this is a fraught activity. In particular, building on a recent idea of Bernstein, we first demonstrate a related-key attack that applies generically to a large class of block ciphers. The attack exploits the existence of a short description of the block cipher, and so does not apply in the ideal cipher model. However, the specific RKD functions used in the attack are provably output-unpredictable and collision-resistant. In this sense, the attack can be seen as providing a separation between the ideal cipher model and the standard model. Second, we investigate how the related-key attack model of Bellare and Kohno can be extended to include sets of RKD functions that themselves access the ideal cipher. Precisely such related-key functions underlie the generic attack, so our extended modelling allows us to capture a larger universe of related-key attacks in the ideal cipher model. We establish a new set of conditions on related-key functions that is sufficient to prove a theorem analogous to the main result of Bellare and Kohno, but for our extended model. We then exhibit non-trivial classes of practically-relevant RKD functions meeting the new conditions. 
We go on to discuss standard model interpretations of this theorem, explaining why, although separations between the ideal cipher model and the standard model still exist in this setting, they can be seen as being much less natural than our previous separation. In this manner, we argue that our extension of the Bellare-Kohno model represents a useful advance in the modelling of related-key attacks. Third, we consider the topic of key-recovering related-key attacks and their relationship to the Bellare-Kohno formalism. In particular, we address the question of whether the security goal should be lowered by requiring the adversary to perform key recovery.

J.-P. Degabriele, RHUL

IPsec allows a huge amount of flexibility in the ways in which its component cryptographic mechanisms can be combined to build a secure communications service. This may be good for supporting different security requirements but is potentially bad for security. We demonstrate the reality of this by describing efficient, plaintext-recovering attacks against all configurations of IPsec in which integrity protection is applied prior to encryption - so-called MAC-then-encrypt configurations. We report on the implementation of our attacks against a specific IPsec implementation, and reflect on the implications of our attacks for real-world IPsec deployments as well as for theoretical cryptography.

E. Quaglia, RHUL.

In this talk we will introduce and explore the new concept of Time-Specific Encryption (TSE). In (Plain) TSE, a Time Server broadcasts a key at the beginning of each time unit, a Time Instant Key (TIK). The sender of a message can specify any time interval during the encryption process; the receiver can decrypt to recover the message only if it has a TIK that corresponds to a time in that interval. We extend Plain TSE to the public-key and identity-based settings, where receivers are additionally equipped with private keys and either public keys or identities, and where decryption now requires the use of the private key as well as an appropriate TIK. We will briefly introduce security models for the plain, public-key and identity-based settings. We also provide constructions for schemes in the different settings, showing how to obtain Plain TSE using identity-based techniques, how to combine Plain TSE with public-key and identity-based encryption schemes, and how to build schemes that are chosen-ciphertext secure from schemes that are chosen-plaintext secure. We will also suggest applications for our new primitive, and discuss its relationships with existing primitives, such as Timed-Release Encryption and Broadcast Encryption.
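For intuition on how a single TIK can unlock any interval containing its time, here is a toy symmetric-key analogue built from two hash chains. This is my own illustration, not a construction from the talk: in it the sender obtains the interval key from the server, whereas the actual constructions use identity-based techniques so that senders need only public parameters:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def iterate(h, seed, n):
    for _ in range(n):
        seed = h(seed)
    return seed

T = 32  # number of time units

class TimeServer:
    """Holds two secret hash-chain seeds and broadcasts TIK_t at time t."""
    def __init__(self, seed_f, seed_b):
        self.seed_f, self.seed_b = seed_f, seed_b
    def tik(self, t):
        # forward chain advances with t, backward chain with T - t
        return (iterate(H, self.seed_f, t), iterate(H, self.seed_b, T - t))
    def interval_key(self, t0, t1):
        # toy simplification: the sender asks the server for this key
        return H(iterate(H, self.seed_f, t1) + iterate(H, self.seed_b, T - t0))

def derive_from_tik(tik_t, t, t0, t1):
    """A holder of TIK_t recovers the key for [t0, t1] iff t0 <= t <= t1.
    Going outside the interval would require inverting the hash."""
    if not (t0 <= t <= t1):
        raise ValueError("TIK outside interval")
    f_t, b_t = tik_t
    f_t1 = iterate(H, f_t, t1 - t)   # walk the forward chain up to t1
    b_t0 = iterate(H, b_t, t - t0)   # walk the backward chain down to t0
    return H(f_t1 + b_t0)

server = TimeServer(b"seed-f", b"seed-b")
k = server.interval_key(5, 9)
print(derive_from_tik(server.tik(7), 7, 5, 9) == k)  # True
```

A TIK for a time outside [t0, t1] cannot derive the key without inverting the one-way hash, which is what enforces the interval.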

J. Bos, EPFL.

The Cell Broadband Engine (Cell) architecture is the heart of the PlayStation 3 (PS3) game console. In this presentation the details of the Cell architecture are outlined, together with implementation techniques for speeding up computations on this platform. Two projects performed on the 215 PS3s of the Laboratory for Cryptologic Algorithms are discussed: how our Pollard rho implementation solved a 112-bit prime-field elliptic curve discrete logarithm problem, and how our elliptic curve factorization method (ECM) implementation aimed at Mersenne numbers (numbers of the form 2^x - 1) found the current 73-digit ECM record factor.
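For intuition, a serial textbook version of Pollard's rho for discrete logarithms is easy to state (the talk's implementation is a massively parallel elliptic-curve variant; the toy multiplicative group below is my own choice):

```python
import random

def pollard_rho_dlog(g, h, p, n, seed=1):
    """Solve g^x = h (mod p), where g has prime order n, using Pollard's rho
    with Floyd cycle-finding and random restarts. Each walk element is
    tracked as x_i = g^a * h^b; a collision yields the discrete log."""
    rng = random.Random(seed)
    def step(x, a, b):
        s = x % 3  # crude 3-way partition of the group
        if s == 0:
            return (x * x) % p, (2 * a) % n, (2 * b) % n
        if s == 1:
            return (x * g) % p, (a + 1) % n, b
        return (x * h) % p, a, (b + 1) % n
    while True:
        a0, b0 = rng.randrange(n), rng.randrange(n)
        x0 = (pow(g, a0, p) * pow(h, b0, p)) % p
        x1, a1, b1 = step(x0, a0, b0)
        x2, a2, b2 = step(*step(x0, a0, b0))
        while x1 != x2:
            x1, a1, b1 = step(x1, a1, b1)
            x2, a2, b2 = step(*step(x2, a2, b2))
        r = (b1 - b2) % n
        if r != 0:
            # a1 + x*b1 = a2 + x*b2 (mod n)  =>  x = (a2 - a1) / (b1 - b2)
            return ((a2 - a1) * pow(r, -1, n)) % n
        # useless collision: restart with fresh random exponents

# g = 9 generates the subgroup of prime order 509 in Z_1019^*
p, n, g = 1019, 509, 9
h = pow(g, 123, p)
x = pollard_rho_dlog(g, h, p, n)
print(x)  # 123
```

The expected cost is O(sqrt(n)) group operations; the parallel version run on the PS3s distributes many such walks and collects "distinguished points" centrally.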

E. Kasper, Cambridge

Forward-secure key exchange ensures that the compromise of a long-term secret key used to derive session keys does not affect the security of past sessions established using that key. In TLS (SSL), forward security can be obtained via ephemeral Diffie-Hellman cipher suites.
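The mechanism can be sketched in a few lines. The group parameters below are a toy of my own, far too small for real use; TLS would use a standardized group (such as the NIST P-224 curve discussed next) and authenticate the ephemeral values with the long-term signing key:

```python
import hashlib
import random

# Toy safe-prime group; illustrative only, never use parameters this small.
P, G = 2879, 2

def ephemeral_keypair(rng):
    """Fresh secret per session; this is what provides forward security."""
    x = rng.randrange(2, P - 1)
    return x, pow(G, x, P)

def session_key(my_secret, their_public):
    shared = pow(their_public, my_secret, P)
    return hashlib.sha256(str(shared).encode()).hexdigest()

rng = random.Random(7)
# Long-term keys would only *sign* the ephemeral public values (omitted).
a, A = ephemeral_keypair(rng)
b, B = ephemeral_keypair(rng)
k_client = session_key(a, B)
k_server = session_key(b, A)
print(k_client == k_server)  # True: both sides derive the same session key
# Forward security: a and b are erased after the handshake, so a later
# compromise of the long-term signing keys reveals nothing about past sessions.
del a, b
```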

I'll present an implementation of the NIST P-224 elliptic curve for 64-bit processors. NIST P-224 is a standardized curve that can be used in Diffie-Hellman key exchange, as well as ECDSA digital signatures; the new implementation is over 4 times faster than standard OpenSSL, and in addition, resistant to timing attacks. I'll also discuss various latency reduction mechanisms in TLS, and their effect on forward security.

This work was done during an internship at Google Switzerland, and contributed to the OpenSSL project.

G. French, Barclays

This talk will discuss issues related to payment system security, covering topics such as EMV and CAP. The talk will also look at possible future open research directions in this area.

C. Boyd, QUT

Security models for key exchange have been around for many years, but have only recently started to include consideration of denial-of-service attacks. This talk will consider security models for client puzzles and, in particular, introduce a new model to be presented at CT-RSA 2011. The new model incorporates the possibility that an adversary may attack multiple puzzles simultaneously. In addition, we will consider the notion of gradual authentication as applied to key exchange, and introduce a new mechanism combining client puzzles and digital signatures with fast verification. This is joint work with Juan Gonzalez, Lakshmi Kuppusamy, Jothi Rangasamy and Douglas Stebila.
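A common concrete instantiation of a client puzzle is hash-based: the client grinds a counter until a hash has a prescribed number of leading zero bits, while the server verifies with a single hash. A minimal sketch (an illustrative instantiation, not necessarily the construction studied in the talk):

```python
import hashlib
import os

def solve(nonce: bytes, difficulty: int) -> int:
    """Client-side work: expected ~2^difficulty hash evaluations."""
    counter = 0
    while True:
        digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
            return counter
        counter += 1

def verify(nonce: bytes, difficulty: int, counter: int) -> bool:
    """Server-side check: a single hash, which is the whole point."""
    digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

nonce = os.urandom(16)          # fresh per connection attempt
sol = solve(nonce, difficulty=12)
print(verify(nonce, 12, sol))   # True
```

The asymmetry (expensive to solve, cheap to verify) is what lets a server under attack ration its expensive key-exchange operations.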

M. Tibouchi, ENS-Paris

At Eurocrypt 2010, van Dijk et al. described a fully homomorphic encryption scheme over the integers. The main appeal of the scheme (compared to Gentry's) was its conceptual simplicity; however, with a public key size in O(\lambda^{10}), where \lambda is the security parameter, the scheme looked completely impractical. We show how to reduce the public key size down to O(\lambda^5), and we describe a practical implementation of the scheme.

Interestingly we obtain significantly better efficiency than the recent Gentry-Halevi implementation of Gentry's scheme, despite using very modest hardware. At the "high" security level, equivalent to the "large" setting of Gentry-Halevi, we obtain an encryption in about 3 minutes and a bootstrapping operation in about 5 minutes, while key generation and decryption take negligible time. Public-key size is also moderate: 250 MB at the "high" level (almost 10 times smaller than Gentry-Halevi).
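The scheme being implemented is easy to sketch in miniature: a DGHV ciphertext is c = q*p + 2r + m for a secret odd p, decryption is a centered reduction mod p followed by mod 2, and adding or multiplying ciphertexts adds or multiplies the hidden bits while the noise 2r grows. The parameters below are insecure toys for illustration only:

```python
import random

rng = random.Random(0)

# secret key: a random odd integer (real parameters are enormous)
p = rng.randrange(10**5, 2 * 10**5) | 1

def encrypt(m, rng):
    """c = q*p + 2*r + m, with large random q and small noise r."""
    q = rng.randrange(10**8, 10**9)
    r = rng.randrange(-15, 16)
    return q * p + 2 * r + m

def decrypt(c):
    """Centered reduction mod p, then mod 2, recovers the bit
    as long as the noise stays below p/2."""
    v = c % p
    if v > p // 2:
        v -= p
    return v % 2

m1, m2 = 1, 0
c1, c2 = encrypt(m1, rng), encrypt(m2, rng)
# addition XORs the bits, multiplication ANDs them (noise permitting):
print(decrypt(c1 + c2), decrypt(c1 * c2))  # 1 0
```

Each multiplication roughly squares the noise, which is why bootstrapping (the expensive operation benchmarked in the talk) is needed to keep computing indefinitely.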

B. Wyseur, Nagravision

Block ciphers are traditionally used to secure communication against eavesdropping. Their security is based on the confidentiality of a secret key, and they are designed to withstand black-box attacks. For such a model to apply, the cipher implementation must be deployed on a trusted execution platform. In practice, however, this model often does not apply, for example in a DRM scenario, where the system owner has an incentive to extract secret keys from the software implementation of the DRM client. White-box cryptography aims to address this issue: it is the discipline of implementing cryptographic algorithms such that they are protected from 'white-box' attacks.

This talk will cover practical aspects in white-box cryptography. The first part will explore the initial constructions and focus in particular on implementations of the AES. While this approach is very useful for many practical use cases, subsequent cryptanalysis results will show that the initial strategy in white-box cryptography is fundamentally flawed; it is prone to differential and algebraic cryptanalysis. The second part of this talk will focus on newer approaches: constructions that aim to withstand public scrutiny and aim to solve some practical deployment issues.

K. Jarvinen, Helsinki

Implementation attacks form a serious threat to cryptosystems in practice. At CRYPTO 2008, Goldwasser et al. introduced one-time programs as a solution providing resilience against all implementation attacks. One-time programs utilize Yao's garbled circuits and simple tamper-proof hardware tokens called one-time memories. Our paper published at CHES 2010 was the first effort to evaluate one-time programs in practice. It concluded that executing one-time programs is expensive in terms of memory and computation time, but that they could be feasible alternatives in applications requiring very high resistance against implementation attacks. That work focused on the computational part of executing one-time programs. In this talk we explore the steps that are still needed before one-time programs can be (efficiently) used in practice. Specifically, we focus on improving computation speed and discuss the implementation of one-time memory.
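The two ingredients can be illustrated in miniature: the toy below garbles a single AND gate and models a one-time memory in software. This is a deliberate simplification of my own (real garbled tables encrypt output wire labels and are permuted, and the entire point of OTMs is that they are tamper-proof hardware, not software objects):

```python
import hashlib
import os

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class OneTimeMemory:
    """Token holding two keys; reveals exactly one, then self-destructs.
    (Software model only; real OTMs are simple tamper-proof hardware.)"""
    def __init__(self, k0, k1):
        self._keys = [k0, k1]
    def read(self, bit):
        if self._keys is None:
            raise RuntimeError("one-time memory already consumed")
        k = self._keys[bit]
        self._keys = None
        return k

def garble_and_gate():
    """Random label per wire value; each table row encrypts the output bit
    under the pair of input labels that selects it."""
    wa = [os.urandom(16), os.urandom(16)]   # labels for input wire a
    wb = [os.urandom(16), os.urandom(16)]   # labels for input wire b
    table = {}
    for a in (0, 1):
        for b in (0, 1):
            pad = H(wa[a], wb[b])
            table[H(b"idx", wa[a], wb[b])] = xor(pad, bytes([a & b]) * 32)
    return wa, wb, table

def evaluate(ka, kb, table):
    """Knowing one label per wire reveals one row and nothing else."""
    return xor(H(ka, kb), table[H(b"idx", ka, kb)])[0]

wa, wb, table = garble_and_gate()
# One OTM per input wire: each input can be chosen exactly once.
otm_a, otm_b = OneTimeMemory(*wa), OneTimeMemory(*wb)
print(evaluate(otm_a.read(1), otm_b.read(1), table))  # 1
```

Scaling this to full circuits, millions of gates and OTMs, is exactly where the memory and computation costs discussed in the talk arise.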

V. Iovino, U. Salerno

Predicate encryption is an important cryptographic primitive that has found wide application, as it allows for fine-grained key management. In a predicate encryption scheme for a class P of predicates, the owner of the master secret key can derive a secret key Sk_P for any predicate P in the class. Similarly, when encrypting plaintext M, the sender can specify an attribute vector x for the ciphertext Ct. Then, key Sk_P can decrypt all ciphertexts Ct with attribute vector x such that P(x) = 1. In this paper, we give fully secure implementations for Conjunctions (also called Hidden Vector Encryption in the literature), Disjunctions and k-CNF/DNF predicates that guarantee the security of both the plaintext and the attribute. Our constructions for Disjunctions and Conjunctions are linear in the number of variables. Previous fully secure constructions for Disjunctions required time exponential in the number of variables, while for Conjunctions the best previous construction was quadratic in the number of variables. We also give reductions of k-CNF and k-DNF formulae to Conjunctions for any constant k. In proving the full security of our constructions, we elaborate on the recent dual system encryption paradigm introduced in [Waters – Crypto 2009], resulting in a simplified proof strategy that could be of independent interest. We also give a hierarchical version of our scheme that includes Anonymous HIBE as a special case.
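Stripped of all cryptography, the functionality realized for Conjunctions (Hidden Vector Encryption) is simple to state: a key for a pattern with wildcard positions decrypts exactly those ciphertexts whose attribute vector matches. A plain sketch of the predicate class, to fix intuition only (the scheme's contribution is realizing this while hiding both plaintext and attributes):

```python
WILDCARD = None

def conj_predicate(pattern):
    """HVE-style conjunction: WILDCARD positions are 'don't care',
    all other positions must match the attribute vector exactly."""
    def P(x):
        return all(p is WILDCARD or p == xi for p, xi in zip(pattern, x))
    return P

# A key for pattern (1, *, 0) would decrypt exactly the matching ciphertexts:
P = conj_predicate((1, WILDCARD, 0))
print(P((1, 1, 0)), P((1, 0, 0)), P((0, 1, 0)))  # True True False
```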

D. Ashenden, U. Cranfield

It has been a busy eighteen months in the UK for the seemingly ubiquitous term *cyber*.
It started with the publication of the UK's Cyber Security Strategy in 2009, closely followed
by the establishment of the Office of Cyber Security and the Cyber Security Operations Centre.
Momentum gathered and in the media we have seen reporting of the Stuxnet worm as an instance
of *cyber war*.
Also in recent months we have had speeches from the Director General of the Security Service
and the Director of GCHQ discussing *cyber espionage* and *cyber security* respectively.
Finally we now have the Strategic Defence and Security Review categorising cyber security
as a Tier One threat and outlining plans to invest 650 million pounds in a *National Cyber
Security Programme* over the next four years. It is a busy time for *cyber*.

This seminar will explore meanings of *cyber* in the current context before examining types of
cyber risk.
Strategies for cyber security will be discussed and examples of incidents used as a way of
illustrating some of the difficulties to be overcome.
Finally the implications for academic research will be explored.

B. Collins, Chief Scientific Advisor at BIS and DfT

At BIS Brian provides scientific advice to the Secretary of State, Ministers and the Management Board, ensuring the use of sound scientific, engineering and technological evidence across the Department. He is also CSA at the DfT, and Professor of Information Systems at Cranfield University. Previously he has held senior positions at Clifford Chance, the Wellcome Trust, and was Chief Scientist and Technical Director at GCHQ and Deputy Director at the Royal Signals and Radar Establishment (RSRE).

M. Kohlweiss, Microsoft

We present the first CCA2-secure public key encryption scheme that is structure preserving, i.e., our encryption scheme uses only algebraic operations. In particular, it does not use hash functions or interpret group elements as bit strings. This makes our scheme a perfect building block for cryptographic protocols where parties want, for instance, to prove properties about ciphertexts to each other or to jointly compute ciphertexts. We also provide a few example protocols for our scheme, for instance a joint computation of a ciphertext of a plaintext shared by two parties, where in the end only one of the parties learns the ciphertext. This latter protocol serves as a building block for our second contribution, which is a set of protocols that implement the concept of oblivious trusted third parties. This concept has been proposed before, but no concrete realization or formal definition of the problem was known.

P. Grabher, U. Bristol

Instruction Set Extensions (ISEs) supplement a host processor with special-purpose, typically fixed-function hardware and instructions to use it. For cryptographic use-cases, this is often very effective because of the need for non-standard or niche operations. However, one disadvantage is that of inflexibility, contradicting a need for algorithm agility. Using a LEON3-based prototype we explore a different approach, namely provision of re-configurable mechanisms to support dynamic (i.e., changeable at run-time) ISEs. Our results show that such an approach can provide a flexible general-purpose platform capable of supporting previous work with similar advantages, but relies on careful analysis of associated security issues.

A. Cullen, BAESystems

This is a presentation of work carried out at BAE Systems Advanced Technology Centre (ATC) to assess Attribute Based Encryption (ABE) as a technology that supports fine-grained access control, motivated by its potential to mitigate problems such as the WikiLeaks release of a substantial diplomatic archive in 2010.

The presentation will commence with a brief overview of ABE but is intended for an audience that is already familiar with the technology. Details of the mathematics are available in the literature and this presentation will include key references.

ATC implemented an ABE demonstration, including both the Key Policy and Ciphertext Policy variants, using the MIRACL library to support the mathematical operations. The presentation will describe the demonstration and implementation experience, and summarise the main performance measurements such as encryption and decryption times. The presentation will then review the ABE capability, indicating both the best features and areas where further developments are desirable.

Encrypted file storage with fine-grained access policies is largely new in corporate networks so the presentation will discuss some of the integration issues that need to be solved before adoption is likely to become widespread in this context. Finally the presentation will conclude by indicating the key research challenges that have emerged from the work at ATC.

R. Ghanea-Hercock, BT

This talk is about shifting mindsets. The missing piece in our mental model of network security is that it is fundamentally a Complex Adaptive System (CAS). This perspective changes everything from how we react to threats, to how we should plan future defence strategies. The prevailing view considers the interaction between people, process and technology, but still fixates on the fallacious notion that the sum can be captured in a static snapshot, which can then be solved. The truth is this is an eternal arms race with no end, ever. Hence, we need to design defences that adapt, move and react in real time. The ideal is a fully autonomous, self-organising defence system that learns and adapts to real-time threats.

O. Pereira, U.C. Louvain

The Helios voting system is an open-audit web-based voting system that has been used by various institutions in real-stake elections during the last few years. While targeting the simplicity of the election workflow, the homomorphic tallying process used in Helios limits its suitability for many elections (large number of candidates, specific ballot filling rules, ...).

We present a variant of Helios that allows an efficient mixnet-based tallying procedure, and document the various choices we made in terms of election workflow and algorithm selection. In particular, we propose a modified version of the TDH2 scheme of Shoup and Gennaro that we found particularly suitable for the encryption of the ballots.

Our Helios variant has been tested during two multi-thousand voter elections. Lessons from the first of these elections motivated some changes to our procedure, which were successfully tested during the second election.

This is joint work with Philippe Bulens and Damien Giry.

E. Freire, RHUL

Non-interactive key exchange (NIKE) is a fundamental but much-overlooked cryptographic primitive. It appears as a major contribution in the ground-breaking paper of Diffie and Hellman, but, with the exception of appearing briefly in a paper by Cash, Kiltz and Shoup (Eurocrypt 2008), NIKE has remained almost completely unstudied since then. In this paper, we provide different security models for this primitive and explore the relationships between them. We then give a construction for secure NIKE in the standard model that is based on the hardness of a variant of the decisional bilinear Diffie-Hellman problem for asymmetric pairings. We also study the relationship between NIKE and public key encryption (PKE), showing that a secure NIKE scheme can be generically converted into an IND-CCA secure PKE scheme. Using our concrete NIKE scheme, this leads to a new PKE scheme that is competitive with, for example, the BMW encryption scheme. Our conversion also illustrates the fundamental nature of NIKE in public key cryptography.
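The Diffie-Hellman construction the abstract refers to is itself the canonical NIKE: each party publishes a static public key once, and any pair of parties can then derive a shared key with no messages exchanged. A minimal sketch, using toy parameters (a 127-bit Mersenne prime and an arbitrary base, not a securely chosen group):

```python
# Minimal static Diffie-Hellman NIKE sketch: publish g^x once, derive a
# shared key from your own secret and anyone else's public key with no
# interaction. Toy parameters only -- far too small for real security.
import hashlib

p = 2**127 - 1  # a Mersenne prime (toy size)
g = 5           # arbitrary base for illustration

def keypair(secret):
    return secret, pow(g, secret, p)       # (sk = x, pk = g^x mod p)

def shared_key(my_sk, their_pk):
    z = pow(their_pk, my_sk, p)            # both sides compute g^(xy) mod p
    return hashlib.sha256(z.to_bytes(16, "big")).hexdigest()

ska, pka = keypair(0x1234567890ABCDEF)
skb, pkb = keypair(0xFEDCBA0987654321)
assert shared_key(ska, pkb) == shared_key(skb, pka)  # same key, no messages
```

The security models discussed in the talk formalise what an adversary who sees many such public keys (and may register malformed ones) can learn about the derived keys.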

A. Choudhary, U. Bristol

The round complexity of interactive protocols is one of their most important complexity measures. In this work we prove that existing lower bounds for the round complexity of VSS can be circumvented by introducing a negligible probability of error in the reconstruction phase. Previous results show matching lower and upper bounds of three rounds for VSS, with n = 3t + 1, where the reconstruction of the secrets always succeeds, i.e. with probability 1. In contrast we show that with a negligible probability of error in the reconstruction phase:

- There exists an efficient 2-round VSS protocol for n = 3t + 1. If we assume that the adversary is non-rushing then we can achieve a 1-round reconstruction phase.
- There exists an efficient 1-round VSS for t = 1 and n > 3.
- We prove that our results are optimal both in resilience and number of sharing rounds by showing that:
  (a) there does not exist a 2-round WSS (and hence VSS) for n ≤ 3t; and
  (b) there does not exist a 1-round VSS protocol for t ≥ 2 and n ≥ 4.
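VSS protocols such as those above augment plain (non-verifiable) secret sharing with checks against a malicious dealer and shareholders. As background, here is a minimal Shamir sharing and reconstruction sketch over a prime field; the modulus and parameters are illustrative choices, not the paper's:

```python
# Plain Shamir secret sharing over GF(P): any t shares reveal nothing,
# any t+1 shares reconstruct the secret by Lagrange interpolation at 0.
# VSS adds verification on top of exactly this kind of sharing.
import random

P = 2**61 - 1  # prime modulus (toy choice)

def share(secret, t, n):
    """Evaluate a random degree-t polynomial with constant term = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t+1 shares."""
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P  # divide via Fermat
    return s

shares = share(42, t=1, n=4)          # n > 3t with t = 1, as in the talk
assert reconstruct(shares[:2]) == 42  # any t+1 = 2 shares suffice
```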

The talk will have two parts. In the first part I will briefly talk about the kind of problems that I looked into during my PhD. The second part will be related to the Crypto paper.

S. Thomson, RHUL

We revisit the topic of joint security for combined public key schemes, wherein a single keypair is used for both encryption and signature primitives in a secure manner. While breaking the principle of key separation, such schemes have attractive properties and are sometimes used in practice. We give a general construction for a combined public key scheme having joint security that uses IBE as a component and that works in the standard model. We provide a more efficient direct construction, also in the standard model. We then consider the problem of how to build signcryption schemes from jointly secure combined public key schemes. We provide a construction that uses any such scheme to produce a triple of schemes - signature, encryption, and signcryption - that are jointly secure in an appropriate and strong security model.

Joint work with Kenny Paterson, Jacob Schuldt and Martijn Stam.

P. Scholl, U. Bristol

A key problem with the original implementation of the Gentry Fully Homomorphic Encryption scheme was the slow key generation process. Gentry and Halevi provided a fast technique for 2-power cyclotomic fields. We present an extension of the Gentry-Halevi key generation technique to arbitrary cyclotomic fields. Our new method is roughly twice as efficient as the previous best methods. Our estimates are backed up with experimental data.

C. Fournet, Microsoft

Type systems are effective tools for verifying the security of cryptographic programs. They provide automation, modularity and scalability, and have been applied to large security protocols. However, they traditionally rely on abstract assumptions on the underlying cryptographic primitives, expressed in symbolic models. Cryptographers usually reason on security assumptions using lower level, computational models that precisely account for the complexity and success probability of attacks. These models are more realistic, but they are harder to formalize and automate.

We present the first modular automated program verification method based on standard cryptographic assumptions. We show how to verify ideal functionalities and protocols written in ML by typing them against new cryptographic interfaces using F7, a refinement type checker coupled with an SMT-solver. We develop a probabilistic core calculus for F7 and formalize its type safety in Coq.

We build typed module and interfaces for MACs, signatures, and encryptions, and establish their authenticity and secrecy properties. We relate their ideal functionalities and concrete implementations, using game-based program transformations behind typed interfaces. We illustrate our method on a series of protocol implementations.

Joint work with Markulf Kohlweiss and Pierre-Yves Strub.

M. Stam, U. Bristol

We construct a new compression function H from 3n bits to 2n bits that uses two parallel calls to an ideal primitive (an ideal blockcipher or a public random function) from 2n to n bits. This is similar to MDC-2 or the recently proposed MJH. However, unlike these proposals, we achieve collision resistance up to close to 2^(2n/3) queries. This considerably improves upon the existing trivial bound 2^(n/2) (e.g. for MDC-2). Our result relies on two technical innovations.

Firstly, we give an abstract treatment using game playing to describe common techniques employed to prove preimage and collision security of compression functions based on ideal primitives. This abstraction allows us to refine existing techniques and to prove stronger results than were hitherto known. We believe that our framework elucidates existing proofs and consider it of independent interest.

Secondly, the construction itself. A key challenge for our design is to bound the yield, that is the number of full compression function evaluations, in terms of the number of queries to the underlying primitives. To this end, we use an innovative preprocessing function C where any given valid input pair to the underlying ideal primitives corresponds to an incidence between a point and a line in the affine plane GF(2^n)^2. A classical result from discrete geometry - the Szemerédi-Trotter theorem over finite fields - is subsequently applied to bound the number of these incidences, and with it the yield. An appropriate post-processing function together with a careful (non-trivial) analysis employing our new proof techniques then formally gives us the remarkable collision-resistance bound.
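The point-line incidence counting behind the yield bound can be made concrete with a toy exhaustive count in a small affine plane. This illustrates only the combinatorial object, not the compression function construction itself:

```python
# Exhaustive incidence count in the affine plane over GF(q). Every
# non-vertical line y = a*x + b contains exactly q points, so all q^2 such
# lines and all q^2 points produce q^3 incidences in total; Szemeredi-
# Trotter-type bounds say far fewer are possible for small subsets of
# points and lines, which is what caps the yield in the proof.
q = 7  # a small prime, so the integers mod q form the field GF(q)

points = [(x, y) for x in range(q) for y in range(q)]
lines = [(a, b) for a in range(q) for b in range(q)]  # y = a*x + b mod q

incidences = sum(1 for (x, y) in points for (a, b) in lines
                 if (a * x + b) % q == y)
assert incidences == q**3  # each of the q^2 lines hits exactly q points
```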

Joint work with Dimitar Jetchev and Onur Özen.

C. Cremers, ETH Zurich

Many recent advances in formal analysis of security protocols have extended the scope and precision of these models and the corresponding tools. However, the underlying adversary model has remained close to the original Dolev-Yao setting, where the adversary fully controls the network and can statically corrupt selected parties. In contrast, security models for AKE protocols have become stronger over time, giving the adversary more and more control over parties, ranging from dynamic corruption to compromise of the local state of parties and revealing the random numbers they generate. In this talk, we highlight some of our recent contributions to closing this gap, by extending the formal models as well as the corresponding tool support.

B. Groza, University of Timisoara

As systems are getting larger and more complex, manual analysis becomes tedious and automatic verification is a necessity. This has often been said; however, after successful automatic verification one should keep several issues in mind: models may have errors, model checkers may have bugs and, moreover, not all attacks are easy (or even possible) to formalize. In my talk I will focus on two kinds of attacks for which there is little or no support for automatic verification: guessing and resource exhaustion attacks. I will present our attempts to formalize these attacks in order to make them suitable for automatic verification in the AVANTSSAR project. We have defined guessing rules that are intended to support both (traditional) off-line guessing and on-line guessing in the case where the attack cannot be detected by honest protocol participants. For resource exhaustion we have used cost-based rules, and we distinguish between excessive but legal protocol use (e.g., flooding) and illegal protocol manipulation that causes participants to waste computation time without reaching the protocol goals. We also highlight attacks that are undetectable by the targeted honest agents or by all protocol participants (both in the case of guessing and resource exhaustion), and present results on practical examples.

T. Shrimpton, Portland State University

For more than a decade, researchers have considered the question of whether or not the TLS Record Protocol provides provable guarantees of data privacy and integrity. The progression of answers has been: no, not "generically"; yes, if one restricts to specific encryption modes of operation (CBC and CTR), but only under unrealistic assumptions about TLS in practice; yes, under fewer assumptions, but relative to a definition of security that is hard to understand. We'll explore some recent work that answers yes, relative to standard security definitions, and under much more realistic use assumptions. In fact, it can even achieve a bit more than we thought, length-hiding authenticated encryption, as long as an underlying authentication tag is not too short.

We'll then turn this nice positive result on its head, by considering the use of TLS (or any provably secure authenticated encryption scheme) as a client-proxy tunnel through which webpages are downloaded. We'll see that in this setting even a passive eavesdropper can accurately determine which webpage (out of a large, known universe) was downloaded. What's more, the passive attack is very simple, using only coarse measurements about the encrypted channel (such as total bandwidth of the data transfer), and it works even if sophisticated countermeasures are applied prior to sending the web data through the encrypted tunnel.
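A minimal sketch of the passive attack idea, with entirely made-up profile numbers: the eavesdropper pre-records one coarse feature per known page and matches an observed encrypted transfer against those profiles.

```python
# Toy website-fingerprinting attack: the eavesdropper sees only a coarse
# feature of the encrypted tunnel (total bytes transferred) and does
# nearest-profile matching against pre-recorded page sizes. All numbers
# are invented for illustration.
profiles = {               # page -> previously observed transfer size (bytes)
    "news_frontpage": 1_843_200,
    "webmail_login":     96_400,
    "video_portal":   7_340_032,
}

def classify(observed_bytes):
    """Guess which page was fetched from a single coarse measurement."""
    return min(profiles, key=lambda page: abs(profiles[page] - observed_bytes))

# Even if padding inflates the transfer by ~5%, one feature often suffices:
assert classify(int(96_400 * 1.05)) == "webmail_login"
```

Real attacks use richer features (packet sizes, directions, timing), which is why coarse countermeasures applied inside the tunnel fail.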

A. Ganesh, U. Bristol

We study defence measures against scanning and model the interaction between worms and detectors as a game. We use epidemiological modelling to characterise the outcome of the game (the pay-off function), as a function of the strategies of the worm and the detector. We design detection rules that are optimal against scanning worms with known characteristics. We then identify specific detection rules that are close to optimal, in a mathematically precise sense, against any scanning worm.

Joint work with Dinan Gunawardena, Peter Key, Laurent Massoulie and Jacob Scott.
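The epidemiological flavour of the model can be sketched with a toy discrete-time susceptible-infected simulation; the update rule and all parameters below are illustrative assumptions, not the paper's actual model.

```python
# Toy SI ("susceptible-infected") worm epidemic: each infected host's scans
# convert susceptibles at rate beta, while a detector quarantines infected
# hosts at rate delta. The detector's choice of delta (and its false-alarm
# cost, omitted here) is its strategy in the game.
def epidemic(n=100_000, i0=10, beta=2.0, delta=0.0, steps=50):
    s, i = float(n - i0), float(i0)
    for _ in range(steps):
        new = min(beta * i * s / n, s)  # scans that land on susceptibles
        cured = delta * i               # hosts quarantined by the detector
        s, i = s - new, max(i + new - cured, 0.0)
    return i                            # infected population at the end

no_defence = epidemic(delta=0.0)
with_detector = epidemic(delta=0.5)
assert with_detector < no_defence  # detection rate is the defender's lever
```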

S. Tillich, U. Bristol

Everything is getting digital nowadays, so why should money be any different? This talk briefly reviews the development of money (digital and otherwise) and then dives into Bitcoin, an anonymous, decentralized digital currency based on cryptography. Apart from the purely technical aspects, we will also look at the "bigger picture" which shapes the ecosystem of this real-world system - trust, stability/volatility, and human behaviour. We'll also briefly look at another (centralized, non-anonymous) digital currency in order to flesh out the common properties of digital money in general.

T. Chothia, U. Birmingham

In this talk I will describe how mutual information and min-entropy can be used to quantify information leaks from secure systems. These measures provide meaningful definitions of information leakage that can be applied in a wide range of situations. Using statistical estimation to calculate these values makes it possible to use these techniques to directly test implemented systems. I'll discuss some results on the asymptotics of the estimates that make it possible to calculate confidence intervals. As an example, I'll discuss a time-based traceability attack against the RFID chip in e-passports.

This is joint work with Apratim Guha.
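A minimal sketch of the first measure mentioned above: estimating the mutual information I(S;O) between a secret S and an observation O from a table of joint samples. The example channel, an observation that reveals exactly the secret's parity, is hypothetical and chosen so the leak is exactly one bit.

```python
# Estimate an information leak as the mutual information between secret
# and observation, computed from empirical joint frequencies.
import math
from collections import Counter

def mutual_information(pairs):
    """I(S;O) in bits from a list of (secret, observation) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(s for s, _ in pairs)
    py = Counter(o for _, o in pairs)
    return sum(c / n * math.log2((c / n) / ((px[s] / n) * (py[o] / n)))
               for (s, o), c in pxy.items())

# Secret uniform on {0,1,2,3}; the observation is the secret's parity.
samples = [(s, s % 2) for s in range(4) for _ in range(1000)]
assert abs(mutual_information(samples) - 1.0) < 1e-9  # exactly 1 bit leaks
```

In practice the samples come from measurements of a real system (e.g. response times), and the asymptotic results mentioned in the talk correct for the bias of this plug-in estimator.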

A. O'Neill, Boston University

We consider various security notions for functional encryption (FE), studying relations among them and their achievability. First, we show that indistinguishability (IND) and semantic security (SS) based notions of security are inequivalent for FE, in contrast to the classical setting of public-key encryption. Improving on a contemporaneous result of Boneh, Sahai, and Waters, we go on to show that SS based security is impossible to achieve due to its implicit incorporation of key-revealing selective-opening attacks (SOA-K). Motivated by this impossibility result, we then introduce several relaxations of the SS notion to capture sans-SOA-K semantic security and show they are equivalent to IND for important functionalities corresponding to existing forms of predicate encryption, for which positive results under IND are known.

Partly based on joint work with Mihir Bellare and Srdjan Krstic.

G. Kyureghyan, INRIA Paris-Rocquencourt and U. Magdeburg

An S-Box of a block cipher is a mapping between vector spaces over the binary field GF(2). If an S-Box has a property characteristic of a linear mapping, it is a proven or potential weakness of the cipher. Hence mappings used as S-Boxes must be as far from linear as possible. However, there are several possibilities for defining the non-linearity of a mapping. In the talk we introduce several notions of non-linearity. The non-linearity concepts are defined for vector spaces, but some of the classes of non-linear mappings are better studied over finite fields. We present the advantages of endowing the vector space with a field structure. Finally, we present an explicit construction of permutations on finite fields satisfying several properties required in cryptology.
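One of the standard non-linearity notions can be made concrete for a single Boolean function: the Hamming distance to the nearest affine function, computable via the Walsh transform as NL(f) = 2^(n-1) - max_a |W_f(a)| / 2. A small sketch (the 3-bit majority function is just a worked example; S-box non-linearity minimises this quantity over all non-zero linear combinations of the output bits):

```python
# Non-linearity of a Boolean function via its Walsh spectrum.
def walsh(f, n):
    """W_f(a) = sum over x of (-1)^(f(x) XOR a.x), for every mask a."""
    return [sum((-1) ** (f[x] ^ bin(a & x).count("1") % 2)
                for x in range(2 ** n))
            for a in range(2 ** n)]

def nonlinearity(f, n):
    """Distance from f to the nearest affine function."""
    return 2 ** (n - 1) - max(abs(w) for w in walsh(f, n)) // 2

n = 3
maj3 = [1 if bin(x).count("1") >= 2 else 0 for x in range(8)]  # majority
lin = [bin(x).count("1") % 2 for x in range(8)]                # x1^x2^x3

assert nonlinearity(lin, n) == 0   # linear functions have distance 0
assert nonlinearity(maj3, n) == 2  # the maximum possible for n = 3
```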

A. Sasse, UCL

Service providers use CAPTCHAs to stop bots from creating accounts. It is an example of a security solution that makes genuine human users work to keep the bots out. In the March issue of Scientific American, technology writer David Pogue estimates that CAPTCHAs waste 17 years of human effort every single day. A small but growing number of security researchers have made similar arguments about passwords, certificates, and anti-phishing tools - they consume individual time and effort for little or no discernible reduction of risk. In this talk, I will review this research, and then present examples from our recent research on the impact of such mechanisms on productivity in corporate contexts - and show how designing more usable security reduces cost and increases compliance.

N. Gama, Université de Versailles

In 1996, Ajtai proved that finding short vectors in uniformly drawn lattices from the particular class of SIS lattices allows one to solve lattice problems in the worst case. This worst-case to average-case reduction has motivated many provably-secure lattice-based constructions over the class of SIS lattices. Unfortunately, the overhead in volume and dimension induced by this reduction is quite large, resulting in cryptographic schemes with impractically high dimensions and key sizes, such as 1GB public keys.

In this paper, we propose a revision of the worst-case to average-case reduction, and prove the average-case hardness of the dense modular subset sum problem, as well as of the approximate shortest vector problem on generic random lattices. This closely concurs with experimental results. Instead of resorting to the SIS problem, as is usually the case, our investigation leads to cryptosystems based on random lattices. The other advantage of our work is that, instead of having to rely on trapdoors for SIS lattices, we can use the significantly simpler and well-known Trapgen for random lattices.

Joint work with Malika Izabachene

P. Farshim, TU Darmstadt

In this talk, we propose a new cryptographic primitive called Delegatable Homomorphic Encryption (DHE). This allows a Trusted Authority to control/delegate the evaluation of circuits over encrypted data to untrusted workers/evaluators by issuing tokens. This primitive can be seen both as a public-key counterpart to Verifiable Computation, where input generation and output verification are performed by different entities, and as a generalisation of Fully Homomorphic Encryption enabling control over computations on encrypted data. Our primitive comes with a series of extra features: 1) there is a one-time setup procedure for all circuits; 2) senders do not need to be aware of the functions which will be evaluated on the encrypted data, nor do they need to register keys; 3) tokens are independent of senders and receivers; and 4) receivers are able to verify the correctness of computation given short auxiliary information on the input data and the function, independently of the complexity of the computed circuit. We give a modular construction of such a DHE scheme from three components: Fully Homomorphic Encryption (FHE), Functional Encryption (FE), and a (customised) MAC. As a stepping stone, we first define Verifiable Functional Encryption (VFE), and then show how one can build a secure DHE scheme from a VFE and an FHE scheme. We also show how to build the required VFE from a standard FE together with a MAC scheme. All our results hold in the standard model. Finally, we show how one can build a verifiable computation (VC) scheme generically from a DHE. As a corollary, we get the first VC scheme which remains verifiable even if the attacker can observe verification results. (Joint work with Manuel Barbosa.)

S. Goldwasser, MIT

We consider multiparty computation (MPC) within a setting where a malicious adversary may corrupt some parties and *leak* information about the secret state of all honest parties. Leakage queries are chosen adaptively, throughout the protocol execution. We will show how to construct an MPC protocol secure against continual memory leakage, with a (necessary) one-time leak-free preprocessing phase in which the players' inputs are shared. More specifically, we prove that any malicious adversary who corrupts a (1 − ε) fraction of all parties (for any constant ε > 0) and can continuously leak information about the secret state of honest parties for an unbounded number of executions of the MPC protocol, learns nothing beyond the evaluated function values, using the classical simulation-based security definitions. Our construction is based on a number of new primitives, which can be instantiated under DDH and the existence of FHE. This result is joint with Boyle, Jain, and Kalai. There are two new tools we use to achieve the above result, which will be discussed as well.

- A compiler for transforming any algorithm into one that is secure in the presence of an attacker which gets to observe partial information on the internal state of the computation during execution. This result is unconditional and is of relevance both for running cryptographic algorithms in the presence of side-channel attacks, as well as for running non-cryptographic algorithms, such as a proprietary search algorithm or a game, on a cloud server where parts of the execution's internals might be observed. Joint work with G. Rothblum.
- An MPC protocol for computing any efficient function f, without any leak-free stage, with the weaker guarantee that an adversary who corrupts any subset of parties and leaks L bits on the secret states of honest parties learns nothing beyond the function output and L bits of information about the inputs. The construction relies on the linear assumption over bilinear groups. This result is with Kalai, Jain, Garg and Sahai.

M. Albrecht, Paris

We present a formal treatment of cryptographic constructions - commonly known as "Polly Cracker" - based on the hardness of computing remainders modulo an ideal over multivariate polynomial rings. This work is motivated by the observation that the Ideal Membership (IM) problem is one of the most natural candidates for building homomorphic encryption schemes. To this end, we start by formalising and studying the relation between the ideal remainder problem and the problem of computing a Gröbner basis.

We show both positive and negative results.

On the negative side, we define a symmetric Polly Cracker encryption scheme and prove that this scheme only achieves bounded CPA security under the hardness of the IM problem. Furthermore, using results from computational commutative algebra we show that a large class of algebraic transformations cannot convert this scheme to a fully secure Polly Cracker-type scheme.

On the positive side, we formalise noisy variants of the ideal membership, ideal remainder, and Gröbner basis problems. These problems can be seen as natural generalisations of the LWE problem and the approximate GCD problem over polynomial rings. After formalising and justifying the hardness of the noisy assumptions we show - following the recent progress on homomorphic encryption - that noisy encoding of messages results in a fully IND-CPA secure somewhat homomorphic encryption scheme. Together with a standard symmetric-to-asymmetric transformation for additively homomorphic schemes, we provide a positive answer to the long-standing open problem proposed by Barkee et al. (and later also by Gentry) of constructing a secure Polly Cracker-type cryptosystem reducible to the hardness of solving a random system of equations. Indeed, our results go beyond that by also providing a new family of somewhat homomorphic encryption schemes based on new, but natural, hard problems.
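To make the ideal-membership idea concrete, here is a deliberately insecure, noise-free univariate toy in the Polly Cracker style; it exhibits exactly the homomorphic property the abstract builds on. Everything below is a pedagogical sketch under invented parameters, not a scheme from the paper (and it is trivially breakable, which is precisely why the noisy variants are needed):

```python
# Toy Polly-Cracker-style scheme: the secret key is a point s; the public
# key is a polynomial f with f(s) = 0; a ciphertext is c = r*f + m, i.e.
# the message plus an element of the ideal <f>; decryption evaluates at s.
import random

P = 101  # small prime field for illustration

def polymul(a, b):
    """Multiply coefficient lists modulo P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def evaluate(c, x):
    return sum(co * pow(x, k, P) for k, co in enumerate(c)) % P

s = random.randrange(P)   # secret key: a point in GF(P)
f = [(-s) % P, 1]         # public key: f(X) = X - s, so f(s) = 0

def enc(m):
    r = [random.randrange(P) for _ in range(3)]   # random blinding polynomial
    c = polymul(r, f)                             # an element of the ideal
    c[0] = (c[0] + m) % P                         # add the message
    return c

c1, c2 = enc(7), enc(5)
assert evaluate(c1, s) == 7                                        # dec
assert evaluate([(a + b) % P for a, b in zip(c1, c2)], s) == 12    # c1 + c2
assert evaluate(polymul(c1, c2), s) == 35                          # c1 * c2
```

Sums and products of ciphertexts decrypt to sums and products of plaintexts because the ideal part always vanishes at s; the noisy encodings in the paper keep this structure while blocking the obvious attacks (e.g. finding the roots of f).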

K. Lauter, Microsoft

The possibility of outsourcing computation to the cloud offers businesses and individuals substantial cost-savings, flexibility, and availability of compute resources, but potentially sacrifices privacy. Homomorphic encryption can help address this problem by allowing the user to upload encrypted data to the cloud, which the cloud can then operate on without having the secret key. The cloud can return encrypted outputs of computations to the user without ever decrypting the data, thus providing hosting of data and services without compromising privacy. Important applications include electronic medical data and financial applications, as well as private targeted advertising. The catch is the degradation of performance and issues of scalability and flexibility. This talk will survey the trade-offs when using homomorphic encryption, and highlight scenarios and functionality where homomorphic encryption seems to be the most appropriate solution. In recent work, we showed that homomorphic encryption can even be used to enable private versions of some basic machine learning algorithms.

This talk will cover several pieces of joint work with Michael Naehrig, Vinod Vaikuntanathan, and Thore Graepel.

H. Thiebeauld, RFI-Global

Nowadays most security-sensitive embedded devices must pass a security evaluation before being deployed in the field. Such an evaluation aims to provide a good level of assurance that a payment device, a payment terminal or an ePassport cannot be hacked under reasonable conditions. To this end, an accredited third-party laboratory is in charge of applying the most recent attack techniques to assess a new product. One class of attacks, side-channel techniques, specifically targets cryptographic algorithms running on a (secure) microcontroller with the aim of extracting the secret key, even partially. When a flaw is found, the laboratory needs to quantify the risk that the same attack occurs in the field. This information is later exploited by risk managers to define specific rules regarding the product's deployment.

The talk will provide an overview of the security evaluation context. The objective is to give an insight into the practical way side-channel attacks are conducted on typical algorithms, such as DES, AES or RSA. Moreover, the talk will mention the kind of information an evaluator has at hand to perform a valid assessment in a limited time. Finally, some explanations will be given to understand how an attack is rated.

J. Bos, Microsoft

The performance of the elliptic curve method (ECM) for integer factorization plays an important role in the security assessment of RSA-based protocols as a cofactorization tool inside the number field sieve. The efficient arithmetic for Edwards curves found an application by speeding up ECM. In this presentation I show techniques based on generating and combining addition chains to optimize Edwards ECM in terms of both performance and memory requirements. This makes our approach very suitable for memory-constrained devices such as graphics processing units (GPU). For commonly used ECM parameters we are able to lower the required memory up to a factor 56 compared to the state-of-the-art Edwards ECM approach. Our ECM implementation on a GTX 580 GPU sets a new throughput record, outperforming the best GPU, CPU and FPGA results reported in literature.

This is joint work with Thorsten Kleinjung.

J. Bos, Microsoft

In this presentation I will give a taxonomy of the best known techniques to realize genus-2 based cryptography. Compared to the standardized genus-1 curves, or elliptic curves, arithmetic on genus-2 curves is typically more involved but allows us to work with moduli of half the size. By studying different modular arithmetic approaches on these curves, we present a range of genus-2 implementations. Our side-channel resistant implementation on the Kummer surface breaks the 120 thousand cycle barrier setting a new software speed record for curve arithmetic compared to all previous genus-1 and genus-2 implementations.

This is joint work with Craig Costello, Huseyin Hisil and Kristin Lauter.

Y. Desmedt, UCL and U. Texas

Today several organisations, including the US Government, use clouds to store important data. Guaranteeing reliability and privacy at the same time is a major challenge. The need for privacy is obvious (although often ignored). The need for reliability has been illustrated, for example, when the internet was deliberately disconnected in Egypt (January 2011) and by the accidental destruction of the cell phone network in the Tōhoku area during the March 2011 earthquake. To address the aforementioned concerns, fully homomorphic encryption is often championed. Unfortunately, the state of the art is too slow for use in any reasonable application. A better alternative is secure multiparty computation.

Even when errors are not tolerated, a concern is the extensive use of Verifiable Secret Sharing (VSS). In our approach we avoid the need for each shareholder to rerun the full VSS protocol after each local computation.

We first briefly summarise the prior work on Black Box Secure Multiparty Computation over Non-Abelian Groups with a Passive Adversary. We then explain the progress made on the topic and present a solution for the case where the adversary is active.

K. Paterson, RHUL

The Transport Layer Security (TLS) protocol aims to provide confidentiality and integrity of data in transit across untrusted networks. TLS has become the de facto secure protocol of choice for Internet and mobile applications. DTLS is a variant of TLS that is growing in popularity. In this talk, I will present distinguishing and plaintext recovery attacks against TLS and DTLS. The attacks are based on a delicate timing analysis of decryption processing in the two protocols. I will report experimental results demonstrating the feasibility of the attacks in realistic network environments for several different implementations of TLS and DTLS, including the leading OpenSSL implementations. I'll also discuss different countermeasures for the attacks and how they are being adopted by vendors.

Joint work with Nadhem AlFardan.

S. Thomson, RHUL

We provide a framework enabling the construction of IBE schemes that are secure under related-key attacks (RKAs). Specific instantiations of the framework yield RKA-secure IBE schemes for sets of related-key derivation functions that are non-linear, thus overcoming a current barrier in RKA security. In particular, we obtain IBE schemes that are RKA secure for sets consisting of all affine functions and all polynomial functions of bounded degree. Based on this we obtain the first constructions of RKA-secure schemes for the same sets for the following primitives: CCA-secure public-key encryption, CCA-secure symmetric encryption and signatures. All our results are in the standard model and hold under reasonable hardness assumptions.

Joint work with Mihir Bellare and Kenny Paterson.

F. Regazzoni and A.G. Bayrak, EPFL

Side-channel attacks attempt to uncover secret information from the physical implementations of cryptographic primitives. The design and implementation of cryptosystems resistant against these attacks is a challenge for both hardware and software designers. In view of this problem, this talk proposes automated methodologies for designing robust systems against power-based side-channel attacks.

We demonstrate the effectiveness of our methodologies by automatically protecting user provided unprotected implementations at the circuit level (introducing a random jitter in hardware accelerators), at the processor level (randomly shuffling the execution order of independent instructions or using secure instruction set extensions), or at the software level (instrumenting the compiler to apply certain protection schemes).

M. Huth, Imperial

Future computer-supported systems will run in a world with many more computing devices than humans. Such systems or their units may, maliciously or unintentionally, not meet expectations that stem from the intent of programmers, software baselines, threat analysis, or other sources. And these expectations may have to range over many modalities, e.g. security, privacy, and trustworthiness of device interactions across domains. We suggest that traditional approaches to engineering of secure computer architectures and systems, including those supported by formal verification, will be inadequate for effectively managing security risks faced by such future systems. We here argue that, in particular, a new way of thinking about programming is needed. And we propose a vision of such an approach: programming of payload computation is enriched with an annotation language that compares expectations with observed behaviour and, based on this, generates quantitative trust evidence that forms the basis for enforcing risk management of executing software. We sketch how a judicious combination of tools from language design and quantitative verification may support this way of thinking. And we describe challenges and opportunities that this approach brings.

This is joint work with Jim Huan-Pu Kuo.

A. Becker, EPFL

At Eurocrypt 2010, Howgrave-Graham and Joux described an algorithm for solving hard knapsacks of n elements and density close to 1 in time O(2^{0.337n}) and memory O(2^{0.256n}). This improved on a 30-year-old algorithm by Shamir and Schroeppel with running time O(2^{0.5n}) and memory requirement O(2^{0.25n}). In this talk we will present the following: Using the simple observation that the solution to the knapsack problem, a binary vector, can be represented by two overlapping vectors with coefficients in {-1,0,1}, we obtain a better algorithm with running time O(2^{0.291n}). Furthermore, this technique can be applied to improve information set decoding attacks on linear codes such as those used in McEliece public-key encryption. We can improve recent results on half-distance decoding such as ball-collision decoding and May-Meurer-Thomae ISD, of asymptotic time complexity O(2^{0.056n}) and O(2^{0.054n}), respectively. Previous attempts split the solution vector into disjoint parts. We instead search for the solution as a decomposition into two overlapping vectors, which permits reducing the asymptotic running time to O(2^{0.049n}).

The talk is based on the publications http://eprint.iacr.org/2011/474 and http://eprint.iacr.org/2012/026, joint work with Jean-Sébastien Coron, Antoine Joux, Alexander Meurer and Alexander May.
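
For contrast, the classical meet-in-the-middle baseline that these representation techniques improve on (split the items into two disjoint halves, tabulate all sums of one half, scan the other) can be sketched as follows; the instance and planted solution are illustrative:

```python
# The classical meet-in-the-middle baseline for subset sum: split the
# items into two disjoint halves, tabulate all sums of one half, scan
# the other.  Time and memory O(2^{n/2}); the toy instance below has
# a planted solution.
from itertools import combinations

def subset_sums(items, offset=0):
    """Map each achievable sum to one index set realizing it."""
    table = {}
    for r in range(len(items) + 1):
        for combo in combinations(range(len(items)), r):
            s = sum(items[i] for i in combo)
            table.setdefault(s, [offset + i for i in combo])
    return table

def solve_knapsack(a, target):
    half = len(a) // 2
    left = subset_sums(a[:half])
    for s, combo in subset_sums(a[half:], offset=half).items():
        if target - s in left:
            return left[target - s] + combo
    return None

a = [41, 34, 13, 27, 38, 12, 19, 7]
sol = solve_knapsack(a, 41 + 13 + 12 + 7)    # planted solution
```

The overlapping {-1,0,1} representations discussed in the talk beat this baseline by letting the solution be written in many ways, only one of which needs to be found.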

F.-X. Standaert, UC Louvain

Evaluating side-channel attacks and countermeasures requires determining the amount of information leaked by a target device. For this purpose, information extraction procedures published so far essentially combine a "leakage model" with a "distinguisher". Fair evaluations ideally require exploiting a perfect leakage model (i.e. exactly corresponding to the true leakage distribution) with a Bayesian distinguisher. But since such perfect models are generally unknown, density estimation techniques have to be used to approximate the leakage distribution. This raises the fundamental problem that all security evaluations are potentially biased by both estimation and assumption errors. Hence, the best we can hope for is to be aware of these errors. In this work, we provide and implement methodological tools to solve this issue. Namely, we show how sound statistical techniques allow both quantifying the leakage of a chip, and certifying that the amount of information extracted is close to the maximum value that would be obtained with a perfect model.

Joint work with François Durvaux and Nicolas Veyrat-Charvillon.

G. Neven, IBM Zurich

Password-authenticated secret sharing (PASS) schemes, first introduced by Bagherzandi et al. at CCS 2011, allow users to distribute data among several servers so that the data can be recovered using a single human-memorizable password, but no single server (or even no collusion of servers up to a certain size) can mount an off-line dictionary attack on the password or learn anything about the data. We propose a new, universally composable (UC) security definition for the two-server case (2PASS) in the public-key setting that addresses a number of relevant limitations of the previous, non-UC definition. For example, our definition makes no prior assumptions on the distribution of passwords, preserves security when honest users mistype their passwords, and guarantees secure composition with other protocols in spite of the unavoidable non-negligible success rate of online dictionary attacks. We further present a concrete 2PASS protocol and prove that it meets our definition. Given the strong security guarantees, our protocol is surprisingly efficient: in its most efficient instantiation under the DDH assumption in the random-oracle model, it requires fewer than twenty elliptic-curve exponentiations on the user's device. We achieve our results by careful protocol design and by exclusively focusing on the two-server public-key setting.

Joint work with Jan Camenisch, Anja Lehmann, and Anna Lysyanskaya.

P. Mohassel, U. Calgary

Private function evaluation (PFE) is a variant of secure multiparty computation wherein the function (circuit) being computed is also considered to be a private input of one of the participants. In this talk, I will first review the existing solutions for PFE in a variety of settings and identify their shortcomings. Then, I will describe our new framework for designing PFE and show how it allows us to obtain new constructions with better asymptotic and practical efficiency. I will end by pointing out a few directions for future research.

J. Steinberger, Tsinghua Univ.

A key-alternating cipher (or "generalized Even-Mansour cipher") can be seen as a theoretical abstraction of AES, whereby one alternates the XOR with a subkey and the application of a fixed permutation. The indistinguishability of such a t-round cipher in an idealized model has been conjectured to reach N^{t/(t+1)} queries for N = 2^n (a known upper bound). There has been a recent spate of results on this conjecture:

- Bogdanov et al. 2011: prove security N^{2/3} for t >= 2 [sharp only for t = 2]
- Steinberger 2012: proves security N^{3/4} for t >= 3 [sharp only for t = 3]
- Lampe and Seurin 2012: prove security N^{t/(t+2)} for even t

Recently, with my student Shan Chen, we proved the long-sought security bound of N^{t/(t+1)}. Our result uses Patarin's H-coefficient technique. In the talk I will review the H-coefficient technique and give some details of our result. No prerequisites needed!
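
A toy model of the object under study, assuming an 8-bit block and a randomly drawn public permutation, might look like this sketch:

```python
# Toy t-round key-alternating (generalized Even-Mansour) cipher on
# 8-bit blocks: alternate XORing a subkey with applying a fixed
# public permutation.  n = 8 is an illustrative assumption; the
# analysis concerns large n and an ideal permutation.
import random

rng = random.Random(0)
PERM = list(range(256))
rng.shuffle(PERM)                       # the fixed public permutation P
PERM_INV = [0] * 256
for i, v in enumerate(PERM):
    PERM_INV[v] = i

def encrypt(x, keys):
    """E(x) = k_t ^ P(... P(k_1 ^ P(k_0 ^ x)) ...), t = len(keys)-1."""
    x ^= keys[0]
    for k in keys[1:]:
        x = PERM[x] ^ k
    return x

def decrypt(y, keys):
    for k in reversed(keys[1:]):
        y = PERM_INV[y ^ k]
    return y ^ keys[0]

keys = [rng.randrange(256) for _ in range(4)]    # t = 3 rounds
```

The security question is how many queries to E and P an adversary needs before this deviates detectably from a random permutation.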

A. Dent, Qualcomm.

This (non-technical) talk will discuss the difference between security in academia and security in industry, based on the observations of a ten-year veteran of the academic world who has shifted into the world of commercial smartphone security. We'll frame this conversation by looking at the different ways in which security can be thought of as a game, the strategies for winning these games, and why some companies and individuals seem to do better than others.

S. Galbraith, U. Auckland.

I will survey results on lattice-based public key signatures and discuss the constraints on achieving short signatures. A new scheme will be presented. Among schemes in the literature whose security follows from standard assumptions, the new scheme achieves the shortest signatures.

M. Rajarajan, City University.

For a security analyst, determining whether a network asset has been compromised by, or is vulnerable to, an attack is an important task. The process of accomplishing this task, however, is often tedious and involves studying the activities of network hosts and manually piecing together evidence collected from a range of security devices. Although many correlation frameworks exist, most rely on prior domain knowledge. This presentation will introduce a new framework for observing and capturing host behaviour by analysing IDS logs to detect progressive attacks in complex networks.

C. Orlandi, Aarhus University.

In the setting of secure multiparty computation, a set of parties wish to compute a joint function of their inputs, while preserving properties like privacy, correctness, and independence of inputs. One security property that has typically not been considered in the past relates to the length or size of the parties' inputs. This is despite the fact that in many cases the size of a party's input can be confidential. The reason for this omission seems to have been the folklore belief that, as with encryption, it is impossible to carry out non-trivial secure computation while hiding the size of parties' inputs. However some recent results (e.g., Ishai and Paskin at TCC 2007, Ateniese, De Cristofaro and Tsudik at PKC 2011) showed that it is possible to hide the input size of one of the parties for some limited class of functions, including secure two-party set intersection. This suggests that the folklore belief may not be fully accurate.

In this work, we initiate a theoretical study of input-size hiding secure computation, and focus on the two-party case. We present definitions for this task, and deal with the subtleties that arise in the setting where there is no a priori polynomial bound on the parties' input sizes. Our definitional study yields a multitude of classes of input-size hiding computation, depending on whether a single party's input size remains hidden or both parties' input sizes remain hidden, and depending on who receives output and if the output size is hidden from a party in the case that it does not receive output. We prove feasibility and impossibility results for input-size hiding secure two-party computation.

Mark Marson/Elke De Mulder, Cryptography Research Inc.

We describe an attack against nonce leaks in 384-bit ECDSA using an FFT-based attack due to Bleichenbacher. The signatures were computed by a modern smart card. We extracted the low-order bits of each nonce using a template-based power analysis attack against the modular inversion of the nonce. We also developed a BKZ-based method for the range reduction phase of the attack, as it was impractical to collect enough signatures for the collision searches originally used by Bleichenbacher. We confirmed our attack by extracting the entire signing key using a 5-bit nonce leak from 4000 signatures.
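
The algebraic relation such nonce attacks exploit can be sketched with toy numbers; this only shows the underlying signing equation, not the FFT or lattice machinery, and no actual curve is involved (q and the key below are illustrative stand-ins):

```python
# The relation a nonce leak exploits: an (EC)DSA signature satisfies
# s = k^{-1} * (h + r*d) mod q, so a *fully* known nonce k reveals
# the key as d = (s*k - h) * r^{-1} mod q.  Bleichenbacher's FFT
# technique recovers d from many *partial* nonce leaks; this sketch
# shows only the underlying equation, with a toy modulus instead of
# a 384-bit group order.
q = 2**61 - 1                  # stand-in for the group order (prime)
d = 123456789                  # secret signing key (toy value)

def sign(h, k, r):
    """Return the s-component for hash h, nonce k, and r = x([k]G)."""
    return pow(k, -1, q) * (h + r * d) % q

h, k, r = 42, 987654321, 1111  # r is arbitrary here: no curve involved
s = sign(h, k, r)
recovered = (s * k - h) * pow(r, -1, q) % q
```

With only a few low-order bits of each k known, each signature instead yields a hidden-number-problem constraint on d, which is where the FFT collision search and BKZ range reduction come in.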

Georg Fuchsbauer, IST Austria.

We introduce policy-based signatures (PBS), where a signer can only sign messages conforming to some authority-specified policy. The main requirements are unforgeability and privacy, the latter meaning that signatures do not reveal the policy. PBS offer value along two fronts: (1) On the practical side, they allow a corporation to control what messages its employees can sign under the corporate key. (2) On the theoretical side, they unify existing work, capturing other forms of signatures as special cases or allowing them to be easily built. Our work focuses on definitions of PBS, proofs that it is realizable for arbitrary policies, efficient constructions for specific policies, and a few representative applications.

Tore Kasper Frederiksen, U. Aarhus.

One of the main tools to construct secure two-party computation protocols are Yao garbled circuits. Using the cut-and-choose technique, one can get reasonably efficient Yao-based protocols with security against malicious adversaries. At TCC 2009, Nielsen and Orlandi [NO09] suggested to apply cut-and-choose at the gate level, while previously cut-and-choose was applied on the circuit as a whole. This appealing idea allows for a speed up with practical significance (in the order of the logarithm of the size of the circuit) and has become known as the LEGO construction. Unfortunately the construction in [NO09] is based on a specific number-theoretic assumption and requires public-key operations per gate of the circuit.

The main technical contribution of this work is a new XOR-homomorphic commitment scheme based on oblivious transfer, that we use to cope with the problem of connecting the gates in the LEGO construction. Our new protocol has the following advantages:

- It maintains the efficiency of the LEGO cut-and-choose.
- After a number of seed oblivious transfers linear in the security parameter, the construction uses only primitives from Minicrypt (i.e., private-key cryptography) per gate in the circuit (hence the name MiniLEGO).
- In contrast to the original LEGO, MiniLEGO is compatible with all known optimizations for Yao garbled gates (row reduction, free-XOR, point-and-permute).

Paweł Morawiecki, Polish Academy of Science.

The AES and SHA-3 competitions are generally viewed as having provided a tremendous boost to the cryptographic research community's understanding of block ciphers and hash functions. Very recently, a new initiative called the CAESAR competition has been proposed. This time it encourages the symmetric crypto community to focus on authenticated encryption schemes (also known as authenticated ciphers). In the talk a new authenticated cipher will be presented, along with the interesting points and obstacles we faced during the design process. It is joint work with K. Gaj, E. Homsirikamol (George Mason University, USA), K. Matusiewicz (Intel, Poland), J. Pieprzyk (Queensland University of Technology, Australia), M. Rogawski (Cadence Design Systems, USA), M. Srebrny (Polish Academy of Science, Poland) and M. Wojcik (University of Bristol, UK).

Dale Sibborn, Royal Holloway.

Several recent and high-profile incidents give cause to believe that randomness failures of various kinds are endemic in deployed cryptographic systems. In the face of this, it behoves cryptographic researchers to develop methods to immunise -- to the extent that it is possible -- cryptographic schemes against such failures. This work considers the practically-motivated situation where an adversary is able to force a public key encryption scheme to reuse random values, and functions of those values, in encryption computations involving adversarially chosen public keys and messages. It presents a security model appropriate to this situation, along with variants of this model. It also provides necessary conditions on the set of functions used in order to attain this security notion, and demonstrates that these conditions are also sufficient in the Random Oracle Model. Further standard model constructions achieving weaker security notions are also given, with these constructions having interesting connections to other primitives including: pseudo-random functions that are secure in the related key attack setting; Correlated Input Secure hash functions; and public key encryption schemes that are secure in the auxiliary input setting (this being a special type of leakage resilience).

Eerke Boiten, Uni Kent.

NHS England have a plan to create one big medical database for research and everything else. As computer scientists with an eye towards privacy and security, we should probably be worried about this. This talk will highlight a number of the issues, such as pseudonymisation and re-identification. It's also such a charged issue in the press that we can easily and happily deviate into a discussion of the favourite worries and/or conspiracies put forward by the audience in relation to the scheme. Eerke Boiten is a formerly theoretical computer scientist, working in the School of Computing at Kent and currently director of the University's Interdisciplinary Cyber Security Research Centre, who found a megaphone just when the interface between computing, security, and society became interesting.

Stephan Neumann, T.U. Darmstadt.

Internet voting continues to raise interest both within research and society. To conduct elections over the Internet, a variety of cryptographic building blocks and protocols have been proposed in the literature. While these components provide strong security guarantees from a theoretical perspective, such as end-to-end verifiability or long-term secrecy, their practical security is often not sufficiently considered. For instance, the JCJ/Civitas scheme provides coercion-resistance under the assumption that the voting platform is trustworthy and voters behave reasonably in case of coercion. Consequently, the level of practical security might significantly differ from theoretical security based on sound security proofs. In this talk, we review human-centered improvements around the well-known end-to-end verifiable Helios voting system. Additionally, we review the recently invented code-voting-based scheme Pretty Understandable Democracy, which has been built following a human-centered design approach. We compare the systems based on their theoretical security models and their practical security model, particularly focusing on the voter. Finally, further research directions in the area of human-centered Internet voting are highlighted.

Tim Jones, U. Bristol.

Various elliptic curve cipher schemes have been developed in recent years. Elliptic curves tend to produce faster, smaller schemes perfectly suited to slower, smaller computers such as smart cards. Unfortunately, smart cards tend to find themselves in areas subject to various forms of physical leakage. Whilst these schemes are believed to be secure in theory, the practicalities of implementing them can often cause some potentially crippling security flaws. In this talk, I will outline and discuss some of these flaws and possible countermeasures, including issues relating to generating random points, computing modular square roots, and selecting secure curves.
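
As an example of the modular square root issue, the common shortcut for primes p = 3 (mod 4) is a single exponentiation, but a careless implementation omits the residuosity check; a sketch (the Mersenne prime is illustrative):

```python
# Computing a modular square root, needed e.g. when generating random
# curve points from x-coordinates.  For primes p = 3 (mod 4) a single
# exponentiation suffices, but the result must be checked, since the
# input may be a non-residue.
def mod_sqrt(a, p):
    """Square root of a mod prime p with p % 4 == 3, or None."""
    assert p % 4 == 3
    r = pow(a, (p + 1) // 4, p)
    return r if r * r % p == a % p else None

p = 2**127 - 1                 # Mersenne prime; p % 4 == 3
x = 1234567890
root = mod_sqrt(x * x % p, p)
```

For p = 1 (mod 4) the full Tonelli-Shanks algorithm is needed, and skipping the final check on either path can silently accept points off the intended curve.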

A. Lysyanskaya, Brown U.

Although anonymous credentials are often cited as motivation for the study of blind signatures, there is no known way to obtain anonymous credentials directly from blind signatures (although indirectly, since blind signatures imply one-way functions and secure computation, they imply anonymous credentials as well).

In this talk, we will define and construct blind signatures _with attributes_. Our new notion is a convenient building block for *light* anonymous credential systems in which an anonymous credential can only be used once. The construction we propose is efficient: it requires just a few exponentiations in a prime-order group in which the decisional Diffie-Hellman problem is hard. Thus, for the first time, we give a provably secure construction (under sequential composition) of anonymous credentials that can work in the elliptic curve setting without bilinear pairings and is based on the DDH assumption.

In contrast, prior provably secure constructions were based on the RSA group or on groups with pairings, which made them prohibitively inefficient for mobile devices, RFIDs and smartcards. The only prior construction with comparable efficiency, due to Brands, does not have a proof of security, and in fact, we have shown that there are fundamental limitations to proving it unforgeable.

Based on joint work with Foteini Baldimtsi.

J. Greenwood, Bristol U.

SCADA PLC devices are at the centre of every industrial process today. They switch power supplies on and off, pump oil around the globe, and limit nuclear refinement centrifuges. With the modern trend of internet connectivity, the security of these devices becomes ever more important. This talk will discuss the basics of SCADA, walk through the assessment of a common PLC, and discuss the impacts of PLC security on Critical National Infrastructure.

E. Kashefi, Edinburgh U.

We present a family of protocols to achieve secure delegated classical computation where the client and the server both have limited classical and quantum computing capacity. These protocols are implementable with the quantum technology currently available in the lab. We discuss how we could exploit them for the construction of hybrid networks of classical protocols with quantum gadgets boosting efficiency and security.

Georg Fuchsbauer, IST Austria.

Constrained pseudorandom functions were introduced independently by Boneh and Waters (Asiacrypt'13), Kiayias et al. (CCS'13), and Boyle et al. (PKC'14). Whereas for standard pseudorandom functions (PRF) a key is used to evaluate the PRF on all inputs in the domain, constrained PRFs offer the additional functionality to derive ``constrained'' keys with which the PRF can be evaluated only on a subset of the domain.

The three papers all show that the classical PRF construction by Goldreich, Goldwasser and Micali directly yields a constrained PRF where one can compute constrained keys to evaluate the PRF on all inputs with a given prefix. This constrained PRF has already found many interesting applications. Unfortunately, the existing security proofs only show selective security; full security can only be obtained via complexity leveraging, which loses an exponential factor $2^N$ in security, where $N$ is the input length.
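
A minimal sketch of this prefix-constrained GGM construction, using SHA-256 as a stand-in for the length-doubling PRG (an assumption of this sketch, not of the papers):

```python
# Sketch of the GGM tree PRF and its prefix-constrained keys: the PRF
# value is the leaf reached by walking PRG choices down a binary
# tree, so the internal node for a prefix is a key for every input
# under that prefix.  SHA-256 stands in for the length-doubling PRG.
import hashlib

def prg(seed, bit):
    """One half of a length-doubling PRG G(s) = G_0(s) || G_1(s)."""
    return hashlib.sha256(seed + bytes([bit])).digest()

def ggm_eval(key, bits):
    for b in bits:
        key = prg(key, b)
    return key

def constrain(key, prefix):
    """Derive a key valid for all inputs starting with `prefix`."""
    return ggm_eval(key, prefix)

k = b'master secret'
x = [1, 0, 1, 1, 0, 0, 1, 0]              # an 8-bit input
ck = constrain(k, x[:3])                  # constrained key for prefix 101
```

The security question discussed in the talk is how much is lost when reducing full (adaptive) security of this construction to the security of the underlying PRG.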

We present a new reduction that only loses a quasipolynomial factor $q^{\log N}$, where $q$ is the number of adversarial queries. For this we develop a novel proof technique which constructs a distinguisher by interleaving simple guessing steps and hybrid arguments a small number of times.

We then investigate another constrained PRF, due to Boneh and Waters, which allows for constrained keys for the more general class of bit-fixing functions. Their security proof also suffers from a $2^N$ loss. We construct a meta-reduction which shows that any ``simple'' reduction that proves full security from a non-interactive hardness assumption must incur an exponential security loss.

Joint work with M. Konstantinov, K. Pietrzak and V. Rao.

Bertram Poettering, RHUL.

Digital signatures are often used by trusted authorities to make unique bindings between a subject and a digital object; for example, certificate authorities certify a public key belongs to a domain name, and time-stamping authorities certify that a certain piece of information existed at a certain time.

Traditional digital signature schemes however impose no uniqueness conditions, so a trusted authority could be coerced to make multiple certifications for the same subject but different objects. In this talk we propose the notion of a double-authentication-preventing signature, in which a value to be signed is split into two parts: a subject and a message. If a signer ever signs two different messages for the same subject, enough information is revealed to allow anyone to compute valid signatures on behalf of the signer. This double-signature forgeability property discourages signers from misbehaving and gives signers some cryptographic arguments to resist coercion.

Valerio Pastro, CUNY.

We propose an efficient robust secret sharing scheme in a setting where there are multiple adversaries, each having access to a single dishonest player, who do not communicate during the execution of the protocol (but may do so before and/or after the protocol).

Our scheme meets the lower bound for the share size of efficient robust secret sharing schemes, therefore improving on the best known constructions in the standard model by removing a linear dependence on the number of players. For our construction, we introduce a novel procedure that compiles an error-correcting code into a new randomized one, with the following two properties: a single local portion of a codeword leaks no information on the encoded message itself, and any set of portions of a codeword reconstructs the message with error probability exponentially small in the set size.

Joint work with Allison Bishop Lewko.
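
As background, plain (non-robust) Shamir sharing, the building block such robust schemes extend, can be sketched as follows; parameters are toy assumptions, and tolerating *wrong* shares requires the extra machinery described above:

```python
# Plain (non-robust) Shamir sharing over F_p: the secret is the
# constant term of a random degree-(t-1) polynomial, and any t
# shares recover it by Lagrange interpolation at 0.  Toy parameters.
import random

P = 2**61 - 1                              # prime field modulus

def share(secret, t, n, rng):
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret += yi * num * pow(den, -1, P)
    return secret % P

rng = random.Random(1)
shares = share(314159, t=3, n=5, rng=rng)
```

Robust schemes must additionally detect or correct shares handed in incorrectly by dishonest players, which is where the randomized error-correcting code construction comes in.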

Elena Andreeva, KU Leuven.

The current trend in cryptography is to use a single algorithm for both security goals of confidentiality and integrity, namely an authenticated encryption (AE) scheme. The demand for secure and efficient AE schemes is reflected in the recently announced CAESAR competition (similar to the AES and SHA-3 competitions) for the recommendation of a portfolio of AE schemes. We will start our talk with some of the main theoretical aspects of AE. One of the seminal works providing AE formalism is the paper of Bellare and Namprempre from ASIACRYPT 2000. The paper answers the question: How to combine a secure probabilistic encryption scheme with a secure message authentication scheme (MAC) to construct a secure AE scheme? It is also this work that coins the term *generic composition* as opposed to AE designs from scratch, or dedicated designs, referring to the former approach as the combination of existing encryption and authentication building blocks. While the result has been immensely helpful for the design and analysis of later AE schemes, it has become somewhat outdated in view of more recent advancements on AE. Most existing encryption schemes use a random IV (an external source of randomness) and while Bellare and Namprempre prescribe a way to turn a probabilistic encryption scheme and a MAC into a secure authenticated encryption scheme, the result does not necessarily carry over for AE schemes using an encryption scheme with an externally generated random IV.

Furthermore, for practical purposes the random IV can be further replaced by a nonce IV, which need only be a unique, non-repeating value. Only recently, at EUROCRYPT 2014 Rogaway and Namprempre reexamined the generic composition approach for the particular cases where the encryption scheme underlying the AE scheme is random IV- or nonce IV-based. The non-generic AE composition approach was first formalized for nonce IV-based AE schemes by Rogaway at FSE 2004. Furthermore, at EUROCRYPT 2006 Rogaway and Shrimpton introduced the notion of deterministic authenticated encryption (DAE). In DAE, an IV input is optional and can therefore take arbitrary values. A secure DAE differs from a secure nonce IV AE scheme in the fact that the DAE confidentiality is possible only up to message repetitions, namely an adversary can detect repetitions of encryptions of identical messages. Unfortunately, the encryption of DAE schemes is not online.

To resolve this issue, at FSE 2012 Fleischmann et al. introduced the notion of authenticated online encryption. The syntax here is the same as DAE and confidentiality holds only up to repetitions of messages with identical common prefixes. The notion of AE has been further extended and generalized in different ways. Tsang et al. gave a syntax and security definitions of AE for streaming data. Bellare and Keelveedhi considered a stronger security model where data may be key-dependent. Boldyreva et al. reformulated AE requirements and properties to handle ciphertext fragmentation and further enhanced the syntax and security definitions so that the verification oracle is allowed to handle multiple failure events.

After examining the theoretical foundation of AE we will proceed with our recent security result dealing with AE settings where plaintexts are released before the completion of the verification step in decryption (release of unverified plaintext, RUP). We will introduce the notions of plaintext awareness in the symmetric-key setting (PA1 and PA2) and RUP integrity of the ciphertexts. These security notions are then used to make a classification of symmetric-key schemes in the RUP setting. We analyze existing authenticated encryption schemes in this setting, and provide solutions to fix insecure schemes. We will conclude this talk with security results for some popular AE designs and submissions from the CAESAR cryptographic competition.
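
As a concrete toy instance of generic composition, here is an Encrypt-then-MAC sketch built only from standard-library pieces; the SHA-256 counter-mode stream cipher is an illustrative stand-in, not a vetted design, and the decryptor releases plaintext only after verification, in line with the RUP discussion:

```python
# Encrypt-then-MAC generic composition, sketched with stdlib pieces:
# a SHA-256 counter-mode keystream stands in for the IV-based
# encryption scheme and HMAC-SHA256 for the MAC.  The toy stream
# cipher is an assumption of this sketch, not a vetted design.
import hashlib, hmac, os

def keystream(key, iv, n):
    out = b''
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + ctr.to_bytes(8, 'big')).digest()
        ctr += 1
    return out[:n]

def encrypt(ke, km, msg):
    iv = os.urandom(16)
    ct = bytes(m ^ z for m, z in zip(msg, keystream(ke, iv, len(msg))))
    tag = hmac.new(km, iv + ct, hashlib.sha256).digest()
    return iv + ct + tag

def decrypt(ke, km, blob):
    iv, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(km, iv + ct, hashlib.sha256).digest()):
        return None                        # reject before decrypting
    return bytes(c ^ z for c, z in zip(ct, keystream(ke, iv, len(ct))))

ke, km = os.urandom(32), os.urandom(32)
blob = encrypt(ke, km, b'attack at dawn')
```

An implementation that decrypted first and verified afterwards would be exactly the RUP setting the new security notions address.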

Sarah Meiklejohn, UCL.

In this talk, we examine q-type assumptions, which are stronger and require larger parameter sizes than their static counterparts. We show that in certain groups, many classes of q-type assumptions are in fact implied by subgroup hiding (a well-established, static assumption). Our main tool in this endeavor is the dual-system technique, as introduced by Waters in 2009. As a case study, we first show that in composite-order groups, we can prove the security of the Dodis-Yampolskiy PRF based solely on subgroup hiding and allow for a domain of arbitrary size (the original proof only allowed a polynomially-sized domain). We then turn our attention to classes of q-type assumptions and show that they are implied -- when instantiated in appropriate groups -- solely by subgroup hiding. Concretely, our result implies that every construction relying on such assumptions for security (e.g., Boneh-Boyen signatures) can, when instantiated in appropriate composite-order bilinear groups, be proved secure under subgroup hiding instead.
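
The Dodis-Yampolskiy PRF mentioned above, F_k(x) = g^{1/(k+x)}, can be sketched in a toy prime-order subgroup; the talk's setting is composite-order bilinear groups, so this small Schnorr-style group is only an illustration of the algebra:

```python
# The Dodis-Yampolskiy PRF, F_k(x) = g^{1/(k+x)}, in a toy
# prime-order subgroup of Z_p^* with p = 2q+1 a safe prime.
# Parameters are illustrative toy choices.
p, q = 2027, 1013              # safe prime p = 2q + 1 (toy size)
g = 4                          # a square mod p, hence of order q

def dy_prf(k, x):
    """F_k(x) = g^{(k+x)^{-1} mod q} mod p; undefined if k+x = 0 mod q."""
    return pow(g, pow((k + x) % q, -1, q), p)

k = 777                        # secret key in Z_q
y = dy_prf(k, 42)
```

The q-type flavour of the original security proof shows through here: distinguishing F from random relates to a q-strong-Diffie-Hellman-style assumption, which the subgroup-hiding result replaces.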

Graham Cormode, Warwick.

When handling large quantities of data, it is often necessary to outsource the computational effort to a third party: this captures current efforts in cloud computing, but also scenarios within trusted computing systems and inter-organizational data sharing. When the third party is not fully trusted, it is important to give assurance that the computation has been performed correctly. This talk presents some recent results in designing new protocols for verifying computations which are streaming in nature: the verifier (data owner) needs only a single pass over the input, storing a sublinear amount of information, and follows a simple protocol with a prover (service provider) that takes a small number of rounds. A dishonest prover fools the verifier with only vanishingly small probability, while an honest prover's answer is always accepted. Starting from basic aggregations, interactive proof techniques allow a quite general class of computations to be verified, leading to practical implementations.

R. Player, RHUL.

The Learning with Errors (LWE) problem has become a central building block of modern cryptographic constructions. We will discuss hardness results for concrete instances of LWE. In particular, we discuss algorithms proposed in the literature and give the expected resources required to run them. We consider both generic instances of LWE as well as small secret variants. Since for several methods of solving LWE we require a lattice reduction step, we also review lattice reduction algorithms and propose a refined model for estimating their running times. We also give concrete estimates for various families of LWE instances, provide a Sage module for computing these estimates and highlight gaps in the knowledge about algorithms for solving the Learning with Errors problem.
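
For reference, an LWE instance of the kind whose concrete hardness such estimates quantify looks as follows (a toy sketch with illustrative parameters; the error sampler is a crude stand-in for a discrete Gaussian):

```python
import random

n, m, q = 8, 16, 97      # dimension, number of samples, modulus (toy sizes)

def small_error():
    # Crude stand-in for a discrete Gaussian error term.
    return random.choice([-2, -1, -1, 0, 0, 0, 1, 1, 2])

s = [random.randrange(q) for _ in range(n)]                        # secret
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]    # public matrix
e = [small_error() for _ in range(m)]                              # error vector
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
# The attacker is given (A, b) and must recover s. In the "small secret"
# variants, the entries of s are themselves drawn from the error distribution.
```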

K. Paterson, RHUL.

Despite recent high-profile attacks on the RC4 algorithm in TLS, its usage is still running at about 30% of all TLS traffic. This is attributable to the lack of practicality of the existing attacks, the desire to support legacy implementations, and resistance to change. We provide new attacks against RC4 in TLS that are focussed on recovering user passwords, still the pre-eminent means of user authentication on the Web today. Our attacks enhance the statistical techniques used in the existing attacks and exploit specific features of the password setting to produce attacks that are much closer to being practical. We report on extensive simulations that illustrate this. We also report on two "proof of concept" implementations of the attacks for specific application layer protocols, namely BasicAuth and IMAP. Our work validates the truism that attacks only get better with time: we obtain good success rates in recovering user passwords with around 2^{26} encryptions, whereas the previous generation of attacks required 2^{34} encryptions to recover an HTTP session cookie.
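
The statistical starting point of such attacks, single-byte keystream biases, is easy to observe directly. The following toy simulation (illustrative, not the attack from the talk) measures the classic Mantin-Shamir bias: the second keystream byte is 0 with probability roughly 2/256 instead of 1/256.

```python
import random

def rc4_keystream(key, n):
    # Standard RC4: key-scheduling algorithm (KSA)...
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # ...followed by n steps of the pseudo-random generation algorithm (PRGA).
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

trials = 20000
hits = sum(rc4_keystream([random.randrange(256) for _ in range(16)], 2)[1] == 0
           for _ in range(trials))
print(hits / trials)     # noticeably above the uniform 1/256 ~ 0.0039
```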

Joint work with Christina Garman and Thyla van der Merwe.

URL: http://www.isg.rhul.ac.uk/tls/RC4mustdie.html.

C. Cid, RHUL.

In this talk, we investigate the incentives behind investments by competing companies in the discovery of their security vulnerabilities and the sharing of their findings. Specifically, we consider a game between competing firms that utilise a common platform in their systems. The game consists of two stages: firms must decide how much to invest in researching vulnerabilities, and thereafter, how much of their findings to share with competitors. We fully characterise the Perfect Bayesian Equilibria (PBE) of this game, and translate them into realistic insights about firms' strategies. Further, we develop a monetary-free sharing mechanism that encourages both investment and sharing, a feature missing when sharing is arbitrary or opportunistic. This is achieved via a light-handed mediator. Our results give insight into the origins of inefficiency and pave the way towards more efficient sharing of cyber-intelligence among competing entities. The talk will contain a brief and gentle introduction to game-theoretic modelling in information security before the presentation of the main results. This research is joint work with Arman Khouzani and Viet Pham, and was presented at GameSec 2014.

Dan Shepherd, CESG.

SOLILOQUY was a cryptosystem for quantum-safe key encapsulation, developed 'quietly' by CESG in 2007, bearing a striking resemblance to the later-published (though independently developed) Smart-Vercauteren fully homomorphic cryptosystem. Our research led to the development of a series of ideas for using quantum algorithmics to solve the Principal Ideal Problem, to find the units of a high-degree number field, to generalise Shor's algorithm to the continuous-group case, and to develop the notion of a quantum invariant--or 'fingerprint'--for identifying lattices. These developments in turn led us to abandon further development of SOLILOQUY as a cryptosystem for government use. No sooner had we decided that these quantum algorithm ideas ought to be put into the public domain than very similar ideas appeared independently, in a seminal paper by Eisentraeger, Hallgren, Kitaev, and Song. Much debate has since ensued as to the likelihood that lattice-based cryptosystems will be able to stand the test of time. In this talk, I will review a little of the history of SOLILOQUY, describing some of the things we learned along the way about cyclotomic ideals and cyclic lattices. I'll show why it makes sense to regard SOLILOQUY as engendering a very atypical lattice problem, and what goes wrong when the same cryptanalysis is applied to the bicyclic lattice of a Ring-LWE cryptosystem.

This seminar is open to everyone, but it is part of the Heilbronn Quantum Algorithms meeting. If you want to attend more talks than just this one, the organizers ask that you register via the website. Registration is free, and is simply to ensure that enough catering is provided.

Tancrede Lepoint, Cryptoexperts.

Since the candidate construction of a multilinear map by Garg, Gentry and Halevi, this primitive has led to a bounty of exciting cryptographic applications, including indistinguishability obfuscation. However, current instantiations rely on a heuristic security analysis. The zero-testing procedure, at the heart of the current multilinear map candidates, has recently been used to mount powerful attacks; numerous hardness assumptions and constructions from the literature are therefore broken. In this talk, we will review the blueprint construction of current multilinear map candidates, present a survey of recent attacks, some recent fixes, some further attacks on these fixes, and finally the consequences and limitations thereof.

Christian Cachin, IBM.

In the cloud computing model, clients outsource computation and data storage to remote servers. This has led to prominent concerns about the privacy of data and computation placed outside the control of the clients. The integrity of the responses from the remote servers, on the other hand, has been addressed in depth only recently. Violations of correctness are potentially more dangerous, however, since clients rely on the responses and the safety of a service is at stake. Incidental computation errors as well as deliberate and sophisticated manipulations on the server side are nearly impossible to discover with today's technology. Over the last few years, there has been rising interest in technology to verify the results of a remote computation and to check the consistency of responses from a cloud service.

This talk first gives a brief survey of cryptographic approaches to verifying remote computation from a single client's perspective. It then addresses how multiple clients that interact by writing to and reading from a stateful remote service may check the responses from the service for data integrity and consistency. Without synchronization or communication among the clients, the service may mount a replay attack and answer with responses based on outdated state. In this context, recently developed protocols ensure atomic operations to all clients when the service is correct, and so-called fork-linearizable semantics when the service is faulty. Fork-linearizability makes it very easy for the clients to detect violations of integrity and consistency by the service; specifically, it means that all clients which observe each other's operations are consistent, in the sense that their own operations, plus those operations whose effects they see, have occurred consistently.

Manos Panaousis, Brighton.

One of the largest concerns facing organisations today is how to protect themselves from cyber attacks, whose prominence imposes the need for organisations to prioritise their cyber security concerns with respect to their perceived threats. We are investigating: how do we make better decisions when it comes to spending a limited budget to acquire and/or implement security controls? Specifically, we are developing new approaches to cybersecurity investment decision-support based on game theory.

Relevant Papers:

Andriana Gkaniatsou, Edinburgh.

Smart cards are considered among the most secure, trusted and tamper-resistant devices for performing cryptographic operations. The commonly used PKCS#11 standard defines the API for cryptographic devices such as smart cards. Though there has been work on formally verifying the correctness of PKCS#11 implementations at the API level, little or no attention has been paid to the low-level protocols that implement it. We will present REPROVE, the first automated tool that reverse-engineers the low-level communication between a smart card and a reader, deduces the card's functionalities and maps that communication to PKCS#11 functions. REPROVE is independent of implementation practices and does not require access to the card or its API.

Sarah Meiklejohn, UCL.

Current cryptocurrencies, starting with Bitcoin, build a decentralized blockchain-based transaction ledger, maintained through proofs-of-work that also serve to generate a monetary supply. Such decentralization has benefits, such as independence from national political control, but also significant limitations in terms of computational costs and scalability. We introduce RSCoin, a cryptocurrency framework in which central banks maintain complete control over the monetary supply, but rely on a distributed set of authorities, or mintettes, to prevent double-spending. While monetary policy is centralized, RSCoin still provides strong transparency and auditability guarantees. We demonstrate, both theoretically and experimentally, the benefits of a modest degree of centralization, such as the elimination of wasteful hashing and a scalable system for avoiding double-spending attacks. (Joint work with George Danezis.)

Razvan Barbulescu, CNRS, Univ Paris 6 and Univ Paris 7.

The security of pairing-based cryptography relies on the difficulty of two problems: computing discrete logarithms over elliptic curves and in finite fields GF(p^n), where n is a small integer larger than 1. The real-life difficulty of the latter problem was tested in 2006 by a record in a field GF(p^3), and in 2014 and 2015 by new records in GF(p^2), GF(p^3) and GF(p^4). We will present the new methods of polynomial selection which allowed us to obtain these records. We then discuss the difficulty of the DLP in GF(p^6) and GF(p^12) when p has a special form (SNFS), for which two theoretical algorithms were presented recently.

Matteo Campanelli, CUNY.

With the rise of cloud computing, more and more businesses and individuals lease computing resources from a service rather than maintain their own computing infrastructure (e.g. Dropbox, Amazon AWS). How do we check that the results of outsourced computation on these remotely stored data are correct, and how can we perform such tests very efficiently? Could we push this further and perform (almost) no check at all and still obtain a trusted result from a computation? For example, can we exploit the economic incentives of the computation providers and make sure they are economically damaged if they provide a lesser-quality (i.e. incorrect) computation? The answer turns out to be yes. Within this framework, I will give an overview of a basic tool, Rational Proofs, the economic equivalent of the more traditional Interactive Proofs, and some of the challenges in achieving them for general-purpose computation.

Gergei Bana, ENS.

The computationally complete symbolic attacker is a technique that was created as a way to find all possible attacks a probabilistic polynomial-time adversary can carry out on a protocol, using symbolic methods. In this talk, we first briefly review the idea of verifying complexity-theoretic security guarantees with symbolic techniques, and mention attempts by various research groups to achieve this goal. We then present the elements of our computationally complete symbolic attacker based on indistinguishability, and show how convenient it is to formalize in this framework standard complexity-theoretic hardness assumptions and cryptographic security notions such as the DDH assumption, CPA and CCA security, or unforgeability. Finally, we indicate what proofs we have carried out so far for anonymity, real-or-random secrecy, agreement and authentication, and present some attacks we detected with this technique.

Carla Rafols Salvador, RU Bochum.

I will review the algebraic framework, introduced in joint work with Escala, Kiltz, Herold and Villar at Crypto'13, for analysing Diffie-Hellman-like decisional assumptions, which allows one to argue about security and applications by considering only algebraic properties.

Our D_{ell,k}-MDDH assumption states that it is hard to decide whether a vector in G^{ell} is linearly dependent on the columns of some matrix in G^{ell x k} sampled according to a distribution D_{ell,k}. It covers known assumptions such as DDH, 2-Lin (the linear assumption), and k-Lin (the k-linear assumption). I will argue the benefits of taking this algebraic viewpoint on decisional assumptions.
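
As a concrete illustration (a toy example with insecure parameters, not from the talk), the smallest case ell = 2, k = 1 with matrix A = (1, a)^T is exactly DDH: given [A] and a vector [u] in the exponent, decide whether u lies in the column span of A.

```python
import random

q = 1019                  # prime order of the subgroup (toy size)
p = 2 * q + 1             # 2039 is prime
g = 9                     # generator of the order-q subgroup of Z_p*

def mddh_instance(real):
    a = random.randrange(1, q)        # defines the matrix A = (1, a)^T
    r = random.randrange(1, q)
    A = (g, pow(g, a, p))             # [A], given in the exponent
    if real:
        u = (pow(g, r, p), pow(g, a * r % q, p))            # [u] = [r * A]
    else:
        u = (pow(g, r, p), pow(g, random.randrange(q), p))  # random vector
    return A, u, a                    # a is returned only so we can check below

A, u, a = mddh_instance(True)
assert pow(u[0], a, p) == u[1]        # real instances satisfy the span relation
```
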
I will then present recent results with Morillo and Villar (eprint 2015), in which we describe a natural computational analogue of the D_{ell,k}-MDDH assumption, the Kernel MDDH assumption: it is hard to find a vector in the kernel of a matrix A (given in the exponent) when A is sampled from D_{ell,k}. This assumption generalises and describes in a unified way several assumptions that have previously appeared in the cryptographic literature (such as the Simultaneous (Double) Pairing assumption) and that have been relevant in the context of structure-preserving cryptography.

Finally, I will illustrate the usefulness of such assumptions by discussing how they allow one to construct constant-size quasi-adaptive proofs of membership in linear spaces (following the work of Kiltz and Wee, Eurocrypt 2015).

Carsten Baum, U. Aarhus.

We propose a new zero-knowledge protocol applicable to additively homomorphic functions that map integer vectors to an Abelian group. The protocol demonstrates knowledge of a short preimage and achieves amortized efficiency comparable to the approach of Cramer and Damgaard from Crypto 2010, but gives a much tighter bound on what we can extract from a dishonest prover. Towards achieving this result, we develop an analysis of bins-and-balls games that might be of independent interest. We also provide a general analysis of rewinding of a cut-and-choose protocol, as well as a method to use Lyubashevsky's rejection sampling technique efficiently in an interactive protocol when many proofs are given simultaneously. Our new protocol yields improved proofs of plaintext knowledge for (Ring-)LWE-based cryptosystems, where such general techniques were not known before. Moreover, these proofs can be extended to preimages of homomorphic hash functions as well.
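
To give a feel for the rejection sampling step (a highly simplified, one-dimensional uniform variant; the protocol in the talk is far more involved): we want to reveal z = y + v without z leaking the secret-dependent value v, where |v| <= T.

```python
import random

def masked_reveal(v, B, T):
    # Sample the mask y uniformly from [-B, B] and restart whenever
    # |y + v| > B - T. Every z in [-(B-T), B-T] then has the same
    # probability regardless of v, so the output leaks nothing about v.
    assert abs(v) <= T
    while True:
        y = random.randint(-B, B)
        z = y + v
        if abs(z) <= B - T:
            return z

samples = [masked_reveal(5, 100, 5) for _ in range(1000)]
assert all(abs(z) <= 95 for z in samples)
```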

E. Soria-Vazquez, U. Bristol.

In recent years, efficiency improvements in secure multiparty computation (with more than two parties) have been primarily focused on the secret-sharing paradigm, with impressive results such as the SPDZ protocol (Damgard et al., CRYPTO 2012). In contrast, the garbled-circuits approach, which leads to constant-round protocols (and is thus more suitable for slow networks), has received much less attention. In this talk, I will introduce our recent protocol improvements that make the constant-round BMR protocol secure against malicious adversaries and concretely efficient using Somewhat Homomorphic Encryption.

Husen Wang, U. Luxembourg.

We introduce new methods to evaluate integer polynomials with GSW FHE, which have much slower noise growth and a per-integer-multiplication cost of O((log k/k)^{4.7454}/n) times that of the original GSW, where k is the input plaintext width and n is the LWE dimension parameter. Essentially, we reduce the integer multiplication noise by performing the evaluation between two kinds of ciphertexts, one in Z_q and the other in F_2^{[log q]}. The conversion between the two ciphertexts can be achieved by integer bootstrapping. We also propose to solve the ciphertext expansion problem by symmetric encryption with stream ciphers.

Dragos Rotaru, U. Bristol.

We discuss the design of symmetric primitives, in particular Pseudo-Random Functions (PRFs), which are suitable for use in a secret-sharing-based MPC system. We consider three different PRFs: the Naor-Reingold PRF, a PRF based on the Legendre symbol, and a specialized block cipher design called MiMC. We present protocols for implementing these PRFs within a secret-sharing-based MPC system, and discuss possible applications. We then compare the performance of our protocols. Depending on the application, different PRFs may offer different optimizations and advantages over the classic AES benchmark; thus, we cannot conclude that there is one optimal PRF to be used in all situations.
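
Of the three, the Legendre-symbol PRF is the simplest to state. A toy sketch (illustrative parameters, not the MPC protocol from the talk) maps the Legendre symbol of (k + x) mod p to a bit; its appeal for MPC is that k + x is a linear function of the shared key and the symbol is multiplicative.

```python
p = 1000003              # a prime (toy size)

def legendre_bit(k, x):
    # Euler's criterion: (k+x)^((p-1)/2) mod p is 1 (residue), p-1, or 0.
    ls = pow((k + x) % p, (p - 1) // 2, p)
    return 1 if ls == 1 else 0

key = 424242
bits = [legendre_bit(key, x) for x in range(16)]   # a stretch of PRF output
```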

Irene Giacomelli, U. Aarhus.

In this work we describe the ZKBoo protocol, a new proposal for practically efficient zero-knowledge arguments especially tailored for Boolean circuits, and report on a proof-of-concept implementation. As a highlight, we can generate (resp. verify) a non-interactive proof for the SHA-1 circuit in approximately 13ms (resp. 5ms), with a proof size of 444KB. Our techniques are based on the "MPC-in-the-head" approach to zero-knowledge of Ishai et al. (IKOS), which has been successfully used to achieve significant asymptotic improvements. Our contributions include:

- A thorough analysis of the different variants of IKOS, which highlights their pros and cons for practically relevant soundness parameters;
- A generalization and simplification of their approach, which leads to faster sigma-protocols for statements of the form "I know x such that y=f(x)" (where f is a circuit and y a public value);
- A case study, where we provide explicit protocols, implementations and benchmarking of zero-knowledge protocols for the SHA-1 and SHA-256 circuits.

Luke Garratt, U. Oxford.

Signal is a new messaging protocol with end-to-end encryption, used by over a billion people through its implementations in WhatsApp, Facebook Messenger and Google Allo. Despite this massive uptake, Signal has seen little security analysis; challenges include the complex, stateful design involving over ten different types of key, the lack of documentation or specification, and the novelty of some of its claimed properties.

We performed a formal security analysis of Signal, proving it secure in a game-based model. This required us first to figure out what Signal *is*, which meant working backwards to a specification from the open-source implementation. We also had to work out which security properties we think it was intended to achieve, which include a form of post-compromise security (PCS). PCS is a recently-formalised property encoding the inability of an attacker to impersonate someone even *after* stealing their key. There is much more to be done, however, and I'll talk about some of the interesting research questions that are still left open.

Marco Martinoli, U. Bristol.

Glitches represent a great danger for hardware implementations of cryptographic schemes. Their intrinsically random nature makes them difficult to tackle, and their occurrence threatens side-channel protections. Although countermeasures aiming at structurally solving the problem already exist, they usually require some effort to apply or introduce non-negligible overhead in the design. Our work addresses the gap between such countermeasures and naïve implementations of schemes that are vulnerable in the presence of glitches. Our contribution is twofold: (1) we expand the mathematical framework proposed by Brzozowski and Ésik (FMSD 2003) by meaningfully adding the notion of information leakage, (2) thanks to which we define a formal methodology for the analysis of vulnerabilities in combinatorial circuits when glitches are taken into account.

Cecile Pierrot, UPMC.

Trustworthy generation of public random numbers is necessary for the security of many cryptographic applications. It was suggested to use the inherent unpredictability of blockchains as a source of public randomness. Entropy from the Bitcoin blockchain in particular has been used in lotteries and has been suggested for a number of other applications ranging from smart contracts to election auditing. In this talk, we analyse this idea and show how an adversary could manipulate these random numbers, even with limited computational power and financial budget.
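
A deliberately crude Monte Carlo model (illustrative, not the analysis from the talk) already shows the effect: suppose a lottery takes a single bit, say the parity of the next block hash, as its output. A miner holding a fraction alpha of the hash power can withhold any block they find whose bit is unfavorable, forfeiting the block reward but biasing the beacon.

```python
import random

def beacon_bias(alpha, trials=100000):
    wins = 0
    for _ in range(trials):
        attacker_found = random.random() < alpha   # attacker mines the next block
        bit = random.randrange(2)                  # the "random" beacon bit
        if attacker_found and bit == 0:
            # Unfavorable bit: withhold the block; an honest block decides instead.
            bit = random.randrange(2)
        wins += bit
    return wins / trials

# Expected rate is 1/2 + alpha/4, e.g. ~0.5625 for alpha = 0.25 instead of 0.5.
print(beacon_bias(0.25))
```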

A short introduction to public random number generation will be provided, as well as a quick refresher on Bitcoin, so that you need almost no background to follow this talk.

It is a joint work with Benjamin Wesolowski, EPFL.

Awais Rashid, U. Lancaster.

Donald Rumsfeld's three permutations of knowns and unknowns are often quoted in the context of cyber security. We have the Known Knowns: the well-understood attacks such as SQL injection; the Known Unknowns, e.g., the theoretical and practical limitations of certain protocols; and the Unknown Unknowns, i.e. zero days. In this talk, I will talk about a fourth variation, that of Unknown Knowns. These represent the tacit knowledge often implicit within or across a variety of security incidents. This knowledge is “Unknown” in that it is not immediately captured in widely recommended models such as the Top 20 Critical Security Controls. Yet it is “Known” in that it is tacit in existing security breaches. I will discuss how an inter-disciplinary methodology enables discovery of such Unknown Knowns in order to identify gaps in existing security models and plug such gaps. I will use the Top 20 Critical Security Controls as a case study to demonstrate the approach.

Antoine Joux, UPMC.

In this talk, we will review some recent advances in the combinatorial search for small solutions of crypto-related problems such as subset sum, decoding of linear codes, and short vector problems. These advances rely on some counter-intuitive approaches and also require some heuristic assumptions. However, they achieve surprising improvements over the previous (long-standing) best known methods.

Chris Carr, NTNU.

I will start with an overview of the Bitcoin system, from the original designs to where it is today, including the work and motivation behind some of the more interesting competitors, such as Ethereum, Stellar, Zcash, Ripple and Hyperledger Fabric. We will look at competing consensus mechanisms, specifically alternative proposals such as GHOST, as well as other (potential) alternatives to the 'blockchain' model. We will also examine some of the various challenges facing decentralised cryptocurrencies, such as the waste of computational resources, and potential solutions to that issue. Additionally, we will examine various schemes that aim to leverage this ongoing work to gain other benefits, such as creating a form of public key infrastructure. I will conclude by examining some of the open questions in this area, and talking about my current (ongoing) research.

Carsten Baum, Bar-Ilan University.

We present an additively homomorphic commitment scheme whose hardness is based on the Ring-SIS problem. Our construction is statistically hiding as well as computationally binding, and allows one to commit to a vector of ring elements at once. We show how to instantiate efficient zero-knowledge protocols that can be used to prove a number of relations among these commitments, and apply these in the context of lattice-based threshold cryptosystems: we give a generic transformation that can be used with certain (Ring-)LWE-based encryption schemes to make their algorithms actively secure. We show how this transformation can be used to implement distributed decryption with malicious security, as well as maliciously secure threshold key generation, in an efficient way.