
A (p, 0, t)-forger is called a passive forger, and a forger that is not passive is called active when needed. It is important to realize that the forgers thus defined are successful without regard to the quality or meaningfulness of the forged message. To emphasize this limiting aspect, such forgers are called existential forgers. A selective forger, by contrast, lacks this limitation and is formally defined as follows.

Definition II.5 (Selective Forger). Let U be a probabilistic algorithm that takes no input except randomness and outputs a message. A selective forger is a forger F with the following differences. The input, in addition to a public key, also includes a message. The selective forgery game for a selective forger F of signature scheme (K, G, V), with message selection oracle U, is the forgery game with the following differences. In Round 0, U is called to generate a message m_0, which is given as input to F. The forger wins the game in Round i only if m_0 = m_{i+1} is satisfied. Such a selective forger F is a (p, Q, t, U)-forger of signature scheme (K, G, V).

A selective forger can forge any message from a certain class of messages. A selective forger is generally much more harmful than an existential forger, depending on the distribution of messages given by U. In particular, if U generates meaningful messages, then the selective forger can forge any meaningful message it wants.

Generally, with p and t the same, a passive forger is more harmful than an active one, and a selective forger is more harmful than an existential one. Passiveness and selectiveness are qualitative attributes of forgers, and their importance depends on the usage of the signatures.

Definition II.6 (Signature Security). A signature scheme is (p, Q, t)-secure

against existential forgery if there exists no (p, Q, t)-forger.

A signature scheme is (p, Q, t, U )-secure against selective forgery if there

exists no (p, Q, t, U )-selective-forger.

II.2.2. Necessary Conditions. The conditions in Table II.1 on the components of ECDSA can be shown to be necessary for the security of ECDSA because otherwise forgery would be possible. These conditions are defined below together with some of the reductions to attacks that prove their necessity.

II.2. DEFINITIONS AND CONDITIONS 25

Table II.1. Necessary Conditions with Associated Forgeries

  Component               Condition                    Forgery
  ----------------------  ---------------------------  -------------------
  Group (and              Intractable Discrete         Passive Selective
  Conversion Function)    Logarithm
                          Intractable Semi-Logarithm   Passive Selective
  Conversion Function     Almost Bijective             Passive Selective
  Random Number           Arithmetically Unbiased      Active Selective
  Generator
  Range Checking          Check r ≠ 0                  Passive Selective
  Hash Function           Rarely Zero                  Passive Selective
                          Zero Resistant               Passive Existential
                          1st-Preimage Resistant       Passive Existential
                          2nd-Preimage Resistant       Active Selective
                          Collision Resistant          Active Existential

Intractable Semi-Logarithm: For a conversion function f and group G, a semi-logarithm of a group element P to the base G is a pair of integers (t, u) such that

    t = f([u^{-1}](G + [t]P)).

Finding semi-logarithms needs to be intractable or else forgeries can be found by setting P = [H(m)^{-1}]Y, where Y is the public key, for then (t, H(m)u) is a forgery of message m. The resulting forgery is both passive and selective, which is the most severe type. Therefore, the intractability of the semi-logarithm problem is a necessary condition for the security of ECDSA.
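The reduction above can be exercised end to end in a toy group. The sketch below is illustrative only: it uses the order-11 subgroup of Z_23^*, written multiplicatively, in place of the elliptic curve group, with conversion function f(P) = P mod q; all parameter values are assumptions, and the semi-logarithm of P = [H(m)^{-1}]Y is found by exhaustive search, which is feasible only because the group is tiny.

```python
p, q, G = 23, 11, 2          # toy group: 2 has order 11 modulo 23
f = lambda P: P % q          # toy conversion function

def verify(Y, e, r, s):
    # ECDSA verification r == f([s^{-1}e]G + [s^{-1}r]Y), written multiplicatively
    w = pow(s, -1, q)
    return r == f(pow(G, w * e % q, p) * pow(Y, w * r % q, p) % p)

x = 9; Y = pow(G, x, p)      # victim key pair (the forger never uses x)
e = 5                        # e = H(m), hash of the target message

# Set P = [H(m)^{-1}]Y and find a semi-logarithm (t, u) of P by brute force.
P = pow(Y, pow(e, -1, q), p)
t, u = next((t, u) for t in range(1, q) for u in range(1, q)
            if t == f(pow(G * pow(P, t, p) % p, pow(u, -1, q), p)))

r, s = t, e * u % q          # the forgery (t, H(m)u) on message m
assert verify(Y, e, r, s)
```

The brute-force loop stands in for whatever algorithm solves the semi-logarithm problem; the point of the sketch is only that any such solution converts mechanically into a valid signature.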

A semi-logarithm may be regarded as an ECDSA signature of some message whose hash is one (without actually supplying the message). Thus, at first glance, the semi-logarithm might not seem to be significantly different from forging an ECDSA signature. But semi-logarithms do not depend on a hash function, unlike ECDSA. Therefore, the semi-logarithm problem is formally different from the problem of forging an ECDSA signature. One reason for considering the semi-logarithm problem is to isolate the roles of the hash function and the group in analyzing the security of ECDSA.

Intractable Discrete Logarithm: For a group G, the (discrete) logarithm of a group element P to the base G is the integer x such that P = [x]G. Finding the discrete logarithm of P allows one to find its semi-logarithm, via

    (t, u) = (f([k]G), k^{-1}(1 + tx)),

therefore allowing the forging of ECDSA signatures. The forger is both passive and selective. Indeed, this forger recovers the elliptic curve private key, which might potentially result in yet greater damage than mere forgery, if, say, the key is used for other purposes.
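A minimal sketch of the formula above, in the same style of toy multiplicative group (all parameter values are illustrative assumptions): knowing the discrete logarithm x of P immediately yields a pair (t, u) satisfying the semi-logarithm equation.

```python
p, q, G = 23, 11, 2          # toy order-11 subgroup of Z_23^*
f = lambda P: P % q          # toy conversion function

x = 7                        # discrete logarithm of P to the base G
P = pow(G, x, p)

# Semi-logarithm from the discrete logarithm: (t, u) = (f([k]G), k^{-1}(1 + t x))
k = 3
t = f(pow(G, k, p))
u = pow(k, -1, q) * (1 + t * x) % q

# Check the defining equation t = f([u^{-1}](G + [t]P)),
# which multiplicatively reads f((G * P^t)^(u^{-1} mod q) mod p).
lhs = f(pow(G * pow(P, t, p) % p, pow(u, -1, q), p))
assert lhs == t
```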

26 II. ON THE PROVABLE SECURITY OF ECDSA

Almost-Bijective Conversion Function: The conversion function f is α-clustered if some element t* of its range has a large preimage of size at least α times the domain size. If f is not α-clustered, then it is said to be almost bijective of strength 1/α. An α-clustered conversion function means that random (t*, u) are semi-logarithms with probability at least α. Thus, an average of about 1/α tries are needed to obtain a semi-logarithm.

Unguessable and Arithmetically Unbiased Private Key Generation: Clearly, if the generator for the static private key x is guessable, in the sense that an adversary can guess its values fairly easily, then passive selective forgery is possible. Guessability, sometimes called min-entropy, is measured by the maximum probability of any value of x. If the ephemeral private key k is guessable, then active selective forgery is possible, since the private key x is determined from k and a signature (r, s) by the formula x = r^{-1}(ks − H(m)) (mod q).

Furthermore, a series of attacks has been published showing that if the random number generator used for k exhibits certain types of bias, the private key x can be obtained. If k ever repeats for different messages m and m′, then the private key may be solved from the two respective signatures (r, s) and (r′, s′) by x = (se′ − s′e)/(s′r − sr′) (mod q), where e = H(m) and e′ = H(m′). Bellare, Goldwasser and Micciancio [21] showed that if a linear congruential random number generator were used for k, then the private key could also be found; Smart and Howgrave-Graham [174] and Nguyen and Shparlinski [261, 262] both showed that if bits of k are known, due to partial key exposure, then x can be found using lattice theory; and Bleichenbacher [31] showed that if k is biased towards a certain interval, then x can be recovered with a larger but feasible amount of work. Such biases result in active selective forgery, because the forger uses a signing oracle to find the private key and can then sign any message it wants.
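The two key-recovery formulas above are easy to check numerically. The sketch below (a toy DSA-style group standing in for the elliptic curve group; every parameter value is an illustrative assumption) recovers the private key x both from a guessed ephemeral key k and from two signatures sharing a repeated k.

```python
p, q, G = 23, 11, 2          # toy order-11 subgroup of Z_23^*
f = lambda P: P % q

def sign(x, e, k):
    r = f(pow(G, k, p))
    s = pow(k, -1, q) * (e + r * x) % q
    return r, s

x = 7                        # victim's static private key
k = 5                        # ephemeral key, incorrectly reused below
e1, e2 = 4, 8                # hashes of two different messages
r1, s1 = sign(x, e1, k)
r2, s2 = sign(x, e2, k)      # r2 == r1 because k repeats

# Guessed k: x = r^{-1}(k s - H(m)) mod q
assert pow(r1, -1, q) * (k * s1 - e1) % q == x

# Repeated k: x = (s e' - s' e) / (s' r - s r') mod q
num = (s1 * e2 - s2 * e1) % q
den = (s2 * r1 - s1 * r2) % q
recovered = num * pow(den, -1, q) % q
assert recovered == x
```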

Properly Implemented Range Checking: If an implementation does not check that r ≠ 0, then the following forgery is possible. The forger needs to select the EC domain parameters in such a way that G = [t]Z, where Z is a group element satisfying f(Z) = 0 and t ∈ Z. For the ECDSA conversion function, such points Z, if they exist, can be found as follows. Let x be an integer of the form qu for some integer u, and try to solve for the appropriate y such that (x, y) lies on the curve (here x is not to be confused with the private key). Repeat until a point is found or until all legal values of x are exhausted. Most of the NIST recommended curves have such points Z. The forged signature is (0, t^{-1}H(m)).
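The sketch below walks through this forgery in a toy group where the conversion function f(P) = P mod q does attain zero on the subgroup. It writes the relation between the generator and the zero point as Z = [t]G (equivalently G = [t^{-1}]Z), so that the forged signature (0, t^{-1}H(m)) passes the unchecked verification directly; all parameter values are illustrative assumptions.

```python
# Toy group where f hits zero: the order-5 subgroup of Z_11^*, G = 3.
p, q, G = 11, 5, 3
f = lambda P: P % q

Z = 5                        # group element with f(Z) = 5 mod 5 = 0
t = 3                        # Z = [t]G: 3^3 = 27 = 5 (mod 11)
assert pow(G, t, p) == Z and f(Z) == 0

def verify_no_check(Y, e, r, s):
    # Broken verifier: omits the r != 0 range check
    w = pow(s, -1, q)
    return r == f(pow(G, w * e % q, p) * pow(Y, w * r % q, p) % p)

x = 2; Y = pow(G, x, p)      # any victim key; the forger never uses x
e = 4                        # hash of any message the forger wants
r, s = 0, pow(t, -1, q) * e % q
assert verify_no_check(Y, e, r, s)
```

With r = 0 the public key Y drops out of the verification equation entirely, which is why the check on r is essential.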

This forgery is the severest kind: passive selective. Two limitations mitigate its severity, however. First, an implementation error is needed. Hence, non-repudiation is not totally defeated, because a trusted third party can use a correct implementation to resolve the signature validity. Accordingly, the owner of the key Y should never be held liable for such signatures. The second limitation is that the forgery is a domain parameter attack. Usually, a trusted third-party authority generates and distributes domain parameters, including G. The attack presumes a corrupt authority. Note that a verifiably random generation of G, which the revision of [ANSI X9.62] will allow, prevents this attack without relying on the implementer to check that r ≠ 0.

Rarely Zero Hash: If the effective hash function, which is the raw hash truncated and reduced modulo q, has probability p of equalling 0, then passive selective forgery is possible, as follows. The forger chooses signature (r, s) = (f([t]Y), t^{-1}r), for some t ∈ Z. If the selected message is a zero of the hash, which happens with probability p, then the forged signature is valid because

    f([s^{-1}]([H(m)]G + [r]Y)) = f([tr^{-1}]([0]G + [r]Y)) = f([t]Y) = r.
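In the same style of toy group (all parameter values are illustrative assumptions), the forged pair (f([t]Y), t^{-1}r) verifies whenever the effective hash of the message is zero:

```python
p, q, G = 23, 11, 2          # toy order-11 subgroup of Z_23^*
f = lambda P: P % q

def verify(Y, e, r, s):
    w = pow(s, -1, q)
    return r == f(pow(G, w * e % q, p) * pow(Y, w * r % q, p) % p)

x = 7; Y = pow(G, x, p)      # victim key; the forger only uses Y
t = 4                        # any t
r = f(pow(Y, t, p))          # r = f([t]Y)
s = pow(t, -1, q) * r % q    # s = t^{-1} r

# Valid precisely because the (effective) hash of the message is zero:
assert verify(Y, 0, r, s)
```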

This and other conditions on the hash function refer to the effective hash function. This qualification is important for the security analysis because the reduction modulo q might cause the condition to fail if q was chosen by an adversary. If the adversary chose the elliptic curve domain parameters, then it is possible that q was chosen as the output, or the difference between two outputs, of the unreduced hash function, which would permit the adversary to find a zero or a collision in the effective (reduced) hash function.

Notice that clustering at values other than zero does not necessarily lead to a passive existential forgery. It can lead to other kinds of attacks, as outlined below, because clustering at certain values can lead to weakened second-preimage resistance of the hash function.

Zero-Resistant Hash: A zero finder of a hash function is a probabilistic algorithm that finds a message m such that H(m) = 0. A hash function is zero-resistant if no zero finder exists. A passive existential forger can be constructed from a zero finder in a similar manner to above. The forger chooses signature (r, s) = (f([t]Y), t^{-1}r) and, using the zero finder, finds a message m such that H(m) = 0. Then (r, s) is a valid signature on m. Note that the forger is only existential because the forger has no control over the m found by the zero finder.

A zero-resistant hash function is clearly rarely zero. The converse is false: a rarely zero hash function can fail to be zero-resistant. Note that this strict separation of the properties is also reflected in the types of forgery they relate to, so it is important to consider both properties in a security analysis.

A special case of this attack was first described by Vaudenay [331] as a domain parameter attack on DSA, where the zero is found by choosing q, for a given m, such that H(m) ≡ 0 (mod q).


First-Preimage Resistant (One-Way) Hash: An inverter of a hash function is a probabilistic algorithm that, given a random hash value e, finds a message m such that H(m) = e. An inverter can be used to build a passive existential forger using a technique similar to that for a zero finder. The forged signature is (r, s) = (f([g]G + [y]Y), ry^{-1}), and the forged message m is a preimage of e = gs. The forger is existential, not selective, because the forger has no control over the m found by the inverter.
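A sketch of this reduction, with a trivially invertible toy "hash" standing in for a real hash function equipped with an inverter (the group, the hash, and all parameter values are illustrative assumptions):

```python
p, q, G = 23, 11, 2          # toy order-11 subgroup of Z_23^*
f = lambda P: P % q

def verify(Y, e, r, s):
    w = pow(s, -1, q)
    return r == f(pow(G, w * e % q, p) * pow(Y, w * r % q, p) % p)

# Toy invertible "hash", standing in for a hash with an inverter.
H = lambda m: m % q
invert = lambda e: e         # returns a preimage of e under H

x = 9; Y = pow(G, x, p)      # victim key; the forger only uses Y
g, y = 3, 4                  # forger's random choices
r = f(pow(G, g, p) * pow(Y, y, p) % p)   # r = f([g]G + [y]Y)
s = r * pow(y, -1, q) % q                # s = r y^{-1}
e = g * s % q                            # hash value handed to the inverter
m = invert(e)                # forged message, outside the forger's control
assert H(m) == e and verify(Y, H(m), r, s)
```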

If the hash function does not have an inverter, it is preimage-resistant. Such a hash is also known as a one-way hash function. We have just shown that for the signature scheme to be secure, the effective hash must be one-way. Preimage resistance and zero-resistance are essentially independent properties of hashes. In particular, both properties are necessary to resist passive existential forgery.

Second-Preimage Resistant Hash: A second-preimage finder of a hash function is a probabilistic algorithm that, given a random message m from a distribution U, finds a second message m′ such that H(m′) = H(m). A second-preimage finder can be used to build an active selective forger. The forger obtains a signature (r, s) of m′ from the signing oracle and then outputs (r, s) as a forgery of m. The forger is selective because it can forge a challenge message m, and it is active because it requires access to a signing oracle.

Note that a second-preimage finder can be constructed from a hash-inverter, provided that the distribution U has an image distribution H(U) that covers most of the range of H with sufficiently high probability. This is implied by results noted by Stinson [321]. Therefore, under these conditions, a hash-inverter also implies the existence of an active selective forger.

Collision-Resistant Hash: A collision finder of a hash function is a probabilistic algorithm that finds two messages m and m′ such that H(m′) = H(m). A collision finder results in an active existential forger as follows. The forger obtains a signature (r, s) of m from the signing oracle and then outputs (r, s) as a forgery of m′, where m and m′ have been obtained from the collision finder. This forger is existential because it has no control over the messages m and m′ found by the collision finder, and it is active because it queries a signing oracle.

Note that a collision finder can be constructed from a second-preimage finder, provided that the distribution U is easy to sample. Also, generally, a collision finder can be constructed from a zero finder, unless the zero finder always outputs one particular zero of the hash function.
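A sketch of the collision-to-forgery step, using a toy hash with obvious collisions (the group, the hash, and all parameter values are illustrative assumptions):

```python
p, q, G = 23, 11, 2          # toy order-11 subgroup of Z_23^*
f = lambda P: P % q
H = lambda m: m % q          # toy hash with trivial collisions

def sign(x, e, k):
    r = f(pow(G, k, p))
    s = pow(k, -1, q) * (e + r * x) % q
    return r, s

def verify(Y, e, r, s):
    w = pow(s, -1, q)
    return r == f(pow(G, w * e % q, p) * pow(Y, w * r % q, p) % p)

x = 7; Y = pow(G, x, p)
m1, m2 = 3, 3 + q             # collision: H(m1) == H(m2) but m1 != m2
r, s = sign(x, H(m1), k=4)    # signing-oracle query on m1
assert verify(Y, H(m2), r, s) # the same signature forges m2
```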

II.2.3. Extra Conditions and Models for Sufficiency. The conditions and models discussed below are used in the hypotheses of various provable security results for ECDSA. These conditions and models are not known to be necessary for the security of ECDSA. As such, they represent the other end of the gap in what is known about the security of ECDSA. Unlike for the necessary conditions discussed above, it is plausible to argue that these extra conditions and models are so strong that the security results have little value. Indeed, the ideal would be to not use any conditions or models other than the known necessary conditions. For each condition or model below, one could try to prove the condition necessary, or to find a better proof that relies on strictly weaker conditions or models.

Almost-Invertible Conversion Function: A conversion function f is almost-invertible if it has an almost inverse g, which is an efficient probabilistic algorithm that behaves like an inverse function to f in the following sense. The algorithm g takes input z ∈_R Z/qZ and generates output P ∈ G ∪ {Invalid} such that:

1. One has P ≠ Invalid with probability at least 1/10, assessed over the random choices made for z and g.
2. If P ≠ Invalid, then f(P) = z.
3. If independently random inputs z ∈_R Z/qZ are repeatedly given to g until the output P ≠ Invalid, the resulting probability distribution of such P is indistinguishable from the distribution of random P ∈_R G.

In other words, g will successfully find a preimage of its input at least one time in ten, and, furthermore, the resulting preimage has no (computationally) detectable bias.

Clearly almost-invertibility implies almost-bijectivity. In particular, it is

a strict strengthening of a necessary condition. When used as part of the

hypothesis of a provable security result, it implies this particular necessary

condition, almost-bijectivity.
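For the toy conversion function f(P) = P mod q on the order-q subgroup of Z_p^* (an illustrative stand-in, not the actual ECDSA conversion function), an almost inverse is easy to write down: the only candidate preimages of z are z and z + q, filtered by a subgroup-membership test.

```python
# Almost inverse of f(P) = P mod q on the order-q subgroup
# (the quadratic residues) of Z_p^*, with p = 2q + 1.
p, q = 23, 11

def g(z):
    # Candidate preimages of z under P -> P mod q are z and z + q;
    # keep those lying in the order-q subgroup.
    valid = [P for P in (z, z + q) if 0 < P < p and pow(P, q, p) == 1]
    return valid[0] if valid else "Invalid"   # (could also pick at random)

hits = [z for z in range(q) if g(z) != "Invalid"]
assert all(g(z) % q == z for z in hits)       # f(g(z)) == z whenever valid
assert len(hits) / q >= 1 / 10                # succeeds far more than 1 in 10
```

Here g succeeds on 9 of the 11 inputs, comfortably above the 1/10 bound required by the definition; the actual ECDSA conversion function admits a similar candidate-and-test almost inverse.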

Adaptive Semi-Logarithm Resistance: For a conversion function f, group G, and e ∈ Z/qZ with e ≠ 1, an e-shifted semi-logarithm of a group element P to the base G is a pair of integers (t, u) such that t = f([u^{-1}]([e]G + [t]P)). The adaptive semi-logarithm problem is to find a semi-logarithm of a point P to the base G given access to an oracle for e-shifted semi-logarithms. Adaptive semi-logarithm resistance means that the adaptive semi-logarithm problem is intractable.

The adaptive semi-logarithm problem resembles an active forgery of ECDSA, except that no hash functions are involved. The necessity of adaptive semi-logarithm resistance has not yet been established. Clearly, adaptive semi-logarithm resistance is a strengthening of semi-logarithm resistance, a known necessary condition.

Pseudo-Random Random Number Generator: The security proofs generally assume that the static key x and the ephemeral private keys k, if any, are generated uniformly and independently at random from the private key space Z/qZ*.

In practice, this means that the private keys can be generated with a pseudo-random number generator, whose output is indistinguishable from private keys generated truly at random as above. Indeed, if the schemes were broken when instantiated with such a pseudo-random number generator, then the pseudo-random number generator could be distinguished from a true random number generator as follows. Run the forgery attack on ECDSA instantiated with the challenge number generator. If the forgery attack succeeds, then the challenge is likely to come from a pseudo-random generator; if the forgery attack fails, then the challenge number generator is likely to be truly random.

Obviously, when considering passive forgery, the generator for ephemeral

keys is never invoked, so there is no condition on how k is generated. The

generation of x is of course still vital.

If the signatures are generated in a deterministic mode, meaning that k

is a secret deterministic function of the message being signed, then pseudo-

randomness only applies on a per-message basis, not a per-signature basis.

Some of the proofs really only apply in this deterministic mode.

Uniform (Smooth) Hash Function: Let H : {0, 1}* → Z/qZ be the effective hash function. Let U ⊆ {0, 1}* be such that:

1. For m ∈_R U, e = H(m) can be efficiently generated.
2. For each e ∈ Z/qZ, the set P_e = H^{-1}(e) ∩ U is sufficiently large so that, even if P_e is known, the probability 1/|P_e| is sufficiently small to make guessing a randomly selected secret element of P_e infeasible.

Let b ∈_R {1, 2}. A probabilistic algorithm D_H is an (ε_D, τ_D, U)-distinguisher for H if D_H accepts as input e ∈_R Z/qZ if b = 1, or e = H(m) for m ∈_R U if b = 2, and if D_H outputs, in running time at most τ_D, a guess d ∈ {1, 2} such that d = b with probability at least 1/2 + ε_D, assessed over the random choices of b, e and D_H. If no such D_H exists, then H is uniform, or smooth, of strength (ε_D, τ_D, U).

Uniformity (or smoothness) can be exhibited by some very simple "hash" functions that are neither collision resistant nor one-way. For example, the function

    H : {0, 1}* → Z/qZ : x̄ ↦ x (mod q),

where x is the integer represented by the bit-string x̄, is uniform for U = {x̄ : 0 ≤ x < q²}. Thus, it is highly plausible that the effective hash function for ECDSA is also very uniform for an appropriate choice of U if H is built from a cryptographic hash function.
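The uniformity of this example can be verified by exhaustive counting for a small modulus (q = 11 below is an arbitrary illustrative choice): every hash value has exactly q preimages in U, so the induced distribution on Z/qZ is exactly uniform.

```python
from collections import Counter

q = 11
U = range(q * q)              # U = { x : 0 <= x < q^2 }
H = lambda x: x % q           # the example "hash": reduce modulo q

counts = Counter(H(x) for x in U)
# Each hash value has exactly q preimages in U: perfectly uniform,
# even though H is neither collision resistant nor one-way.
assert all(counts[e] == q for e in range(q))
```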

A uniform, collision resistant hash is one-way. To see this directly, suppose there was an inverter for a uniform hash function H. Select a random message m ∈_R U and compute e = H(m). Give e to the hash-inverter. The hash-inverter outputs some message m′ such that H(m′) = e = H(m). Thus, (m′, m) is a collision, unless m′ = m. But it is information-theoretically infeasible to find m from e alone, so m′ ≠ m and a collision is found. Moreover, because e is indistinguishable from random elements in Z/qZ, the hash-inverter's probability cannot be affected by the choice of e, or else it would be a distinguisher. Thus the collision finder has the same success rate and efficiency as the hash-inverter. Thus, when our security results use collision resistance and uniformity as hypotheses, one-wayness is an implicit additional hypothesis [321].

Ideal Group Model: The ideal group model, or generic (group) model, was introduced into cryptography by Shoup [303]. A version of this model was applied to ECDSA by Brown [52]. In the ideal group model, the group operation is accessed only via an oracle, which accepts queries of two elements that will be subtracted. The representations of group elements are random elements of a subset. The representations may be thought of as encryptions. Any practically implementable group is not an ideal group, since a practical implementation must perform group operations without invoking an oracle.

When the group is modelled as ideal, the adversary's queries to and responses from the oracle can be analyzed in the following way. Some queries will be group elements that have not occurred in earlier queries or responses. These elements can be regarded as independent. All other elements can be regarded as integer combinations of the independent elements. So long as distinct integer combinations represent distinct elements, the oracle has not revealed any relations between the independent elements; otherwise, a dependency will have been discovered. Since the independent elements are effectively random elements in the group, it is possible to formulate the probability of a dependency being discovered, and it is small if the number of queries is small. By convention, the base point G and public key Y are independent. If the number of queries is small, then the probability that a dependency such as Y = [x]G, equivalent to finding the private key, could be discovered is small. When a group is modelled as ideal, it is possible to prove, as Shoup did [303], that certain problems related to the group (such as the discrete logarithm problem) cannot be solved with less than a certain number of queries.

Algorithms that solve problems in a group and work for any group, including an ideal group, are called generic. Conversely, algorithms that solve problems in a specific group and do not work for other groups, including an ideal group, are called specific. Generic algorithms for solving the discrete logarithm problem cannot be much faster than the square root of the largest prime factor of the group order [303], whereas specific algorithms might potentially work faster for particular groups. A provable security result in the generic group model only protects against adversaries that are generic with respect to the group. Adversaries to ECDSA specific to elliptic curve groups are thus not ruled out.

More theoretically, Dent [100] has shown it is also possible that a given cryptographic scheme with a provable security result in the ideal group model has the property that there exists a successful adversary against the scheme for all efficiently implementable groups. This adversary is not a generic group adversary because it is limited to efficiently implementable groups. This limitation, though, involves the creation of contrived schemes designed to allow such adversaries, and thus it is generally believed to be only a theoretical limitation.

Ideal Hash Model: Also known as the random oracle model or ideal hash paradigm, the ideal hash model is analogous to the ideal group model. More precisely, it models the hash function by a random oracle function. Any security proof in the ideal hash model should be regarded as only a validation or argument of security with respect to the design of the scheme, irrespective of the hash function. Moreover, it suffers from the same severe limitations as the ideal group model, as shown by Canetti, Goldreich and Halevi [54].

II.3. Provable Security Results

The following provable security results are in the opposite direction of the

results about the necessity of certain conditions on the primitive components

of ECDSA.

Theorem II.7. If the effective hash function is rarely zero and semi-logarithm resistance holds for the group and conversion function, then ECDSA has passive selective unforgeability.

The conditions in the hypothesis of this result are also necessary conditions. The passive selective unforgeability of ECDSA is equivalent to these conditions. The main way this result could be improved is to determine whether semi-logarithm resistance is equivalent to or strictly weaker than discrete logarithm resistance.

Theorem II.8. If the hash function is zero-resistant and weakly collision resistant (second-preimage resistant), the conversion function and group are together adaptive semi-logarithm resistant, and the random number generator is pseudo-random (indistinguishable from random), then ECDSA has active selective unforgeability.

Theorem II.9. If the hash function is zero-resistant and one-way, the conversion function is almost invertible, and the group is modelled as ideal (i.e., as a generic group), then ECDSA has passive existential unforgeability.


Another result for passive existential unforgeability is a special case of the

next result below for active existential unforgeability. The conditions of this

special case are no stronger or weaker than the conditions of the result above.

Theorem II.10. If the hash function is idealized as a random oracle, then semi-logarithm resistance implies that ECDSA has active existential unforgeability.

Because semi-logarithm resistance is a necessary condition, the result is tight, modulo the random oracle model for the hash function. Using a random oracle allows us to weaken the adaptive semi-logarithm resistance used in the result for active selective unforgeability.

Theorem II.11. If the elliptic curve group is modelled as a generic group, then almost invertibility of the conversion function f, and smoothness, collision resistance and zero resistance of the hash function H together imply that ECDSA has active existential unforgeability.

Results in both directions are summarized in Table II.2, which should help illustrate the gap between the various necessary and sufficient conditions. The first column indicates the type of security in terms of passiveness and selectiveness. The second column gives the direction of the result: → to show that security implies certain conditions, and ← to show that security is implied by certain conditions. The remaining columns are for the underlying components of ECDSA. Rows with → list all the conditions necessary, and rows with ← list all the conditions (or models) that together are sufficient.

Notice that the strongest unforgeability type (corresponding to the least harmful, i.e., an active existential forger) has two results complementary in the sense of using different idealized models, one idealizing the hash (ROM) and the other idealizing the group (GEN). The only other security type that uses an idealized model (GEN) is the passive existential. The proof of active existential unforgeability using the idealized hash model specializes to passive existential unforgeability, so this type of security also has complementary idealizations in provable security results.

II.4. Proof Sketches

II.4.1. Passive Selective Unforgeability. Suppose F is a passive selective forger and the hash function H is rarely zero. Then we will find a semi-logarithm to the base G at a random challenge point P as follows. Run F on a randomly selected message m and public key Y = [H(m)]P to get a forgery (r, s). Then (r, H(m)^{-1}s) is the desired semi-logarithm.
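This sketch can be simulated end to end in a toy group, with the forger played by a routine that (for testing only) signs with the private key matching Y; the group and every parameter value below are illustrative assumptions, not part of the text.

```python
p, q, G = 23, 11, 2           # toy order-11 subgroup of Z_23^*
f = lambda P: P % q

def verify(Y, e, r, s):
    w = pow(s, -1, q)
    return r == f(pow(G, w * e % q, p) * pow(Y, w * r % q, p) % p)

# Simulated passive selective forger: for the test it simply signs with
# the private key matching Y (a real forger would have no such access).
def forger(Y, e, priv, k=6):
    r = f(pow(G, k, p))
    s = pow(k, -1, q) * (e + r * priv) % q
    return r, s

pi = 4; P = pow(G, pi, p)     # challenge point (pi known only to the harness)
e = 9                         # e = H(m) for the selected message m
Y = pow(P, e, p)              # run the forger on public key Y = [H(m)]P
r, s = forger(Y, e, priv=pi * e % q)
assert verify(Y, e, r, s)

t, u = r, pow(e, -1, q) * s % q   # (r, H(m)^{-1} s)
# (t, u) satisfies the semi-logarithm equation for the challenge P:
assert t == f(pow(G * pow(P, t, p) % p, pow(u, -1, q), p))
```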

Table II.2. Summary of All Results

  Security   Dir   h           f, G       k
  SEL PAS    →     RZ          SLR        —
             ←     RZ          SLR        —
  SEL ACT    →     ZFR+WCR     SLR        NAB
             ←     ZFR+WCR     ASLR       PR
  EXI PAS    →     ZFR+OW      SLR        —
             ←     ZFR+OW      AI, GEN    —
  EXI ACT    →     ZFR+CR      SLR        NAB
             ←     ROM         SLR        PR
             ←     ZFR+CR+S    AI, GEN    PR

  PAS  Passive           SLR   Semi-logarithm resistant
  ACT  Active            ZFR   Zero-resistant
  SEL  Selective         CR    Collision-resistant
  EXI  Existential       GEN   Generic group
  OW   One-way           ASLR  Adaptive semi-logarithm resistant
  RZ   Rarely zero       ROM   Random oracle hash
  PR   Pseudo-random     AI    Almost invertible
  S    Smooth            WCR   Weakly collision-resistant
  NAB  Arithmetically unbiased

II.4.2. Active Selective Unforgeability. Suppose F is an active selective forger. Then we will either solve the adaptive semi-logarithm problem to the base G at a random challenge point P, or find a zero or second preimage of a random challenge message m for the hash function H, as follows. Run F on a randomly selected message m and public key Y = [H(m)]P. Answer signature queries m_i ≠ m by querying the shifted semi-logarithm oracle on e_i = H(m)^{-1}H(m_i) if e_i ≠ 1. If some e_i = 1, then stop, having found a second preimage m_i of m. Otherwise, F produces a forgery (r, s) on the message m. Then (r, H(m)^{-1}s) is the desired semi-logarithm.

II.4.3. Passive Existential Unforgeability. Suppose F is a passive existential forger. Then we will be able to invert the hash function H at a random challenge e ∈ Z/qZ as follows. Run F against a modified generic group oracle. The generic group oracle is modified in that a random preselected oracle response is designated to take the value f^{-1}(ey/g), where g and y are the integers such that the forger can observe that the output of the group query is [g]G + [y]Y (queries not of this form are not selected for modification). Because e is random and f is almost-invertible, the modified response will be effectively random in Z/qZ, and therefore the success rate of F will remain unaffected. If Q is the number of group queries used by F, then 1/Q is the chance that the message forgery will satisfy s^{-1}H(m) = g and s^{-1}r = y, and in particular, m is a preimage of e.

II.4.4. Active Existential Unforgeability with Idealized Hash. Suppose F is an active existential forger. Then we will use F to find the semi-logarithm to the base G of a challenge point P, as follows. Run F against a random oracle hash, modified as follows. A random preselected oracle response is designated to take a value e chosen at random from Z/qZ. Because e is random, the modified response will be effectively random in Z/qZ, and therefore the success rate of F will remain unaffected. Run the active existential forger with challenge public key Y = [e]P. If h is the number of hash queries used by F, then 1/h is the chance that the message forgery will satisfy H(m) = e, in which case we have the semi-logarithm (r, s/e), where (r, s) is the signature.

In order to handle signature queries, a further modification to the random oracle hash is used. To answer signature queries for a message m_i, just choose a random pair (x_i, y_i) ∈ (Z/qZ)² and compute r_i = f([x_i]G + [y_i]Y), s_i = r_i/y_i and e_i = x_i r_i/y_i. If the forger queries m_i to the hash oracle later in the run, the oracle responds with e_i. To ensure that signature queries can be answered after hash queries, answer every hash query m_i using the method above. The hash query responses e_i are effectively random, and therefore the random oracle hash is indistinguishable from a true random oracle from the forger's perspective.

Nevertheless, the modified query responses do have a distinguishable impact on the signature query responses, because these become deterministic. This phenomenon also applies to the forking lemma approach. Thus, arguably, this security proof only applies to the deterministic mode of ECDSA. Indeed, intuitively, randomized signatures do potentially leak more information about the private key than deterministic signatures, because an adversary might somehow be able to exploit multiple different signatures of a single message to compute information about the private key.

II.4.5. Active Existential Unforgeability with Idealized Group. Suppose that F is an active existential forger, that the hash function H is zero-resistant and smooth, and that the conversion function is almost invertible. We will use F to find a collision in the hash function H. Run F against a modified generic group oracle. The generic group oracle is modified in that each oracle response is designated to take the value f^{-1}(H(m_i)y/g), where m_i is chosen at random from U, and where g and y are the integers such that the forger can observe that the output of the group query is [g]G + [y]Y (queries not of this form are not selected for modification). Because H(m_i) is indistinguishable from random and f is almost-invertible, the modified responses will be effectively random in Z/qZ, and therefore the success rate of F will remain unaffected. Furthermore, because H is smooth, F will not be able to guess the m_i. Therefore the forger's signature queries and forged message m are distinct from the messages m_i. In order for the forged signature to verify with chance better than random, it must correspond to one of the queries involving some m_i, and therefore H(m) = H(m_i), which is the desired collision.

36 II. ON THE PROVABLE SECURITY OF ECDSA

II.5. Further Discussion

II.5.1. Semi-Logarithms Versus Discrete Logarithms. The discrete logarithm problem is traditionally considered the primary basis for the security of ECDSA. But the semi-logarithm problem is considered in this chapter because it is not obvious whether it is equivalent to, or weaker than, the discrete logarithm problem. The security of ECDSA depends on this potentially weaker problem, so the security of ECDSA will be clarified once the relationship between these two problems is established.

If it is ever proved that ECDSA is secure provided the elliptic curve discrete logarithm problem is intractable, then this would establish the equivalence of the elliptic curve semi-logarithm and discrete logarithm problems. This is true even if the security proof uses the random oracle model for the hash function. Also, elliptic curve experts can study the equivalence of these two problems without looking at ECDSA or hash functions.

II.5.2. Applying the Results to ECDSA Variants. Of course, a number of variants of ECDSA are possible, and it is worthwhile to consider briefly whether the results apply to these variants.

Hashing [k]G: One suggested variant of ECDSA includes the ephemeral public key [k]G as part of the input to the message-digesting hash function. In the random oracle model for hash functions, this adds to the provable security, because the active existential unforgeability of the variant can be proved relying on the discrete logarithm problem rather than the semi-logarithm problem. On the other hand, under different hypotheses this variation diminishes the provable security, as follows. Some security proofs for ECDSA no longer apply. The proof of active selective unforgeability that idealizes neither hash nor group, for example, does not apply. The obstacle to these proof techniques seems to result from the difficulty of separating the roles of the group and the hash in the variant. Furthermore, this also exemplifies the principle that design complexity generally hinders provable security.

DSA and One-Way Conversion Functions: Almost-invertibility does not hold for the DSA conversion function. Indeed, in DSA, the conversion function is probably a good one-way function, which is quite the opposite of almost-invertible. Therefore, the provable security results using almost-invertibility of the conversion function do not apply well to DSA. Consequently, DSA and ECDSA have different provable security properties; in particular, they are not as analogous as they at first seem.

One-wayness, however, would intuitively seem to add security. The situation seems paradoxical. One explanation is that almost-invertible means the function is similar to the trivial identity function, and the identity function is very simple to analyze. A more complex design might be more secure, although it could also be more complex to analyze. With added complexity, one can never discount the possible appearance of an attack. For DSA, it is possible that somebody could attack the conversion function. For example, DSA could be insecure because the conversion function used is not almost-bijective, or for some other reason. One could assume that the DSA conversion function is almost-bijective and try to find a provable security result, but nobody has done this yet.

The intuition that a one-way conversion function imparts some kind of security attribute is not entirely ungrounded. Almost-invertibility means that the public key can be recovered from the message and signature (with reasonable probability). A one-way conversion function seems to prevent this. This difference does not have an impact on GMR security, but it could have other impacts, such as on anonymity (hiding the signer's identity) or efficiency (omitting the public key). Hiding the public key, however, is not a stated objective of ECDSA.

Non-Pseudorandom k: No result has shown that k needs to be indistinguishable from a uniform random integer in [1, q − 1]. Indeed, since ECDSA is not meant to provide confidentiality, the need for indistinguishability is not clear. Intuitively, a weaker condition than pseudo-randomness ought to be sufficient for ECDSA. Certainly, the ephemeral private keys must be unguessable and arithmetically unbiased, because of known attacks, but these are weaker conditions than pseudo-randomness.

To see why pseudo-randomness might not be necessary for k, consider the following. Choose truly random ephemeral private keys k subject to the condition that their hashes display a given pattern. Such k fail to be pseudo-random, because they can be distinguished by applying the hash function, yet they do not seem to be weak: they are unlikely to have an attackable arithmetic bias, and they may have enough entropy to be unguessable.

Also, some of the results do not involve a signing oracle and therefore do

not require the ephemeral private keys k to be generated pseudo-randomly.

Deterministic k: In some of the proofs, the signing oracle has the property that the same message query always gives the same signature response. Technically, this means the proof is only applicable to the deterministic mode of ECDSA signing, where k is chosen as a secret deterministic function of the message m being signed. An intuitive explanation of why the deterministic mode is more secure is that it reveals fewer signatures, and therefore less information about the private key. A very cautious implementation of ECDSA could use the deterministic mode so that these provable security results apply.


II.5.3. Attack-Like Attributes of ECDSA. Despite the proofs of GMR security of ECDSA, it might be argued that GMR security itself is not the "right" definition. Logically speaking, of course, a definition, by definition, cannot be right or wrong. Nonetheless, cryptology is a practical science, not a purely mathematical one, and therefore definitions ought to be tailored to pertinent concerns, not purely arbitrary ones. With this perspective, some alternative definitions of security for signatures, in which ECDSA can be deemed "insecure", are explored and assessed for their pertinence.

Many of the attributes that we explore hinge on the particular conversion function f used in ECDSA. Altering f to avoid these attributes could potentially do more harm than good, diminishing both the reputational security and the provable security of ECDSA. Accordingly, these attributes are best addressed through other means.

Signature Non-Anonymity: Given a valid ECDSA signature (r, s) on a message m, the associated public key Y can be recovered as follows. (Note that this does not violate the GMR definition of signature security.) Solve for the public key as Y = [r^{-1}]([s]R − [H(m)]G), where R is selected from f^{-1}(r), the set of points in the preimage of r.
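The recovery formula can be checked concretely. The sketch below is a toy illustration: the elliptic curve group is replaced by the subgroup of prime order q = 11 generated by g = 2 modulo p = 23, with f(X) = X mod q standing in for the conversion function, and the preimage set f^{-1}(r) is enumerated by brute force. All names and parameters are illustrative assumptions.

```python
import random

# Toy stand-in for the curve: the subgroup of order q = 11 generated
# by g = 2 in the multiplicative group mod p = 23.
p, q, g = 23, 11, 2
subgroup = sorted({pow(g, i, p) for i in range(q)})

def f(X):
    return X % q

def sign(x, e):
    """(EC)DSA-style signing with private key x and message hash e."""
    while True:
        k = random.randrange(1, q)
        r = f(pow(g, k, p))
        if r == 0:
            continue
        s = pow(k, -1, q) * (e + x * r) % q
        if s != 0:
            return r, s

def recover_keys(e, r, s):
    """Y = [r^{-1}]([s]R - [H(m)]G) for each R in the preimage f^{-1}(r)."""
    r_inv = pow(r, -1, q)
    g_e_inv = pow(pow(g, e, p), -1, p)
    candidates = []
    for R in subgroup:          # enumerate f^{-1}(r) by brute force
        if f(R) == r:
            candidates.append(pow(pow(R, s, p) * g_e_inv % p, r_inv, p))
    return candidates

x = random.randrange(1, q)      # signer's private key
Y = pow(g, x, p)                # signer's public key
e = random.randrange(1, q)      # stands in for H(m)
r, s = sign(x, e)
assert Y in recover_keys(e, r, s)
```

The true public key always appears among the candidates, one per preimage of r.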

Self-Signed Signatures: A signature of a message is self-signed if the message contains the signature. A self-signed ECDSA signature can be generated as follows. Choose random k and s. Compute r = f([k]G). Form the message m containing the signature (r, s). Compute e = H(m). Now solve for a private key x that makes this signature valid, which can be found as x = (±sk − e)/r (mod q).
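The construction can be checked in a toy group of the same shape (order q = 11 modulo p = 23, f(X) = X mod q; all parameters illustrative): pick k and s first, embed (r, s) in the message, then solve for x.

```python
import hashlib
import random

# Toy stand-in for the curve: subgroup of order q = 11 generated by
# g = 2 mod p = 23, with conversion function f(X) = X mod q.
p, q, g = 23, 11, 2

def f(X):
    return X % q

def H(m):                        # hash a byte string into Z/qZ
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % q

def verify(Y, m, r, s):
    w = pow(s, -1, q)
    R = (pow(g, H(m) * w % q, p) * pow(Y, r * w % q, p)) % p
    return f(R) == r

# Self-signed signature: choose k and s, embed (r, s) in the message,
# then solve for the private key x = (s*k - e)/r mod q.
while True:
    k, s = random.randrange(1, q), random.randrange(1, q)
    r = f(pow(g, k, p))
    if r == 0:
        continue
    m = f"message carrying its own signature ({r},{s})".encode()
    e = H(m)
    x = (s * k - e) * pow(r, -1, q) % q
    if x != 0:
        break

Y = pow(g, x, p)
assert verify(Y, m, r, s)
```

The signature verifies because (e + xr)/s = k by construction, so the verifier recomputes exactly the point [k]G.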

This attribute does not violate GMR security. Indeed, it may be a useful attribute in the sense that it can be used to ensure that the private key was not stolen. It may also be useful for server-assisted key generation, where a server adds entropy to the message m so that the signer's private key x has enough entropy. Additional modifications to the self-signed signature verification are necessary, however, if the server cannot be trusted and the signer's entropy for k is weak.

Unknown Private Key: A valid ECDSA signature can be generated without knowing the private key, and yet without violating the GMR definition of signature security, as follows. This can be done for any elliptic curve domain parameters and any message m, by first generating a random value of the signature (r, s) and then solving for the public key as Y = [r^{-1}]([s]R − [H(m)]G), where R ∈ f^{-1}(r), the set of points in the preimage of r. If f^{-1}(r) = ∅, then just try another value of r.


This attribute does not defeat GMR security because it does not attack a target public key. This "attack" could be thwarted if f were made one-way, but then the provable security offered by the proofs assuming almost-invertibility of f would be lost.

Invalid Public Key: An ECDSA signature that can be verified can be generated for as many messages as desired, after committing beforehand to a public key Y, as follows. Commit to an invalid public key Y of small order a (for this to work, the verifier must use Y without validating it). Given a message m, generate random s and t. Compute r = f([H(m)]G + [t]Y). If r ≡ t (mod a), then (r, s) will be valid.

This attribute does not defeat GMR security because it is the adversary that chooses the public key, as in the previous example, and moreover, the public key is invalid. Notwithstanding that this attribute is not a real attack against a signer, the verifier is well advised to validate all public keys, because it would be rather foolish to rely on invalid public keys.

Duplicate Signatures: An ECDSA signature that is valid for two given messages m_1 and m_2 can be generated as follows. Choose random k, then compute R = [k]G, r = f(R), x = −(2r)^{-1}(H(m_1) + H(m_2)) (mod q), Y = [x]G and s = k^{-1}(H(m_1) + xr) (mod q). This works because f(P) = f(−P).
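A sketch of this construction on the secp256k1 curve (its standard parameters are assumed below). Working through the verification equation with s = k^{-1}(H(m_1) + xr) shows that the crafted private key must carry a minus sign, x = −(2r)^{-1}(H(m_1) + H(m_2)) (mod q), so that the two verification points come out as [k]G and [−k]G, which share an x-coordinate.

```python
import hashlib

# secp256k1 parameters (a well-known curve; any prime-order curve works).
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                  # point at infinity
    if P == Q:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, p) % p   # a = 0 on secp256k1
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P); k >>= 1
    return R

def H(m):
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % n

def verify(Y, e, r, s):
    w = pow(s, -1, n)
    R = add(mul(e * w % n, G), mul(r * w % n, Y))
    return R is not None and R[0] % n == r

e1, e2 = H(b"first message"), H(b"second message")
k = 123456789                            # any nonzero ephemeral key
r = mul(k, G)[0] % n
x = -(e1 + e2) * pow(2 * r, -1, n) % n   # crafted private key
Y = mul(x, G)
s = pow(k, -1, n) * (e1 + x * r) % n

# One signature (r, s), valid for both messages:
assert verify(Y, e1, r, s) and verify(Y, e2, r, s)
```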

This attribute of ECDSA does not defeat GMR security because it requires generation of the public key, as in both the previous examples. One can argue that duplicate signatures qualify as an "attack" defeating non-repudiation; one would then conclude that GMR security does not lead to non-repudiation.

On the other hand, to repudiate a signature of a message m_2, on the grounds that it is a duplicate, is to argue that the signer intended to sign m_1 and that some adversary discovered the duplication with m_2. If an adversary could do this, then GMR security would be defeated, because the signer did not sign m_2. So the repudiation argument reduces to claiming that ECDSA is not GMR-secure. Therefore the link between GMR security and non-repudiation has not been broken.

This particular duplicate signature attack is of even less concern than a general duplicate signature attack, because it exposes the private key as x = −(2r)^{-1}(H(m_1) + H(m_2)) (mod q). The hypothetical third-party forger could therefore compute the private key and discard the duplicate signature in favour of full-fledged forgeries, drastically weakening the credibility of the alleged repudiator. Also, exposure of the private key almost certainly shows that the signer intentionally created the private key solely in order to fraudulently repudiate the signature.

Malleable Signatures: A valid ECDSA signature that was never generated by the signer with public key Y can be generated, given any message m, as follows. Get the signer to generate a signature (r, s) of the message m, and then generate (r, −s (mod q)) as the desired signature. This works because f(P) = f(−P), as in the previous example.
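A sketch of the malleation on secp256k1 (standard parameters assumed): sign a message normally, then check that (r, −s mod n), with n the group order in the code, also verifies.

```python
import hashlib
import random

# secp256k1 parameters (a well-known curve; any prime-order curve works).
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                  # point at infinity
    if P == Q:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P); k >>= 1
    return R

def H(m):
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % n

def sign(x, e):
    while True:
        k = random.randrange(1, n)
        r = mul(k, G)[0] % n
        if r == 0: continue
        s = pow(k, -1, n) * (e + x * r) % n
        if s != 0: return r, s

def verify(Y, e, r, s):
    w = pow(s, -1, n)
    R = add(mul(e * w % n, G), mul(r * w % n, Y))
    return R is not None and R[0] % n == r

x = random.randrange(1, n)
Y = mul(x, G)
e = H(b"some signed message")
r, s = sign(x, e)
assert verify(Y, e, r, s)
assert verify(Y, e, r, n - s)   # the negated signature also verifies
```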

This attribute does not defeat GMR security because the message m has already been signed by the signer. A forgery, under the GMR definition, must be of a message that was never signed, which excludes m.

This attribute has been called malleability, but perhaps benign malleability is more apt. Both terms have precedents in the context of encryption schemes, and this attribute is much more akin to the benign variety, where the malleability does not impinge on the message itself but merely on the cryptographic data. Arguably, benign malleability is always present, in that a given piece of data often has multiple encodings, and any non-malleable scheme can be transformed into a benignly malleable one by applying such an encoding.

CHAPTER III

Proofs of Security for ECIES

A.W. Dent

Provable security in an encryption setting is very similar to provable security in a digital signature setting (see Chapter II). In both cases we aim to make meaningful, mathematically rigorous statements about the security of cryptosystems and provide proofs that these statements are correct.

Generally, a security proof attempts to show how difficult "breaking" a cryptographic scheme is, in terms of how difficult it is to solve some mathematical problem. If we can show that the difference between breaking the cryptographic scheme and solving the underlying mathematical problem is only small, and we assume that solving the underlying problem is difficult to do, then we can have some measure of assurance in the security of the cryptographic scheme. The main difference between proving security in the signature setting and in the encryption setting is deciding what is meant by "breaking" the scheme.

Before we launch into the complicated and arcane world of provable security, it is useful to take a moment to consider its history. The field of provable security for public-key encryption schemes has a history almost as long as public-key encryption itself. The most significant early papers on provable security are by Rabin in 1979 [279] and Goldwasser and Micali in 1984 [149]. These papers proposed schemes with provable security properties based on the properties of modular arithmetic (obviously this was too early for elliptic curves!). For many years after this, provable security remained the province of the academic. It seemed that most practical cryptosystems were too complicated for a formal proof of security to be found. The schemes that did have proofs of security were often thousands of times slower than the ones that were used in practice. It took two revolutionary steps to bring provable security back into the province of the practical cryptographer. The first involved setting out the correct model for an attack. This was essentially done by Rackoff and Simon in 1991 [280]. The second was the formalization of the random oracle model by Bellare and Rogaway in 1993 [24]. The random oracle model involves assuming that any hash functions are ideal, i.e., act as totally random functions, and it allows for a much easier analysis of a scheme.

Throughout this chapter we will be considering the ECIES encryption scheme; see Section I.4. ECIES provides a very good example of the advantages and disadvantages of the provable security approach to cryptography.


The scheme was one of the first formalized versions of a hybrid encryption scheme, that is, an asymmetric encryption scheme that uses both asymmetric and symmetric cryptographic techniques. Essentially, ECIES uses an elliptic curve Diffie–Hellman key transport to generate a random symmetric key, which is used to encrypt and MAC a message.

The idea of hybrid encryption had been folklore amongst the cryptographic community for years but had never been formalized. This led to some weak implementations of the idea (see, for example, the attack of Boneh, Joux and Nguyen [41]). ECIES was introduced as a concrete example of a hybrid encryption scheme, and it came with a security proof that guaranteed that, provided the scheme was used in the right way, it could not be broken. On top of this, because the idea of hybrid encryption had been known to the cryptographic community for so long, the scheme also had a high level of reputational security. It is not surprising, then, that it quickly found its way into many mainstream cryptographic products and standards.

III.1. Definitions and Preliminaries

III.1.1. Security for Encryption. To be able to make any meaningful mathematical statements about the security of an encryption scheme, we first have to be very clear about what constitutes an encryption scheme.

Definition III.1. A public-key encryption scheme is a triple of algorithms (G, E, D) where:
• G is a probabilistic algorithm called the key generation algorithm. It takes no input (except randomness) and outputs a key-pair (pk, sk). Here pk is the public key and must be distributed to everyone who wishes to use the encryption algorithm, and sk is the private key and should only be made available to those parties who have permission to decrypt messages.
• E is a probabilistic algorithm called the encryption algorithm. It takes as input the public key pk and a message m drawn from some message space M defined by the public key. It outputs a ciphertext C in some ciphertext space C, also defined by the public key.
• D is a deterministic algorithm called the decryption algorithm. It takes as input the private key sk and a ciphertext C from the ciphertext space C, and it returns either a message m ∈ M or an error symbol ⊥.

A public-key encryption scheme is sound if for all valid key-pairs (pk, sk) we have that D(E(m, pk), sk) = m for every message m ∈ M. If E is a deterministic algorithm, then (G, E, D) is said to be a deterministic encryption scheme (even though G is still a probabilistic algorithm).
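As an illustration of Definition III.1 (and not of ECIES itself), the sketch below implements textbook ElGamal in a toy subgroup of prime order q = 11 modulo p = 23 and checks the soundness condition; all parameters are illustrative assumptions.

```python
import random

# A toy instantiation of a public-key scheme (G, E, D): textbook
# ElGamal in the subgroup of order q = 11 generated by g = 2 mod
# p = 23. The message space M is the subgroup itself.
p, q, g = 23, 11, 2
M = sorted({pow(g, i, p) for i in range(q)})

def G():                                  # key generation
    sk = random.randrange(1, q)
    return pow(g, sk, p), sk              # (pk, sk)

def E(m, pk):                             # probabilistic encryption
    k = random.randrange(1, q)
    return (pow(g, k, p), m * pow(pk, k, p) % p)

def D(C, sk):                             # deterministic decryption
    c1, c2 = C
    if c1 not in M or c2 not in M:
        return None                       # error symbol, standing for ⊥
    return c2 * pow(pow(c1, sk, p), -1, p) % p

# Soundness check: D(E(m, pk), sk) = m for every m in M.
pk, sk = G()
assert all(D(E(m, pk), sk) == m for m in M)
```

Note that E is probabilistic (a fresh k per call) while D is deterministic, exactly as the definition requires.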

An attacker for a public-key encryption scheme is a pair of probabilistic algorithms A = (A1, A2). The precise nature of the attacker depends upon two factors: (1) what the attacker is trying to do, and (2) what access the attacker has to the scheme. These days we only really consider attackers who are trying to do one of two things: either an attacker is trying to invert a ciphertext to find the message it represents (the one-way game), or they are trying to tell which of two messages a ciphertext is the encryption of (the indistinguishability game).

In both cases the attacker plays a game against a challenger. The challenger isn't a real person or even a computer program; it is simply a convenient way of describing the way in which the attacker's inputs are constructed. Take, for example, the one-way game. Here a challenger picks a message uniformly at random from the set M of all possible messages, encrypts it, and gives the resulting ciphertext to the attacker to attempt to decrypt. In reality the "challenger" does not exist (who would implement a cryptosystem and include a program that just encrypts random messages for no apparent purpose?); it is simply a convenient way for us to explain the precise mathematical model that we use to assess the security of the scheme.

Definition III.2. The one-way (OW) game for an attacker A = (A1, A2) consists of four major steps:
1. A challenger generates a random key-pair (pk, sk) by running the key generation algorithm G.
2. The attacker runs A1 on the input pk. It returns some state information s.
3. The challenger chooses a message m uniformly at random from the message space M. It computes the challenge ciphertext C* = E(m, pk).
4. The attacker runs A2 on the input (C*, pk, s). It returns a guess m′ for m.

The attacker wins the game if m′ = m.

The one-way game is a fairly weak notion of security. There are many schemes that are secure against attackers playing the one-way game but are still not secure enough to be used in practice. The indistinguishability game is a much stronger notion of security.

In the indistinguishability game the attacker is asked to provide two messages (usually termed m0 and m1). The challenger picks one of these messages at random and encrypts it, giving the resulting ciphertext C* back to the attacker. The attacker then has to guess which message the challenger encrypted. This may initially seem quite easy for the attacker to do: surely the attacker can just encrypt both of the messages and compare their encryptions to C*? Of course the attacker can do this, but it may not help. We must remember here that the encryption algorithm may be probabilistic, which means that if the same message is encrypted twice we are unlikely to get the same ciphertext both times, and knowing one encryption of a message may not help us recognise another encryption of the same message.


Definition III.3. The indistinguishability (IND) game for an attacker A = (A1, A2) consists of four major steps:
1. A challenger generates a random key-pair (pk, sk) by running the key generation algorithm G.
2. The attacker runs A1 on the input pk. It returns two messages m0 and m1, as well as some state information s.
3. The challenger chooses a bit σ ∈ {0, 1} uniformly at random. It computes the challenge ciphertext C* = E(m_σ, pk).
4. The attacker runs A2 on the input (C*, pk, s). It returns a guess σ′ for σ.

The attacker wins the game if σ′ = σ.

The advantage of the attacker A in playing the IND game is defined to be

Adv_A = |Pr[σ′ = σ] − 1/2|.

The advantage of an attacker is a measure of how much better the attacker A is than an attacker who simply guesses σ at random. An attacker who guesses σ uniformly at random has a success probability of 1/2 and an advantage of 0. It should be noted that some authors prefer to define the advantage of an attacker to be twice this value, so that it is nicely scaled as a value between 0 and 1.
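The game and the advantage can be sketched in code. The toy scheme below is deliberately deterministic (ElGamal in a tiny group with the ephemeral value fixed), so an attacker who re-encrypts a candidate message and compares with the challenge wins every time and achieves the maximal advantage of 1/2. All names and parameters are illustrative.

```python
import random

# Toy *deterministic* public-key scheme: textbook ElGamal mod p = 23
# with the ephemeral value fixed at 1, so encrypting the same message
# twice always gives the same ciphertext.
p, q, g = 23, 11, 2

def keygen():
    sk = random.randrange(1, q)
    return pow(g, sk, p), sk

def encrypt(m, pk):                 # deterministic by construction
    return (g, m * pk % p)

def ind_game(A1, A2):
    """One run of the IND game; returns True iff the attacker wins."""
    pk, sk = keygen()
    m0, m1, state = A1(pk)          # attacker chooses two messages
    sigma = random.randrange(2)     # challenger flips a coin
    C_star = encrypt((m0, m1)[sigma], pk)
    return A2(C_star, pk, state) == sigma

# The attacker: pick two distinct messages, then re-encrypt m1 and
# compare with the challenge; this works only because E is deterministic.
def A1(pk):
    return 2, 4, None

def A2(C_star, pk, state):
    return 1 if encrypt(4, pk) == C_star else 0

wins = sum(ind_game(A1, A2) for _ in range(200))
advantage = abs(wins / 200 - 0.5)
assert wins == 200 and advantage == 0.5
```

A probabilistic scheme would defeat this particular attacker, which is exactly the point made in the discussion above.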

Another aspect that frequently causes some confusion is the state information s. The state information s that A1 passes to A2 can contain any information that may be of help to A2. This could include the messages m0 and m1, information about the way in which A1 chose them, information about the decryptions of certain ciphertexts, or any other information that A1 thinks may be of use to A2 in winning the game. It is nothing mysterious; it is simply a way of making sure that A2 knows what A1 was doing when it chose m0 and m1.

The indistinguishability game is a strong notion of security. Suppose there were some algorithm A that could, given a ciphertext, deduce whether the decryption of that ciphertext would pass or fail some test T; the test T could be something like whether the last bit of the message is 0 or 1, or whether the sum of the bits in the message is even or odd. We could then construct an attacker that could win the indistinguishability game by choosing the two messages m0 and m1 such that m0 passes the test T and m1 fails it. The attacker could then tell whether the challenge ciphertext is an encryption of m0 or m1 by running the algorithm A to see if the decryption of the challenge ciphertext would pass or fail the test T.

This means that if a cryptosystem is secure against attackers playing the indistinguishability game, then an attacker can gain no meaningful information about a message from its encryption (i.e., no information about whether the message would pass or fail any kind of test).


Next we must consider what access an attacker has to the scheme. Obviously the attacker must have access to a description of the algorithms in the scheme and the public key pk. This means that an attacker will be able to encrypt messages for himself. However, just as we allowed an attacker to request the signatures of certain messages when considering the security of digital signatures, we may allow an attacker to decrypt certain ciphertexts of his choice. Obviously care must be taken to make sure that the attacker isn't allowed to decrypt the challenge ciphertext!

An attacker requests the decryption of a ciphertext C by outputting C and then entering a special freeze state. The attacker is then given the response to his query and re-started. In this way we can think of an attacker A = (A1, A2) as consisting of two algorithms A1 and A2 that each work in the same way as the forger for a digital signature (see Definition II.3). The first algorithm A1 runs in multiple rounds, each consisting of two plays, the first made by the challenger and the second by the attacker. In Round 0, the challenger starts by generating a valid key-pair (pk, sk). A1 is then given the public key pk and some predetermined initial state X0. It then outputs a state X1 and either a request for the decryption of a ciphertext C or, in the case of the indistinguishability game, two messages (m0, m1). In the case of the one-way game, A1 halts without any extra output when it terminates. If A1 has requested the decryption of a ciphertext, then the challenger computes the decryption m of C using the private key sk and re-starts A1 with state X1 and input m. The second algorithm A2 works similarly but initially takes as input the challenge ciphertext C* and an initial state X_i that was given by A1 in its final round (this is equivalent to the state information s). When A2 terminates, it must output its guess for either σ (in the indistinguishability game) or m (in the one-way game).

Whilst this is one way of thinking about an attacker's ability to request decryptions, it will suit our purposes to be a little more relaxed and merely to think of an attacker as having the ability to ask a god-like oracle for them. If an attacker is allowed to request decryptions, then it is said to have access to a decryption oracle, and these requests are assumed to take a single unit of time to answer (such is the power of the oracle).

There are three models that are used to characterise an attacker's access to a decryption oracle.

Definition III.4. Consider an attacker A = (A1, A2).

If the attacker has no access to a decryption oracle, then it is said to be running a chosen plaintext attack (CPA), because it has the ability to choose which plaintexts (messages) it wishes to encrypt. Remember, it knows the public key and can therefore encrypt any message it wants to. Here the attacker cannot request the decryption of any ciphertext.

If the first algorithm A1 has access to a decryption oracle (i.e., can request the decryption of ciphertexts) but the second algorithm A2 has no access to a decryption oracle, then the attacker is said to be running a chosen ciphertext attack (CCA1).¹

If both the first algorithm A1 and the second algorithm A2 have access to a decryption oracle, then the attacker is said to be running an adaptive chosen ciphertext attack (CCA2). In this case we must assume that the second algorithm only has access to an imperfect decryption oracle that will not decrypt the challenge ciphertext C*. This is the strongest notion of access that we will consider here.

It is easy to see that if an attacker can break a scheme using a chosen plaintext (CPA) attack, then there is an attacker who can break that scheme with a chosen ciphertext (CCA1) attack. Hence, if we can show that a scheme is secure against attackers running CCA1 attacks, then we can be sure that the scheme is also secure against attackers running CPA attacks. Similarly, a scheme that resists attacks made by CCA2 attackers is also resistant to CCA1 attackers and CPA attackers.

To completely define the level of security that an algorithm has, we must look at the best attackers in a certain model, i.e., we must specify whether the attacker is playing the indistinguishability game or the one-way game and what kind of access the attacker has. We often abbreviate these definitions to their initials. For example, attackers playing the indistinguishability game using adaptive chosen ciphertext attacks are often referred to as IND-CCA2 attackers and are said to be making an IND-CCA2 attack.

These days, it is generally agreed that a public-key encryption scheme is only secure if it resists attackers making IND-CCA2 attacks, and it is ECIES's resistance to this kind of attack that we will be examining. More information about attack models can be found in [20].

III.1.2. Underlying Mathematical Problems. Up to now cryptographers have been unable to find security proofs that prove the security of an algorithm directly (without the need to rely on any assumptions or simplifications). There are several good reasons for this. The first has to do with the very nature of public-key cryptography. The way public-key cryptography works means that it is impossible to find a proof of security in which the attacker has access to infinite computational resources: given the public key of an algorithm, there can only be one corresponding private key, and so an attacker with unlimited time and computational resources could check each possible private key in turn until he finds the correct one (thus breaking the scheme). Therefore, if we wish to make any meaningful statement about the security of a scheme, we must place some kind of bound on the computational power of the attacker.

¹ Attacks that use CCA1 access to a decryption oracle are also sometimes known as midnight attacks or lunchtime attacks, because this is the sort of attack that could be used by an intruder who breaks into an office either at night or whilst the proper occupant is at lunch. If the intruder is unable to recover the occupant's private key directly, then he might try to run some kind of program that makes use of the occupant's decryption rights and that would henceforth allow the intruder to decrypt messages meant for the occupant without having to know his private key.

Even with this restriction we still run into problems. Any efficient proof of security that guaranteed the security of an algorithm in concrete terms would most likely prove the complexity-theoretic conjecture that P≠NP. Whilst this is widely believed to be true, and despite many years of study, a proof has not yet been found. Any such proof would be a major step forward in the field of complexity theory. Therefore the best that one can reasonably hope for from a proof of security is that it relates the security of a scheme to the difficulty of solving some kind of underlying mathematical problem that is believed to be difficult to solve. In other words, even though we use the term "security proof", our actual level of faith in the security of a scheme depends upon how difficult we believe the underlying problem is.

There are three major underlying problems that have traditionally been used in proving the security of elliptic-curve-based cryptosystems. These are all based on the reputational security of the ECDH protocol (see Section I.3). The three traditional underlying problems are the computational Diffie–Hellman problem (CDH), the decisional Diffie–Hellman problem (DDH) and the gap Diffie–Hellman problem (gap DH).

Definition III.5. Let G be a cyclic group with prime order #G (and with the group action written additively), and let P be a generator for G.

The computational Diffie–Hellman problem is the problem of finding [ab]P when given ([a]P, [b]P). We assume that a and b are chosen uniformly at random from the set {1, ..., #G − 1}.

The decisional Diffie–Hellman problem is the problem of deciding whether [c]P = [ab]P when given ([a]P, [b]P, [c]P). We assume that a and b are chosen uniformly at random from the set {1, ..., #G − 1} and c is either equal to ab (with probability 1/2) or chosen uniformly at random from the set {1, ..., #G − 1} (with probability 1/2). The advantage that an algorithm A has in solving the DDH problem is equal to

|Pr[A correctly solves the DDH problem] − 1/2| .

The gap Diffie–Hellman problem is the problem of solving the CDH problem when there exists an efficient algorithm that solves the decisional Diffie–Hellman problem on G. In other words, the gap Diffie–Hellman problem is the problem of finding [ab]P when given ([a]P, [b]P) and access to an oracle that returns 1 when given a triple ([α]P, [β]P, [αβ]P) and 0 otherwise. We assume that a and b are chosen uniformly at random from the set {1, ..., #G − 1}.
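To make these three problems concrete, here is a toy sketch in Python (our own illustration, not part of the text), using a tiny multiplicative group of prime order 11 in place of an elliptic curve group:

```python
import random

# Toy cyclic group of prime order q = 11: the subgroup of (Z/23Z)*
# generated by g = 4, written multiplicatively (so g**a plays the role
# of [a]P). Real schemes use elliptic curve groups; these parameters
# are hopelessly small and purely illustrative.
p, q, g = 23, 11, 4

def cdh_instance():
    """Sample a CDH instance ([a]P, [b]P) together with its answer [ab]P."""
    a = random.randrange(1, q)
    b = random.randrange(1, q)
    return pow(g, a, p), pow(g, b, p), pow(g, a * b % q, p)

def ddh_instance():
    """Sample a DDH instance ([a]P, [b]P, [c]P) and the hidden answer bit."""
    A, B, AB = cdh_instance()
    if random.random() < 0.5:
        return (A, B, AB), True                                   # c = ab
    return (A, B, pow(g, random.randrange(1, q), p)), False       # c random

# In a group this small, CDH is easy: brute-force the discrete log of
# [a]P, then compute [a]([b]P). This is exactly the computation assumed
# to be infeasible in a cryptographically sized group.
def brute_force_cdh(A, B):
    a = next(x for x in range(1, q) if pow(g, x, p) == A)
    return pow(B, a, p)

A, B, answer = cdh_instance()
assert brute_force_cdh(A, B) == answer
```

The brute-force discrete logarithm is the "cheat" that the CDH assumption rules out for real group sizes.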

Any algorithm that can solve the CDH problem on a group can also be used to solve the gap DH problem and the DDH problem. Hence it is better to try and show that a scheme can only be broken if the CDH problem is easy to solve, as this automatically tells us that the scheme can only be broken if

48 III. PROOFS OF SECURITY FOR ECIES

both the gap DH and DDH problems are easy to solve too. The assumption that the CDH problem is hard to solve is called the CDH assumption. The gap DH assumption and the DDH assumption are defined in the same way.

The difficulty of solving the CDH problem on an elliptic curve group is the same as the difficulty of breaking the ECDH protocol on that elliptic curve when the attacker has only seen one execution of that protocol. The ECDH protocol has been around for so long that it is trusted to be secure even though there exists no formal proof of security. There have been some results which suggest that the difficulty of breaking the ECDH protocol is related to the difficulty of solving the elliptic curve discrete logarithm problem, which is considered hard to solve in most elliptic curve groups (see [ECC, Chapter V]).

It is interesting that the security of almost all elliptic curve based cryptosystems has been based on the difficulty of breaking the Diffie–Hellman protocol, because the Diffie–Hellman protocol is not restricted to elliptic curve groups: it can be applied to any cyclic group (although the resulting protocol may not be secure). Until very recently there were no underlying mathematical problems that had been used as a basis for a proof of security and which are specific to elliptic curves alone. Recently, however, cryptosystems have been developed that are based on certain properties of the Weil and Tate pairings. These will be explained more thoroughly in Chapters IX and X.

III.1.3. Security for Symmetric Cryptography. One of the problems with developing a security proof for ECIES is that it relies on undefined symmetric encryption and MAC schemes. Obviously, if the symmetric encryption scheme is weak and it is possible to derive information about the message from its symmetric encryption, then no amount of elliptic curve cryptography is going to make the ECIES encryption of the message secure. That the MAC algorithm is of equal importance takes a little more thought and is best illustrated by means of an example.

Suppose that the symmetric encryption scheme used in an instantiation of ECIES is a Vernam cipher. In a Vernam cipher a fixed-length message is XORed with a key of equal size in order to create a ciphertext. Decryption of a ciphertext is given by once again XORing the ciphertext with the key to give the original message. Suppose further that we remove the MAC algorithm from the ECIES scheme. This scheme is now definitely insecure against CCA2 attacks. Suppose that the challenge ciphertext C∗ = (U∗, c∗) is the ECIES encryption of a message m∗. (How this message is chosen and whether the attacker is playing the IND or OW game is irrelevant here.) This means that c∗ = m∗ ⊕ k∗, where k∗ is some symmetric key derived from the secret key x and U∗. An attacker can recover k∗ by asking for the decryption of a ciphertext C = (U∗, c), where c ≠ c∗. If the decryption of C is m, then the attacker knows that k∗ = c ⊕ m, and he can easily recover m∗ from c∗ and k∗.
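The key-recovery attack just described fits in a few lines of Python, with the elliptic curve machinery abstracted away; the names and the fixed 16-byte message length are our own illustrative choices.

```python
import os

# Sketch of the CCA2 attack on MAC-less ECIES with a Vernam cipher.
# The "decryption oracle" below stands in for decryption queries of the
# form (U*, c): it always applies the fixed key k* derived from U*.
KEYLEN = 16

k_star = os.urandom(KEYLEN)              # key derived from x and U* (hidden)
xor = lambda x, y: bytes(a ^ b for a, b in zip(x, y))

def decrypt_oracle(c: bytes) -> bytes:
    """Decryption of (U*, c) in MAC-less ECIES: just c XOR k*."""
    return xor(c, k_star)

# Challenge ciphertext for an unknown message m*.
m_star = b"attack at dawn!!"
c_star = xor(m_star, k_star)

# The attacker queries the oracle on some c != c* (here the all-zero
# string, which differs from c* except with negligible probability) ...
c = bytes(KEYLEN)
m = decrypt_oracle(c)
# ... recovers k* = c XOR m, and hence the challenge message m*.
recovered_key = xor(c, m)
assert xor(c_star, recovered_key) == m_star
```

With the all-zero query the oracle literally hands back k∗, which is why the MAC's rejection of forged ciphertexts is essential.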


However, this attack is not possible when we use a good MAC algorithm within ECIES, as this makes it infeasible for an attacker to find the MAC r of the symmetric ciphertext c without knowing the MAC key. Hence the attacker cannot find a ciphertext (U∗, c, r) that will be decrypted by the decryption algorithm and therefore cannot recover k∗. The use of a MAC scheme essentially means that the symmetric encryption algorithm only needs to resist passive attacks, where the attacker does not have the ability to encrypt and decrypt messages himself. It allows symmetric ciphers that are normally considered too weak for practical purposes, such as the Vernam cipher, to be used within ECIES (although readers who are considering implementing ECIES with a Vernam cipher are referred to Section III.3.3 for a cautionary tale).

Formally we define a symmetric encryption scheme to be a pair of deterministic algorithms (Enc, Dec) with the property that for any message m ∈ M and any bit-string K of a predetermined length we have

Dec(Enc(m, K), K) = m .
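As a minimal concrete instance of such a pair (a sketch of ours, assuming messages and keys of the same fixed length in bytes rather than bits), the Vernam cipher described above satisfies the defining property because XOR is an involution:

```python
import os

# Vernam cipher as an (Enc, Dec) pair: Enc XORs a fixed-length message
# with a key of equal length, and Dec is the very same operation.
def Enc(m: bytes, K: bytes) -> bytes:
    assert len(m) == len(K)
    return bytes(a ^ b for a, b in zip(m, K))

Dec = Enc  # XORing twice with the same key restores the message

K = os.urandom(8)
m = b"8 bytes!"
assert Dec(Enc(m, K), K) == m   # the defining property holds
```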

In defining the security of a symmetric encryption scheme, we consider its resistance to attackers A = (A1, A2) that are playing a game which is very similar to the IND game we defined for attackers against asymmetric encryption schemes. This game consists of four major steps:

1. A challenger generates a key K totally at random from the set of all possible keys.
2. The attacker runs A1. It returns two messages m0 and m1 of equal size, as well as some state information s.
3. The challenger chooses a bit σ ∈ {0, 1} uniformly at random. It computes the challenge ciphertext C∗ = Enc(mσ, K).
4. The attacker runs A2 on the input (C∗, s). It returns a guess σ′ for σ.

The attacker wins the game if σ′ = σ. The advantage of an attacker in playing this game is defined to be

|Pr[σ′ = σ] − 1/2| .

This definition is taken from [93] and is a weaker form of the "Find-then-Guess" notion of security contained in [19]. Note that the attacker is not allowed to request the encryption or decryption of any message – this is called a passive attack.

It is fairly easy to see that the Vernam cipher satisfies this definition of security; indeed, in this model the Vernam cipher is perfect.
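A quick Monte Carlo simulation of the game illustrates why: since C∗ = mσ ⊕ K with K uniform, the challenge ciphertext is uniformly distributed whatever σ is, so any attacker's success rate hovers around 1/2. The particular attacker below is our own arbitrary choice.

```python
import os, random

# Find-then-Guess game for the Vernam cipher, run many times. Because
# C* = m_sigma XOR K with K uniform, C* is uniform for either sigma,
# so the sample attacker wins about half the time (advantage ~ 0).
xor = lambda x, y: bytes(a ^ b for a, b in zip(x, y))

def play_once() -> bool:
    K = os.urandom(4)                      # step 1: random key
    m0, m1 = b"\x00" * 4, b"\xff" * 4      # step 2: A1's two messages
    sigma = random.randrange(2)            # step 3: hidden challenge bit
    c_star = xor((m0, m1)[sigma], K)
    guess = c_star[0] >> 7                 # step 4: A2 guesses from one bit of C*
    return guess == sigma

trials = 20000
rate = sum(play_once() for _ in range(trials)) / trials
assert abs(rate - 0.5) < 0.02              # statistically indistinguishable from guessing
```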

The formal definition of the security of a MAC scheme is more like the security definitions associated with a digital signature scheme. A MAC scheme is a deterministic algorithm MAC that, for any message m and any fixed-length bit-string K, produces a fixed-length output r = MAC(m, K). For our purposes an attacker is a probabilistic algorithm A that attempts to find


a new MAC for any message. We test an attacker's strength by computing its success probability in playing the following game:

1. A challenger generates a key K uniformly at random from the set of all possible keys.
2. The attacker runs A. It may, at any time, query an oracle with a message m. If so, the challenger will compute r = MAC(m, K) and return r to the attacker.
3. A eventually returns a pair (m′, r′).

The attacker wins the game if A has not queried the oracle to find the MAC of m′, i.e., A has not submitted m′ to the oracle in step 2, and r′ = MAC(m′, K).
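This game is easy to phrase as code. The sketch below instantiates the MAC (an assumption of ours, not a choice made in the text) with HMAC-SHA256 from the Python standard library:

```python
import hashlib, hmac, os

# The MAC forgery game with HMAC-SHA256 standing in for MAC(m, K).
# The challenger holds a random key and answers oracle queries; the
# attacker wins only with a valid tag on a never-queried message.
class Challenger:
    def __init__(self):
        self.key = os.urandom(32)        # step 1: random key
        self.queried = set()

    def mac_oracle(self, m: bytes) -> bytes:
        self.queried.add(m)              # step 2: record oracle queries
        return hmac.new(self.key, m, hashlib.sha256).digest()

    def attacker_wins(self, m: bytes, r: bytes) -> bool:
        # step 3: (m', r') wins iff m' is fresh and r' = MAC(m', K)
        expected = hmac.new(self.key, m, hashlib.sha256).digest()
        return m not in self.queried and hmac.compare_digest(r, expected)

ch = Challenger()
tag = ch.mac_oracle(b"queried message")
assert not ch.attacker_wins(b"queried message", tag)           # a replay is not a forgery
assert not ch.attacker_wins(b"fresh message", os.urandom(32))  # a blind guess (almost surely) fails
```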

III.2. Security Proofs for ECIES

It has become generally accepted that a general-purpose public-key encryption scheme should be resistant to attacks made by IND-CCA2 attackers, i.e., attackers that are playing the indistinguishability game using adaptive chosen ciphertext attacks, so we will only consider the security of ECIES against this kind of attack.

There has never been a simple proof of the security of ECIES. Given that it uses undefined symmetric encryption and MAC schemes, not to mention an undefined key derivation function (the security of which we have not yet even considered), it is not really surprising that no simple proof has been found. In this section we will sketch three proofs of security for ECIES; each one is forced to make some assumption about the way ECIES works in order to make the security analysis easier.

III.2.1. Using a Non-Standard Assumption. The original proof of security for ECIES, which was given by Abdalla, Bellare and Rogaway [1], was based on a non-standard underlying mathematical problem. By this we mean that it was not based on the difficulty of solving either the CDH problem, the DDH problem or the gap DH problem. The authors instead chose to make an assumption based not only on the difficulty of breaking the Diffie–Hellman protocol but also on the nature of the key derivation function used. They called this the hash Diffie–Hellman problem.

Definition III.6. Let G be a cyclic group with prime order #G (and with the group action written additively), and let P be a generator of G. Furthermore, let KD be a key derivation function and let l be a fixed parameter.

The hash Diffie–Hellman problem is the problem of deciding whether α = KD([ab]P, l) when given ([a]P, [b]P, α). We assume that a and b are chosen uniformly at random from the set {1, ..., #G − 1} and that α is either equal to KD([ab]P, l) (with probability 1/2) or chosen uniformly at random from the set of all l-bit strings (with probability 1/2). To help the attacker, it will be given access to a hash Diffie–Hellman oracle: an oracle that, when given


a group element U, will return KD([b]U, l). The attacker will not be allowed to query this oracle with the input [a]P.

The advantage that an algorithm A has in solving the hash Diffie–Hellman problem is equal to

|Pr[A correctly solves the hash Diffie–Hellman problem] − 1/2| .
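To make the objects in this definition concrete, here is a toy instantiation of ours (not the book's): a subgroup of (Z/23Z)∗ of prime order 11 written multiplicatively, with KD built from truncated SHA-256. These parameters are hopelessly insecure and purely illustrative.

```python
import hashlib, os, random

# Toy group: the order-11 subgroup of (Z/23Z)* generated by g = 4,
# standing in for an elliptic curve group. KD hashes a group element
# down to l bits; a real scheme would use a standardized KDF.
p, q, g = 23, 11, 4
l = 64  # desired key length in bits (a multiple of 8 here)

def KD(point: int, l: int) -> bytes:
    return hashlib.sha256(str(point).encode()).digest()[: l // 8]

def hash_dh_instance():
    """Return ([a]P, [b]P, alpha) plus the hidden bit: is alpha real?"""
    a, b = random.randrange(1, q), random.randrange(1, q)
    A, B = pow(g, a, p), pow(g, b, p)
    real = random.random() < 0.5
    alpha = KD(pow(g, a * b % q, p), l) if real else os.urandom(l // 8)
    return (A, B, alpha), real

(A, B, alpha), real = hash_dh_instance()
assert len(alpha) == l // 8
```

The distinguishing task is to recover the hidden bit `real` from (A, B, alpha) alone, possibly with the help of the hash Diffie–Hellman oracle.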

This definition is fairly sneaky. It is not actually a single definition but a family of definitions that depend upon your choice of a key derivation function. The hash Diffie–Hellman problem may indeed be hard to solve for many key derivation functions (and choices of l), but it is also possible that, for a particular choice of key derivation function and l, the hash Diffie–Hellman problem is easy to solve. In many ways, adopting the hash Diffie–Hellman problem as the basis for a security proof only changes the problem from proving the security of the cryptosystem to proving that the hash Diffie–Hellman problem is hard to solve for your chosen key derivation function.

The security proof for ECIES in this model shows the following:

Result III.7. Suppose A is an IND-CCA2 attacker that breaks ECIES with a "significant" advantage, runs in a "reasonable" amount of time and only makes a "reasonable" number of queries to the decryption oracle. Then either

• there exists an algorithm B that solves the hash Diffie–Hellman problem with a significant advantage, runs in a reasonable amount of time and only makes a reasonable number of queries to the hash Diffie–Hellman oracle;
• there exists an attacker C = (C1, C2) that breaks the symmetric cipher with a significant advantage and runs in a reasonable amount of time; or
• there exists an attacker F that breaks the MAC scheme with a significant probability and runs in a reasonable amount of time.

The terms "significant" and "reasonable" all have precise technical definitions, but it will be unnecessary to consider these in detail. It will suffice to think of "significant" as meaning "large enough to be useful" and "reasonable" as "not too large".

We will now sketch the proof.

Suppose we have an IND-CCA2 attacker A = (A1, A2) that breaks ECIES with a significant advantage, runs in a reasonable amount of time and only makes a reasonable number of queries to the decryption oracle.

We begin by defining an algorithm B that attempts to solve the hash Diffie–Hellman problem. At the start of its execution, B is presented with a triple ([a]P, [b]P, α). The algorithm B runs in several stages:

1. B sets the public key Y = [b]P and runs A1 on the input Y until it outputs two messages, m0 and m1, and some state information s.
2. B picks a bit σ ∈ {0, 1} uniformly at random and sets (k1‖k2) = α. It forms the challenge ciphertext C∗ = ([a]P, c∗, r∗), where c∗ = Enc(mσ, k1) and r∗ = MAC(c∗, k2). Note that if α = KD([ab]P, l), then C∗ is a correct encryption of the message mσ.
3. Next, B runs A2 on the input (C∗, Y, s). A2 returns a guess σ′ for σ.
4. If σ′ = σ, then B returns true (i.e., that α = KD([ab]P, l)); otherwise B returns false.
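The four stages above can be rendered as a short sketch (ours, hypothetical): group elements are opaque values, and Enc, MAC and the attacker stages are passed in as callables, so the code shows only the wiring of the reduction.

```python
import random

def B(aP, bP, alpha, A1, A2, Enc, MAC):
    """Guess whether alpha == KD([ab]P, l), given an attacker (A1, A2).

    aP and bP are opaque group elements; alpha is a byte string that
    splits into the two symmetric keys (k1 || k2).
    """
    Y = bP                                    # stage 1: public key
    m0, m1, s = A1(Y)
    sigma = random.randrange(2)               # stage 2: pick the bit
    half = len(alpha) // 2
    k1, k2 = alpha[:half], alpha[half:]
    c_star = Enc((m0, m1)[sigma], k1)
    r_star = MAC(c_star, k2)
    C_star = (aP, c_star, r_star)
    sigma_prime = A2(C_star, Y, s)            # stage 3: run A2
    return sigma_prime == sigma               # stage 4: true iff A guessed right
```

If α really is KD([ab]P, l), then A sees a genuine ECIES challenge and guesses σ with its full advantage, so B answers true noticeably more often in that case.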

The idea behind the proof is that if α = KD([ab]P, l), then the attacker A will be attacking a valid instance of ECIES and so will have a significant advantage in guessing σ correctly. If α ≠ KD([ab]P, l), then α is completely random. This means that the message mσ has been encrypted, and the MAC of the ciphertext computed, with random keys. If this is the case, then one of two things will happen: either (1) the attacker A will still be able to break the scheme and recover σ with a significant probability, in which case the attacker doesn't really care about the keys and must be attacking the symmetric part of the scheme, or (2) the attacker can no longer break the scheme.

Suppose that the attacker A's advantage in breaking the scheme is significantly reduced when α ≠ KD([ab]P, l). In this case A is less likely to guess σ correctly if random keys are used to encrypt mσ and to create the MAC of the ciphertext, which in turn means that B is more likely to output true when α = KD([ab]P, l) than when it is not. In other words, B has a significant advantage in solving the hash Diffie–Hellman problem. So, if we assume that the hash Diffie–Hellman problem is hard to solve, then any attacker that is trying to break the scheme must still be able to do so when the symmetric encryption and MAC are performed using random keys.

The only problem that remains with the algorithm B is that it has to be able to respond to the decryption queries that A makes. Normally these would be answered by a decryption oracle, but since B has no access to a decryption oracle, it must answer these queries itself. Fortunately this is fairly easy to do. If A asks for the decryption of a ciphertext (U, c, r) where U ≠ [a]P, then we may find the relevant symmetric keys (k1‖k2) by querying the hash Diffie–Hellman oracle on the input U; then we may decrypt c and r as normal. If A asks for the decryption of a ciphertext ([a]P, c, r) with (c, r) ≠ (c∗, r∗), then we can use the symmetric keys (k1‖k2) = α to decrypt c and r as normal. Of course there is a possibility that A1 might request the decryption of the challenge ciphertext C∗ (remember that A2 is not allowed to request the decryption of C∗), but, in order to do this, A1 would have to guess that the challenge ciphertext would contain [a]P. Since [a]P is chosen at random from all the non-identity points of ⟨P⟩, and because A is only allowed to make a reasonable number of decryption queries, this will happen with such a small probability that we may cheerfully ignore it.

As we mentioned earlier, it is, of course, possible that the attacker A gained his advantage in breaking ECIES not by determining some information about the symmetric keys but by attacking the symmetric part of the algorithm without knowing those keys. In that case the advantage of the


algorithm B would not be significant. However, then the attacker would still be able to break the scheme if the keys used by the symmetric encryption algorithm and the MAC algorithm were completely random.

The idea that we can replace the key used to encrypt the challenge ciphertext with a random key means that we can construct an algorithm C that breaks the symmetric cipher. Remember that here C = (C1, C2) is a two-stage algorithm that is trying to decide which of two messages (m0 and m1) a given symmetric ciphertext, produced under an unknown, randomly selected symmetric key, is an encryption of (see Section III.1.3). Algorithm C works by using the symmetric cipher as part of ECIES and using A to break it. The first algorithm C1 works as follows:

1. C1 picks a private key x uniformly at random from the set {1, ..., q − 1} and sets the public key Y to be [x]P.
2. C1 runs A1 on the input Y. A1 will return two messages m0 and m1, as well as some state information s.
3. C1 returns the two messages, m0 and m1, and some state information s′ = (s, x).

The challenger then picks a bit σ ∈ {0, 1} uniformly at random and computes c∗ = Enc(mσ, k1), where k1 is some random (unknown) secret key that the challenger chose at the start. The attacker then runs C2 on the input (c∗, s′). Algorithm C2 runs as follows:

1. C2 takes s′ and recovers the secret key x and A's state information s.
2. C2 chooses a random point U∗ in ⟨P⟩.
3. C2 chooses a random MAC key k2 and computes r∗ = MAC(c∗, k2).
4. The challenge ciphertext for A is set to be C∗ = (U∗, c∗, r∗).
5. C2 runs A2 on the input (C∗, Y, s). A2 outputs a guess σ′ for σ.
6. C2 outputs σ′.

Algorithm C makes the problem of breaking the symmetric cipher look exactly like the problem of breaking ECIES, except for the fact that the challenge ciphertext was computed using randomly generated symmetric keys rather than from the keys given by (k1‖k2) = KD([x]U∗, l). We have already shown that if B's advantage in solving the hash Diffie–Hellman problem is not significant, then A should still be able to break ECIES in this case. Hence C should have a significant advantage in breaking the symmetric cipher.

Again the only problem we haven't explained is how to respond to the decryption requests A makes. Once again, this is fairly simple. Since C knows the private key x, it can easily decrypt any ciphertext (U, c, r) with U ≠ U∗ in the normal way. If A requests a decryption of the form (U∗, c, r), then C simply responds with "invalid ciphertext". It does this because it does not wish to confuse A. All decryption requests for ciphertexts of the form (U∗, c, r) should use the same symmetric key to decrypt c as was used to encrypt the challenge ciphertext c∗ – but C does not know what this key is!


Hence it is easier for C to just ignore any requests to decrypt ciphertexts of the form (U∗, c, r) than to try and guess what the answer should be.

Obviously this means there is a chance that C will respond to a ciphertext query (U∗, c, r) by saying "invalid ciphertext" when the ciphertext is valid and should be decrypted. However, in that case we must have that c ≠ c∗, as otherwise r = r∗ and A would be querying the decryption oracle on the challenge ciphertext, which is not allowed; hence A has succeeded in forging the MAC pair (c, r). We can use this to build an attacker F that breaks the MAC scheme. If F does not have a significant success probability, then we know that all of the decryption oracle responses that A is given by C are correct and so C should have a significant advantage in breaking the symmetric scheme.

III.2.2. Using an Idealized Key Derivation Function. If one is uneasy about the validity of a security proof based on a non-standard and unstudied problem (like the hash Diffie–Hellman problem), then one can find a proof of security for ECIES based on the gap Diffie–Hellman problem if one is prepared to accept the use of the random oracle methodology [24].

The random oracle methodology is a model in which the key derivation function is modelled as being perfect, i.e., as a completely random function. The attacker, who needs to be able to compute the value of the key derivation function, is given access to this random function by means of an oracle that will evaluate it for him. Unfortunately this model has been shown to have some theoretical weaknesses [54]. Nevertheless, security proofs constructed using the random oracle methodology (in the "random oracle model") are still considered a very good heuristic guide to the security of a cryptosystem.

The security proof for ECIES in the random oracle model shows the following:

Result III.8. Suppose A is an IND-CCA2 attacker that breaks ECIES with a significant advantage in the random oracle model, and that A runs in a reasonable amount of time and only makes a reasonable number of queries to the decryption oracle. Then either

• there exists an algorithm B that solves the gap Diffie–Hellman problem with a significant probability, runs in a reasonable amount of time and only makes a reasonable number of queries to the decisional Diffie–Hellman oracle;
• there exists an attacker C = (C1, C2) that breaks the symmetric cipher with a significant advantage and runs in a reasonable amount of time; or
• there exists an attacker F that breaks the MAC scheme with a significant probability and runs in a reasonable amount of time.

For the most part the proof is very similar to the proof given in Section III.2.1. The only difference is in how one constructs the algorithm B. We now sketch the proof.


Suppose we have an IND-CCA2 attacker A = (A1, A2) that breaks ECIES with a significant advantage, runs in a reasonable amount of time and only makes a reasonable number of decryption queries.

At the start of its execution, the algorithm B, which is attempting to solve the gap Diffie–Hellman problem, is presented with the pair ([a]P, [b]P). The algorithm B runs in several stages:

1. B sets the public key Y to be [b]P and runs A1 on the input Y until it outputs two messages, m0 and m1, plus some state information s.
2. B picks a bit σ uniformly at random from the set {0, 1} and symmetric keys k1 and k2 uniformly at random from the appropriate key spaces. It forms the challenge ciphertext C∗ = ([a]P, c∗, r∗), where c∗ = Enc(mσ, k1) and r∗ = MAC(c∗, k2).
3. Next, B runs A2 on the input (C∗, Y, s). It returns a guess σ′ for σ.
4. Since we are using the random oracle model, the attacker A cannot evaluate the key derivation function itself and must instead ask B to evaluate it. Hence B knows all the elliptic curve points on which A has queried the key derivation function. B searches through all these points until it finds one, Q say, such that Q = [ab]P. Note that B can use the DDH oracle to check this, as ([a]P, [b]P, Q) will be a correct solution to the DDH problem.
5. If such a point Q exists, then B returns Q; otherwise, B returns a point picked at random.

The idea behind the proof is that if A has not queried the key derivation function on [ab]P, then it has no clue about the symmetric keys used to encrypt the challenge ciphertext. Because the key derivation function is totally random, querying the key derivation function on other points will not help the attacker gain any information about KD([ab]P, l). Hence if the attacker A has not queried the key derivation function on [ab]P, then we might as well be using completely random symmetric keys (which, in fact, B is doing) and the attack of A will work just as well. On the other hand, if the attacker A does query the key derivation function on the input [ab]P, then B will have solved the gap Diffie–Hellman problem.

This essentially puts us in the same situation as last time: either B has a significant probability of solving the gap Diffie–Hellman problem or A's attack will work even if we use random keys. The rest of the proof (constructing C and F) is exactly as in the previous section.

Unfortunately we are not quite finished. The devil of this proof is in the details of how B answers A's oracle queries. Remember that A can now ask two different types of oracle queries: it can request the decryption of some ciphertext, or it can request that the key derivation function be evaluated on some input. Furthermore, the two different types of query have to be consistent with each other: if A requests the decryption of a ciphertext (U, c, r), then we must make sure that the symmetric keys we use to decrypt c and r are


the same symmetric keys that A would get if it queried the key derivation oracle on [b]U (and don't forget that B doesn't know the value of b!).

We get around this by making a long list of the queries that A makes. Each entry on the list is a triple (U, T, K), where U and T are elliptic curve points:

• U is the elliptic curve point contained in the ciphertext.
• T is the input to the key derivation function, so T should equal [b]U.
• K is the output of the key derivation function, i.e., the keys to be used with the symmetric encryption and MAC schemes.

If A requests the decryption of a ciphertext (U, c, r), then B responds in the following way:

1. If there is an entry of the form (U, T, K) or (U, −, K) on the query list, then we use the symmetric keys K = (k1‖k2) to decrypt c and r and return the correct message.
2. Otherwise we check all the entries of the form (−, T, K) to see if T = [b]U. We can use the DDH oracle to do this, since if T = [b]U, then (U, Y, T) is a solution to the DDH problem. If such an entry (−, T, K) is found, then we replace it with the entry (U, T, K) and decrypt c and r using the symmetric keys K = (k1‖k2).
3. If neither of the above two approaches works, then we have no information at all about the symmetric keys that should be used. Therefore we can randomly generate K = (k1‖k2) (as the keys are the output of a totally random function) and use this to decrypt c and r. We then add (U, −, K) to the query list.

Similarly, if A requests that the key derivation function be evaluated on the input T, then B responds in the following way:

1. If there is an entry of the form (U, T, K) or (−, T, K) on the query list, then we return K.
2. Otherwise we check all entries of the form (U, −, K) to see if T = [b]U; as before, we can use the DDH oracle to do this. If such an entry is found, then we replace it with the entry (U, T, K) and return K.
3. If neither of the above two approaches works, then we randomly generate K, add (−, T, K) to the query list and return K.
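The bookkeeping in these two procedures can be captured in a short class. In this sketch (our own illustrative rendering) group elements are opaque hashable values, and the callable ddh(U, T) stands in for the DDH-oracle test "T = [b]U".

```python
import os

# B's query list: mutable triples [U, T, K], with U or T possibly None
# (written '-' in the text). Each method walks the three cases in order.
class QueryList:
    def __init__(self, ddh, keylen=32):
        self.entries = []
        self.ddh = ddh
        self.keylen = keylen

    def keys_for_decryption(self, U):
        for e in self.entries:                # case 1: (U, T, K) or (U, -, K)
            if e[0] == U:
                return e[2]
        for e in self.entries:                # case 2: (-, T, K) with T = [b]U
            if e[0] is None and self.ddh(U, e[1]):
                e[0] = U                      # complete the entry to (U, T, K)
                return e[2]
        K = os.urandom(self.keylen)           # case 3: fresh random keys
        self.entries.append([U, None, K])
        return K

    def kd_oracle(self, T):
        for e in self.entries:                # case 1: (U, T, K) or (-, T, K)
            if e[1] == T:
                return e[2]
        for e in self.entries:                # case 2: (U, -, K) with T = [b]U
            if e[1] is None and self.ddh(e[0], T):
                e[1] = T
                return e[2]
        K = os.urandom(self.keylen)           # case 3
        self.entries.append([None, T, K])
        return K
```

For example, with a toy oracle `ddh = lambda U, T: T == (7 * U) % 101` (secret b = 7, "points" being integers mod 101), a decryption query on U = 5 and a later KD query on T = 35 return the same keys, which is precisely the consistency the simulation needs.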

Essentially what we have done here is to show that if the gap Diffie–Hellman problem is difficult to solve and the key derivation function can be modelled as a random oracle, then the hash Diffie–Hellman problem is difficult to solve too. This is why this proof is so similar to the last one.

III.2.3. Using an Idealized Group. Whilst the previous two proofs made assumptions about the properties of the key derivation function in order to simplify the security analysis of ECIES, the proof we will discuss in this section makes assumptions about the group over which ECIES is defined. We


elements as bit-strings has no relation to the structure of the group. This

means that any attacker who wishes to break the scheme cannot take advan-

tage of any special arithmetic properties of the group on which the scheme

is deļ¬ned. This idea is the same as the one used to prove the security of

ECDSA in Section II.4.4; however, the way in which we use this idea will be

slightly diļ¬erent.

Specifically, we will be using the version of the generic group model proposed by Shoup [303]. Here we assume that the group elements are not represented as elliptic curve points but as randomly generated bit-strings of a given length. Since the group elements are now represented by strings of random bits, an attacker cannot take advantage of any of the usual properties of elliptic curves when attacking the scheme. Unfortunately the attacker cannot add or subtract group elements either, so we are forced to give him access to an oracle that will perform these operations for him.

This may all seem a bit contradictory. After all, ECIES is explicitly defined over an elliptic curve group, and an attack that takes advantage of the arithmetic properties of the elliptic curve group to break ECIES is still a valid attack against the cryptosystem. The aim of a proof of security in the generic group model is to provide evidence that any attack against the scheme would have to exploit some special feature of elliptic curve groups. Since finding such a special feature would be a major step forward in elliptic curve cryptography, we can have some measure of assurance in the security of ECIES. Unfortunately it has been shown that, as with the random oracle model, the generic group model has some theoretical weaknesses [100]. Nevertheless, just as with the random oracle model, security proofs constructed in this model are still considered a very good heuristic guide to the security of a cryptosystem.

The proof of security of ECIES in the generic group model was given by Smart [310] and shows the following:

Result III.9. Suppose A is an IND-CCA2 attacker that breaks ECIES with a significant advantage in the generic group model, and that A runs in a reasonable amount of time and only makes a reasonable number of queries to the decryption oracle. Then either

• there exists an algorithm B that solves the decisional Diffie–Hellman problem with a significant advantage and runs in a reasonable amount of time;
• there exists an attacker C = (C1, C2) that breaks the symmetric cipher with a significant advantage and runs in a reasonable amount of time; or
• there exists an attacker F that breaks the MAC scheme with a significant probability and runs in a reasonable amount of time.

This result is strengthened by the following result of Shoup [303]:


Result III.10. In the generic group model, there exist no algorithms that solve the decisional Diffie–Hellman problem on a cyclic group of large prime order that have both a significant advantage and run in a reasonable amount of time.

The proof of Result III.9 is very similar to the previous proofs and is therefore omitted.
