From: Maaartin on
On Apr 28, 5:11 pm, bmearns <mearn...(a)gmail.com> wrote:
> It does, thanks. It makes sense now and seems pretty powerful. Maybe
> I'm being too paranoid/romantic about the dealing issue; I just don't
> like having to put cards down. Even face down, it's a potential
> security issue (easier for someone to grab, harder for you to pick up
> and shuffle if disturbed), but there's always the case where you don't
> have a convenient work surface. With a two-part deal you can do it in-
> hand, but it's a bit wonky. Basically you would transfer the deck from
> one hand to the other. For a (1,2) deal, you would deal 1 card from
> the left hand to the top of the deck in your right hand, then 2 from
> the left hand to the bottom of the right-hand deck, then back to the
> top with 1, etc.

When dealing on the desk, a deal(2, 3) or the like is much faster than
a deal(1, 2). How is it in hand?

I've got an idea: When dealing, you must go through all the cards.
When searching for an instance of a card (or for both instances), you go
through many of them. So we could try to join the two steps, e.g., in
something like this:

1. Take 2 (variant: 3) cards from the top and look at them.
2. If the searched card is not among them, put them on the bottom
(variant: onto another deck) and go to 1.
3. Otherwise, start another deck. This way you could perform something
like the double or the triple cut, too (a rough sketch follows below).
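In Python, roughly (a sketch only: this is the two-card, second-deck
variant, and what exactly to do once the searched card turns up is my
own guess, so the split below is just one possible reading):

############################
# Deal in groups of 2 (variant: 3) onto a second pile while searching
# for `target`; the group containing the searched card starts a new pile.
def search_and_deal(deck, target, group=2):
    before = []   # groups dealt before the searched card appeared
    after = []    # the group containing it, and everything dealt afterwards
    current = before
    while deck:
        chunk, deck = deck[:group], deck[group:]
        if target in chunk:
            current = after        # step 3: start another deck here
        current += chunk
    return before, after           # caller recombines, e.g. as after + before
############################

For example, search_and_deal(list(range(26)), 7) deals a 26-card deck in
pairs and splits it at the pair holding card 7.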

> I've got another idea for dealing in-hand, but I'm not sure how it
> compares to the existing method. It's really just a repeated
> application of Maaartin's bury technique, with a minor tweak. So you
> look at the top card, convert it to a number [1-13] (just the card's
> numerical value, 11, 12, 13 for J, Q, K), and count that many cards
> down. Take those top cards, /flip them over/ and move them to the
> bottom of the deck. Repeat until you hit the first upside down card,
> then flip the deck back over.
>
> In python (with apologies to FDR, and using numerical decks with ace
> represented by 0):

What is FDR?

> ############################
> def new_deal(deck):
>     results = []
>     while len(deck) > 0:
>         n = (deck[0] % 13) + 1
>         top = deck[0:n]
>         results = top + results
>         deck = deck[n:]
>     return results
> ############################

This transformation (unlike bury) is not bijective, as the position of
the originally top card depends on its value. This may be a good thing
(the Merkle-Damgård step is not bijective either), but it may lead to
information loss, which may be substantial (since we have a
relatively small state and this operation gets repeated many times).
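A quick way to see the information loss concretely (this assumes the
new_deal function quoted above is in scope; the tiny 5-card deck is only
to keep the check fast):

############################
from itertools import permutations

# Count how many distinct results new_deal produces over all arrangements
# of a 5-card deck.  Fewer distinct results than inputs means lost bits.
# One concrete collision: [2, 3, 4, 0, 1] and [2, 3, 4, 1, 0] both map to
# [1, 0, 2, 3, 4].
decks = list(permutations(range(5)))
images = {tuple(new_deal(list(d))) for d in decks}
print(len(decks), "decks ->", len(images), "distinct results")
############################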

> It's not nearly as thorough of a deal as the other routine, but it is
> pretty fast and easy to do (I've actually tested it with cards, this
> time). I think if we have a strong output function like Maaartin said,
> this might be sufficient.

It can be done differently, using another transformation to numbers:
clubs=1, diamonds=2, hearts=3, spades=4. This destroys the sequences
more.
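A sketch of that variant, reusing the structure of new_deal above (the
card encoding suit*13 + rank for a 52-card deck is only my assumption
for illustration):

############################
# Same deal as new_deal, but the count comes from the suit of the top card
# (clubs=1, diamonds=2, hearts=3, spades=4) instead of its rank, so the
# moved blocks are at most 4 cards long.
def new_deal_by_suit(deck):
    results = []
    while deck:
        n = deck[0] // 13 + 1      # suit of the top card as a number 1..4
        results = deck[:n] + results
        deck = deck[n:]
    return results
############################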

> A sample run with an ordered deck:
> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
> Becomes:
> 24 25 18 19 20 21 22 23 15 16 17 7 8 9 10 11 12 13 14 3 4 5 6 1 2 0
>
> So you can see it keeps a lot of sequences, much worse than Maaartin's
> deal function. But with some other strong components, it might be
> sufficient.

I agree.

> On Apr 27, 10:13 pm, Maaartin <grajc...(a)seznam.cz> wrote:
> > So why not
> > [ t ] B [ i ] A [ h ]
> > ? It keeps the distance between the two instances, but this distance
> > changes every time, when exactly one instance of another letter falls
> > in between them, which is quite often.
>
> Yes, as long as there is some other operation in between before the next
> time it is performed (as there currently is, the bury operation), then
> that should be better. If there was no operation in between, then
> feeding the same value twice in a row would just revert it back to
> where it was.

I think we should combine a lot of simple operations, so there always
will be one in between in all my algorithms.

> > > If we use your three part algorithm (shuffle, bury,
> > > shuffle), and then use a variation of the bury routine to choose the
> > > outputs as we previously discussed, I expect we can get good results,
> > > though obviously we'll need to test it to be sure.
>
> > Maybe we could. Having a lot of pairs unbroken is no problem as long
> > as we use a proper output routine.
>
> This is the direction I'm leaning towards right now. If we can make
> each feed nice and quick and simple, and then reinforce it with a good
> output function, I think it would make the whole system much more
> usable.

So do I. I'll have another look at my output routine soon.

> One thing that might be worth pointing out is that what we have
> right now (minus the deal operation) is pretty similar to Schneier's
> Solitaire cipher. His does a triple cut around a pair of jokers, then
> a count cut based on the bottom card. We do a triple cut (or some
> variation thereof) around the input value, and then a count cut based
> on the top card (then another triple cut). Solitaire relies on the
> deck being initially arranged into a random permutation,

The initial arrangement is his key, isn't it? I already proposed the
initial arrangement as a (the weaker) part of our key.

There's one very nice thing about the triple cut: You can't overlook
an instance since you must find both of them.

> J.D., what testing are you doing/planning? You're not trying to run
> these algorithms by hand yet, are you? To get worthwhile test results
> we need to run a ton of trials and collect statistics on each one. I
> think this is much better done by computer.

AFAIK, he's going to evaluate the speed and the error-rate, which must
be done by hand. He's also going to do stats, which must be done by
computer.

> I'm thinking of moving back away from the two-instances concept. It
> works better for mixing, but it also removes a bit from every input
> (since the results are the same for two different inputs). With a full
> 52-card deck it's not such a big deal, but I'd like to leave the
> algorithm open to the possibility of using smaller decks. (For one
> thing, arranging 52 cards in order takes a little more than twice as
> long as arranging 26 cards in order).

My algorithm works with any number of instances.

> Obviously everything will be
> weakened using a smaller deck, there's no getting around that, but I
> think a 26-card deck should still provide reasonable security. If we
> use the two instance concept, that's only 13 different inputs, so only
> 3.7 bits per input. At that rate, a 12-value input (i.e., the master
> password) only gives about 44 bits, which I think is a bit weak for
> the master password. With 26 inputs, a 12-value input gives 56 bits,
> which is quite a bit more reasonable.
>
> Is there an easy way we can use two-instances, but have them not be
> equivalent? In Solitaire, you move the A joker by one card, and the B
> joker by two cards, before doing the triple cut, but I'm not crazy
> about that. I've run Solitaire a few times and that part's a bit of a
> burden, and slightly error prone (especially when the jokers are near
> each other or at the end of the deck). One idea is that if you find
> the B instance (i.e., the "complementary" card)

You mean the other card of the same rank and different color in case
of 26 cards, right?

> first, just move it
> and the cards before it to the end of the deck, like this:
> [ h ] B [ i ] A [ t ] --> [ i ] A [ t ] B [ h ]
> Then you can just continue through from there with the regular shuffle
> operation (what ever it may be, currently I think we left off with
> Maaartin's triple cut).
>
> So let's say the deck looks like this:
> [ h ] X [ i ] Y [ t ]
> Where X and Y are complementary cards.
>
> If we feed in X, we just apply the triple cut as is, so we get:
> [ t ] Y [ i ] X [ h ]
>
> But if we feed in Y, we would find the X first, and move it to the
> end:
> [ i ] Y [ t ] X [ h ]
> Then apply the triple cut from there and end up with
> [ h ] X [ t ] Y [ i ]
>
> So complementary cards are no longer equivalent, but we can still use
> the triple cut which I think is better than a single cut.

But is it also better than two single cuts?

> The one
> thing I notice is that the second case (where the complement is
> found first) leaves the head and the input card ( [ h ] X ) in
> position. Assuming that happens 50% of the time, and in conjunction
> with our bury and second shuffle, maybe that's ok. If either of you
> have a better way to handle this, I'm all ears.

There's only one "ideal" permutation: t i h. All others either leave
something in place or leave neighbouring parts together. Maybe we could
always do the "ideal" operation, and choose the following one according
to which instance was found first. The two operations to choose from
could be my original bury and the shape-based bury from this posting.

> Alright, here's a description of my current preference for the
> algorithm. I'm doing away with the deal until the output phase.
>
> In a deck of length 2K, feed input value X as follows:
>
> 1) Let Y be the complement of X. If X >= K, then Y = X % K. Otherwise,
> Y = K+X. (So if you're using two different colors/suits, it's just the
> same rank card in the opposite color).

You mean abs(X-Y) = 13, right?

> 2) Perform a shuffle/triple-cut:
> 2a) Search through the deck from the top, looking for X or Y.
> 2b) If Y is found first, move Y to the end of the deck, then move
> all the cards that came before Y to the end of the deck:
> [ h ] Y [ t ] --> [ t ] Y [ h ]
> 2c) When you find X, perform a triple cut with X and Y:
> 2c.1) Collect all the cards that came before X into the
> opposite hand, then move X on top of that deck:
>       [ h ] X [ i ] Y [ t ] --> lefthand( [ i ] Y [ t ] ) righthand( X [ h ] )

Pls, break your lines at some logical place, this is quite unreadable.

> 2c.2) Continue stepping down through what's left of the
> original deck until you find Y. When you find it, move the cards that
> came before it to the top of the other deck, then move Y on top of
> that:
>       lefthand( [ i ] Y [ t ] ) righthand( X [ h ] ) --> lefthand( [ t ] ) righthand( Y [ i ] X [ h ] )
>
> 2c.3) Finally, put what's left of the original deck (lefthand)
> on top of the other deck.
>       lefthand( [ t ] ) righthand( Y [ i ] X [ h ] ) --> [ t ] Y [ i ] X [ h ]
>
> (Less explicitly but more intuitively, you're just moving
> blocks of cards from the left hand to the right, where the blocks are
> separated by X and Y.)
>
> 3) Perform a count-cut/bury:
> 3a) Look at the top card, convert it to a value in [1, K] (simply
> the card's rank).
> 3b) Including that first card, count that many cards off the top
> of the deck, and move them to the bottom of the deck.
>       E [ ... <E-1 cards> ... ] [ t ] --> [ t ] E [ ... <E-1 cards> ... ]
>
> (Note this step is not strictly reversible without knowing the
> previous state of the deck. However, there will typically only be a
> few possible ways to reverse it. I.e., start counting from the bottom
> of the deck. Any card that is equal to the count is a possibility. Any
> card that is not equal to the count is not a possibility.)

I don't think this is negligible, but I don't know the formula for how
many bits you lose.

> 4) Perform a shuffle/triple-cut:
> 4a) Repeat step 2.
>
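For reference, the feed operation above in Python (a sketch only: the
function and variable names are mine, K=13 assumes the 26-card deck,
and I haven't cross-checked it card by card against the description):

############################
K = 13   # half-deck size; the deck holds each value 0 .. 2K-1 exactly once

def complement(x, k=K):
    # step 1: the same rank in the other half of the deck, so abs(x - y) == k
    return x - k if x >= k else x + k

def shuffle_triple_cut(deck, x):
    # steps 2a-2c: triple cut around x and its complement y, with the extra
    # move of step 2b when y turns up before x
    y = complement(x)
    first = min(deck.index(x), deck.index(y))
    if deck[first] == y:                               # 2b: y found first
        deck = deck[first + 1:] + [y] + deck[:first]   # [h] Y [t] -> [t] Y [h]
    i = deck.index(x)                                  # x now comes before y
    j = deck.index(y)
    h, mid, t = deck[:i], deck[i + 1:j], deck[j + 1:]
    return t + [y] + mid + [x] + h                     # -> [t] Y [i] X [h]

def count_cut(deck, k=K):
    # step 3: the top card's rank (1..k) says how many cards to bury
    n = deck[0] % k + 1
    return deck[n:] + deck[:n]

def feed(deck, x):
    # steps 2-4: shuffle, bury, shuffle
    return shuffle_triple_cut(count_cut(shuffle_triple_cut(deck, x)), x)
############################

Feeding the inputs is then just deck = feed(deck, x) for each encoded
input value x.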
> After feeding all inputs, produce the output digest as follows:
>
> 1) Look at the top card, convert it to a value in [1, K] (simply the
> card's rank).
> 2) Including that first card, count that many cards off the top of the
> deck.
> 3) The next card is your output value. Move this to the bottom of the
> deck, then move all the cards that were above it to the bottom of the
> deck.
>       E [ ... <E-1 cards> ... ] D [ t ] --> [ t ] D E [ ... <E-1 cards> ... ]   (output is D)
> 4) Repeat steps 1-3 until sufficient number of outputs are generated.
>
> This is almost identical to the method Maaartin suggested earlier, but
> it modifies the deck at each pass, so I would /expect/ it not to loop
> very quickly.

I didn't expect loops in my original output routine either. Let's try
it out.
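For trying it out, the output routine in Python (again a sketch with my
own names; drop=True is the variant that removes the selected card, and
it assumes enough cards remain for the count):

############################
def output_digest(deck, length, k=13, drop=False):
    # repeat steps 1-3: count down from the top card's rank, output the next
    # card, then bury it (or, with drop=True, remove it from the deck)
    out = []
    for _ in range(length):
        n = deck[0] % k + 1            # rank of the top card, 1..k
        d = deck[n]                    # the card right after the counted block
        out.append(d)
        counted, rest = deck[:n], deck[n + 1:]
        deck = rest + ([] if drop else [d]) + counted
        # E [ <E-1 cards> ] D [ t ]  -->  [ t ] D E [ <E-1 cards> ]
    return out, deck
############################

E.g. out, deck = output_digest(deck, 12) after all inputs have been fed.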

> If it does, then we can go back to Maaartin's other
> suggestion of dropping the output card when it's selected. This
> slightly reduces the information density of the output,

Does it? It only prevents you from outputting more than 26 (or 52)
cards. And the output is not a permutation, i.e., you can output more
information than you have (this is nonsense, but you understand what I
mean, don't you?).

> but if it
> stops looping it's obviously worth it, and we should still be able to
> get a good enough password from it: 26 permute 12 is still 52 bits.

I don't understand it.
From: Maaartin on
SORRY FOR MULTIPOSTING. My posting didn't appear for a long time.

On Apr 28, 7:00 pm, bmearns <mearn...(a)gmail.com> wrote:
> Couple bad results so far (using the description at the end of my last
> post). First, the output function I described still loops a lot. So I
> went with dropping the output card from the deck, which slightly
> reduces the density and obviously sets a limit on the length of the
> output, but I think it's still sufficient.

So do I. And you could collect the thrown-away cards (instead of
burning them) and use them as your new deck when your deck gets empty.
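One reading of that, as a variant of the output sketch above (the
clamping of the count when few cards remain is my own choice, not
something from the thread):

############################
def output_with_recycling(deck, length, k=13):
    # drop the selected card from the deck, but keep the dropped cards and
    # reuse them as the new deck once the old one runs out
    out, used = [], []
    for _ in range(length):
        if len(deck) < 2:                 # deck exhausted: recycle the outputs
            deck, used = deck + used, []
        n = min(deck[0] % k + 1, len(deck) - 1)   # clamp the count if needed
        d = deck[n]                       # the card after the counted block
        out.append(d)
        used.append(d)                    # collect instead of "burning"
        deck = deck[n + 1:] + deck[:n]    # bury the counted block, drop d
    return out
############################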

> More importantly, it's not cascading well. I only fed in 5 characters,
> and changing the first one only changed the last output. Guess it's
> back to the drawing board. Maybe it's not possible to get good
> performance without some sort of dealing. I'll add that and see how it
> goes.

That's very strange. I'd suppose that even without dealing, most
output cards must depend on most inputs; they may be correlated, but
this shouldn't happen.
From: bmearns on
On Apr 28, 1:14 pm, Maaartin <grajc...(a)seznam.cz> wrote:
> On Apr 28, 5:11 pm, bmearns <mearn...(a)gmail.com> wrote:
>
> > It does, thanks. It makes sense now and seems pretty powerful. Maybe
> > I'm being too paranoid/romantic about the dealing issue; I just don't
> > like having to put cards down. Even face down, it's a potential
> > security issue (easier for someone to grab, harder for you to pick up
> > and shuffle if disturbed), but there's always the case where you don't
> > have a convenient work surface. With a two-part deal you can do it in-
> > hand, but it's a bit wonky. Basically you would transfer the deck from
> > one hand to the other. For a (1,2) deal, you would deal 1 card from
> > the left hand to the top of the deck in your right hand, then 2 from
> > the left hand to the bottom of the right-hand deck, then back to the
> > top with 1, etc.
>
> When dealing on the desk, a deal(2, 3) or the like is much faster than
> a deal(1, 2). How is it in hand?

I'll give it a try when I can.



> I've got an idea: When dealing, you must go through all the cards.
> When searching for an instance of a card (or for both instances), you go
> through many of them. So we could try to join the two steps, e.g., in
> something like this:
>
> 1. Take 2 (variant: 3) cards from the top and look at them.
> 2. If the searched card is not among them, put them on the bottom
> (variant: onto another deck) and go to 1.
> 3. Otherwise, start another deck. This way you could perform something
> like the double or the triple cut, too.

I like the idea, but I'm a little fuzzy on the last part. What are you
doing when you find the searched card?


>
> > I've got another idea for dealing in-hand, but I'm not sure how it
> > compares to the existing method. It's really just a repeated
> > application of Maaartin's bury technique, with a minor tweak. So you
> > look at the top card, convert it to a number [1-13] (just the card's
> > numerical value, 11, 12, 13 for J, Q, K), and count that many cards
> > down. Take those top cards, /flip them over/ and move them to the
> > bottom of the deck. Repeat until you hit the first upside down card,
> > then flip the deck back over.
>
> > In python (with apologies to FDR, and using numerical decks with ace
> > represented by 0):
>
> What is FDR?

Oh, sorry, FDR refers to Franklin Delano Roosevelt. He was president
in the US in the 1930's and 1940's. He's famous for an economic
revitalization program called "the New Deal".

>
> > ############################
> > def new_deal(deck):
> >     results = []
> >     while len(deck) > 0:
> >         n = (deck[0] % 13) + 1
> >         top = deck[0:n]
> >         results = top + results
> >         deck = deck[n:]
> >     return results
> > ############################
>
> This transformation (unlike bury) is not bijective, as the position of
> the originally top card depends on its value. This may be a good thing
> (the Merkle-Damgård step is not bijective either), but it may lead to
> information loss, which may be substantial (since we have a
> relatively small state and this operation gets repeated many times).

You mean there are multiple initial deck arrangements that would lead
to the same output? I'm not seeing how that would be the case.

>
> > It's not nearly as thorough of a deal as the other routine, but it is
> > pretty fast and easy to do (I've actually tested it with cards, this
> > time). I think if we have a strong output function like Maaartin said,
> > this might be sufficient.
>
> It can be done differently, using another transformation to numbers:
> clubs=1, diamonds=2, hearts=3, spades=4. This destroys the sequences
> more.

Nice spotting, that could improve things considerably.


[snip]
> I think we should combine a lot of simple operations, so there always
> will be one in between in all my algorithms.

Agreed.

[snip]
> > One thing that might be worth pointing out is that what we have
> > right now (minus the deal operation) is pretty similar to Schneier's
> > Solitaire cipher. His does a triple cut around a pair of jokers, then
> > a count cut based on the bottom card. We do a triple cut (or some
> > variation thereof) around the input value, and then a count cut based
> > on the top card (then another triple cut). Solitaire relies on the
> > deck being initially arranged into a random permutation,
>
> The initial arrangement is his key, isn't it? I already proposed the
> initial arrangement as a (the weaker) part of our key.

That's right, it's the key in Solitaire. It would actually be a very
strong key if it was a full random permutation of the entire deck, but
remembering it as your password is certainly no easy feat.


[snip]
> AFAIK, he's going to evaluate the speed and the error-rate, which must
> be done by hand. He's also going to do stats, which must be done by
> computer.

Ok, I understand.


[snip]
> > Is there an easy way we can use two-instances, but have them not be
> > equivalent? In Solitaire, you move the A joker by one card, and the B
> > joker by two cards, before doing the triple cut, but I'm not crazy
> > about that. I've run Solitaire a few times and that part's a bit of a
> > burden, and slightly error prone (especially when the jokers are near
> > each other or at the end of the deck). One idea is that if you find
> > the B instance (i.e., the "complementary" card)
>
> You mean the other card of the same rank and different color in case
> of 26 cards, right?

That's right. For a 52-card deck it could be the other card of the
same rank and color (but different suit).


[snip]
> > So complementary cards are no longer equivalent, but we can still use
> > the triple cut which I think is better than a single cut.
>
> But is it also better than two single cuts?

I suppose it depends on how you do those two single cuts...

>
> > The one
> > thing I notice is that the second case (where the complement is
> > found first) leaves the head and the input card ( [ h ] X ) in
> > position. Assuming that happens 50% of the time, and in conjunction
> > with our bury and second shuffle, maybe that's ok. If either of you
> > have a better way to handle this, I'm all ears.
>
> There's only one "ideal" permutation: t i h. All others either leave
> something in place or leave neighbouring parts together. Maybe we could
> always do the "ideal" operation, and choose the following one according
> to which instance was found first. The two operations to choose from
> could be my original bury and the shape-based bury from this posting.
>
> > Alright, here's a description of my current preference for the
> > algorithm. I'm doing away with the deal until the output phase.
>
> > In a deck of length 2K, feed input value X as follows:
>
> > 1) Let Y be the complement of X. If X >= K, then Y = X % K. Otherwise,
> > Y = K+X. (So if you're using two different colors/suits, it's just the
> > same rank card in the opposite color).
>
> You mean abs(X-Y) = 13, right?

Yes, that would be an assertion for a 26-card deck. This was just
meant to be a formal description of the "complementary card" concept I
described above. For a 26-card deck, the complementary card is the
same rank but different color. For a 52-card deck, it's the same rank
and color, but different suit (at least it could be if the suits were
ordered appropriately). This just generalizes the concept to arbitrary
sequences of numbers.
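A quick check of that generalization, using the formula from the
description above (K = 13, cards numbered 0..25):

############################
def compl(x, K=13):
    return x % K if x >= K else K + x   # the rule from step 1: Y given X

assert all(abs(x - compl(x)) == 13 for x in range(26))   # abs(X - Y) = 13
assert all(compl(compl(x)) == x for x in range(26))      # pairing is symmetric
############################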

[snip]
> Pls, break your lines at some logical place, this is quite unreadable.

Sorry about that, I forgot that the client would split the lines.


[snip: re: count-cut]
> >     (Note this step is not strictly reversible without knowing the
> > previous state of the deck. However, there will typically only be a
> > few possible ways to reverse it. I.e., start counting from the bottom
> > of the deck. Any card that is equal to the count is a possibility. Any
> > card that is not equal to the count is not a possibility.)
>
> I don't think this is negligible, but I don't know the formula for how
> many bits you lose.

I don't think our deck loses any information, per se; I just meant to
point out that the deck is quite vulnerable at this point: were an
attacker to get their hands on it at this point in the algorithm, it
would not be difficult for them to reverse it, at least to one of just
a handful of possibilities. More of just an observation than a
practical concern. Something to keep in mind if, for instance, we had
the idea to apply this in the final step for some reason.

[snip]
> > This is almost identical to the method Maaartin suggested earlier, but
> > it modifies the deck at each pass, so I would /expect/ it not to loop
> > very quickly.
>
> I didn't expect loops in my original output routine either. Let's try
> it out.

You're right of course. It loops far more easily than I could have
imagined if I hadn't seen it.


> > If it does, then we can go back to Maaartin's other
> > suggestion of dropping the output card when it's selected. This
> > slightly reduces the information density of the output,
>
> Does it? It only prevents you from outputting more than 26 (or 52)
> cards. And the output is not a permutation, i.e., you can output more
> information than you have (this is nonsense, but you understand
> what I mean, don't you?).
>
> > but if it
> > stops looping it's obviously worth it, and we should still be able to
> > get a good enough password from it: 26 permute 12 is still 52 bits.
>
> I don't understand this.

The output would be a permutation actually, just not of the whole set.
If you remove each card from the deck as you pick it, that means you
can't pick that card again. That means each sequential card chosen is
selected from a smaller set, so it encodes fewer bits.

For instance, if you choose 3 cards in this way (out of a deck of 26),
you have 26 choices for the first, 25 for the second, and 24 for the
third. The total number of possible outcomes is 26*25*24. More
generally, if you choose N outputs from 26, the total number of
possibilities is 26!/(26-N)!. If we had all 26 possibilities to choose
from equally for each selection, the total number would of course be
26^N (26 to the power N), which is strictly greater for N>1.
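In numbers (a quick check of the figures above):

############################
from math import factorial, log2

# Outputs drawn without replacement from a 26-card deck, versus the ideal
# case of 26 independent choices per output.
N = 12
without_replacement = factorial(26) // factorial(26 - N)   # 26!/(26-N)!
print(log2(without_replacement))   # ~52.0 bits  ("26 permute 12")
print(N * log2(26))                # ~56.4 bits  (26^N)
############################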

Even without dropping the card every time, we don't quite get 26^N
possible outcomes, because when a card is moved to the end, we won't
be able to select it in the next round. We may be able to select it in
the following round, but it's still unlikely, and so on for a few more
rounds. But it will still be no less than if we do drop the card.
That's moot, however, because the looping is inexcusable.

Cheers,
-Brian


