From: drscrypt on
Has someone implemented something like this where they map a long URL
(which I guess could be any string in the general case) to a short
randomized string that is relatively easy to remember? The hashing must
be unique so that two different URLs do not map to the same string.


DrS
From: lrem on
In post <hiid6j$f06$1(a)speranza.aioe.org>,
drscrypt(a)gmail.com scribbled:
> Has someone implemented something like this where they map a long URL
> (which I guess could be any string in the general case) to a short
> randomized string that is relatively easy to remember? The hashing must
> be unique so that two different URLs do not map to the same string.

But you want to be able to go from the short URL to the long one, don't you?
So this is no longer hashing. Furthermore, you can just check whether you
already have a copy of the short one in your database.
From: drscrypt on
lrem(a)localhost.localdomain wrote:
> But you want to be able to go from the short URL to the long one, don't you?
> So this is no longer hashing. Furthermore, you can just check whether you
> already have a copy of the short one in your database.


Well, I thought that the mapping would be the more difficult part of the
task. But you are right - the short URL will map to the original longer
one, and I would use the standard HTTP redirect code. I'd like to avoid
multiple checks against the database if possible, to avoid delays in the
process.
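
For illustration, here is a minimal sketch of such a redirect handler in
Python, using the standard library's http.server and an in-memory table
(the table contents and port are made up):

from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mapping from short path to original long URL.
URLS = {"/abc123": "http://example.com/some/very/long/path"}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        long_url = URLS.get(self.path)
        if long_url:
            # 301 tells the client (and caches) the move is permanent.
            self.send_response(301)
            self.send_header("Location", long_url)
            self.end_headers()
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), RedirectHandler).serve_forever()

With the table kept in memory (or cached), each lookup is a single
dictionary access, so the redirect path needs no extra round trips to
the database.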


DrS
From: lrem on
In post <hiidri$gb4$1(a)speranza.aioe.org>,
drscrypt(a)gmail.com scribbled:
> lrem(a)localhost.localdomain wrote:
>> But you want to be able to go from the short URL to the long one, don't you?
>> So this is no longer hashing. Furthermore, you can just check whether you
>> already have a copy of the short one in your database.
>
>
> Well, I thought that the mapping would be the more difficult part of the
> task. But you are right - the short URL will map to the original longer
> one, and I would use the standard HTTP redirect code. I'd like to avoid
> multiple checks against the database if possible, to avoid delays in the
> process.

Almost anything will do. If you just generate a uniformly random string,
the collision probability equals the fraction of your address space that
is already exhausted. If you don't care about people browsing other
people's links, you can just issue consecutive URLs and get rid of
collisions entirely. Anyhow, designing any complex hashing scheme is
overkill.
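
For example, a minimal sketch of both schemes in Python, using an
in-memory dict as the store (all names here are illustrative):

import random
import string

ALPHABET = string.ascii_letters + string.digits  # 62 symbols
store = {}    # short key -> long URL
counter = 0   # state for the consecutive scheme

def shorten_random(long_url, length=6):
    # Uniformly random key; retry on the (rare) collision.
    while True:
        key = "".join(random.choice(ALPHABET) for _ in range(length))
        if key not in store:
            store[key] = long_url
            return key

def shorten_sequential(long_url):
    # Consecutive keys: an increasing counter encoded in base 62.
    # Collision-free, but the links are trivially enumerable.
    global counter
    n = counter
    counter += 1
    key = ALPHABET[n % 62]
    n //= 62
    while n:
        key = ALPHABET[n % 62] + key
        n //= 62
    store[key] = long_url
    return key

With six characters over a 62-symbol alphabet there are 62**6 (about
5.7e10) possible keys, so the random scheme rarely collides until a
sizable fraction of that space has been handed out.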
From: drscrypt on
lrem(a)localhost.localdomain wrote:
> Almost anything will do. If you just generate a uniformly random string,
> the collision probability equals the fraction of your address space that
> is already exhausted. If you don't care about people browsing other
> people's links, you can just issue consecutive URLs and get rid of
> collisions entirely. Anyhow, designing any complex hashing scheme is
> overkill.


What would a simple one look like?


DrS