From: Boris Punk on
I have a Hashtable in-memory and want to sync updates to the Hashtable to
disk. There may be frequent updates to the Hashtable and I want to avoid
constant small update disk writes. Has anyone got any idea how to do this?

Thanks


From: Lew on
On 06/16/2010 02:12 AM, Boris Punk wrote:
> I have a Hashtable in-memory and want to sync updates to the Hashtable to
> disk. There may be frequent updates to the Hashtable and I want to avoid
> constant small update disk writes. Has anyone got any idea how to do this?

If you avoid writes when it updates, you won't be in synch. Why are you
worried about disk write frequency? Do you have measurements that indicate a
problem or are you just fantasizing that there will be one?

Don't use Hashtable; it's a legacy class. Use another Map implementation,
such as HashMap or ConcurrentHashMap.
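
For example, if you're relying on Hashtable's synchronization,
java.util.concurrent.ConcurrentHashMap is usually a drop-in replacement
(the key and value types here are just for illustration):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapExample {
    public static void main(String[] args) {
        // thread-safe like Hashtable, but without locking the whole table on every access
        Map<String, String> table = new ConcurrentHashMap<String, String>();
        table.put("key", "value");
        System.out.println(table.get("key"));
    }
}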

--
Lew
From: Tom Anderson on
On Wed, 16 Jun 2010, Boris Punk wrote:

> I have a Hashtable in-memory and want to sync updates to the Hashtable
> to disk. There may be frequent updates to the Hashtable and I want to
> avoid constant small update disk writes. Has anyone got any idea how to
> do this?

Loads.

How do you want to store the hashtable?

Let's assume serialisation. Not tested, and obviously not ready for real
use:

import java.io.*;
import java.util.Collections;
import java.util.Map;

public class MapDumper {
    public static <K, V> Map<K, V> makeDumpingMap(Map<K, V> m, File file, long interval) {
        Serializable s = (Serializable)m;
        Map<K, V> sm = Collections.synchronizedMap(m);
        new PeriodicDumper(s, sm, file, interval).start();
        return sm;
    }
}

public class PeriodicDumper implements Runnable {
    private final Serializable obj;
    private final Object lock;
    private final File file;
    private final long interval;
    private volatile Thread t;

    public PeriodicDumper(Serializable obj, Object lock, File file, long interval) {
        this.obj = obj;
        this.lock = lock;
        this.file = file;
        this.interval = interval;
    }

    public void run() {
        while (t != null) {
            try {
                Thread.sleep(interval);
            } catch (InterruptedException e) {
                // just treat an interrupt as an early exit from the sleep
            }
            try {
                dump();
            } catch (IOException e) {
                // do something
            }
        }
    }

    public void dump() throws IOException {
        // go via a buffer to avoid doing IO while holding the lock
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream oout = new ObjectOutputStream(buf);
        synchronized (lock) {
            oout.writeObject(obj);
            oout.close();
        }
        OutputStream fout = new FileOutputStream(file);
        try {
            buf.writeTo(fout);
        }
        finally {
            fout.close();
        }
    }

    public void start() {
        synchronized (this) {
            // only spawn a thread the first time; calling start() twice on the
            // same Thread would throw IllegalThreadStateException
            if (t == null) {
                t = new Thread(this);
                t.setDaemon(true);
                t.start();
            }
        }
    }

    public void stop() {
        synchronized (this) {
            if (t != null) {
                Thread t = this.t;
                this.t = null;
                t.interrupt();
            }
        }
    }
}
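
Used something like this (again untested; the file name and interval are
just for illustration):

import java.io.File;
import java.util.HashMap;
import java.util.Map;

public class DumpingMapExample {
    public static void main(String[] args) throws Exception {
        // dump the map to table.ser every five seconds
        Map<String, String> map = MapDumper.makeDumpingMap(
                new HashMap<String, String>(), new File("table.ser"), 5000);
        map.put("key", "value"); // all access must go through the returned wrapper
        Thread.sleep(12000);     // give the dumper a couple of chances to write
    }
}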

Also, if you could get access to the magic cookie inside the map that's used
to detect concurrent modifications (the private modCount field the fail-fast
iterators check), you could easily skip dumps when no change has occurred.
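
Since it's private you can't get at it without reflection, but you could keep
an equivalent counter in a thin wrapper of your own and have the dumper skip
the write when it hasn't moved. Roughly (untested; only put() and remove()
are shown, a real version would implement and forward the whole Map
interface):

import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Counts modifications so the dumper can skip a write when nothing has
// changed since the last dump it did.
public class CountingMap<K, V> {
    private final Map<K, V> delegate;
    private final AtomicLong version = new AtomicLong();

    public CountingMap(Map<K, V> delegate) {
        this.delegate = delegate;
    }

    public V put(K key, V value) {
        version.incrementAndGet();
        return delegate.put(key, value);
    }

    public V remove(Object key) {
        version.incrementAndGet();
        return delegate.remove(key);
    }

    public long version() {
        return version.get();
    }
}

The dumper then remembers the version it last wrote and compares before
serialising.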

You should do the dump a bit more cleverly than this, too, so you're never
in a state where the data on disk is incomplete. Dump to a second file,
then atomically rename over the first.
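
Something like this (untested; whether renameTo() is atomic, and whether it
will replace an existing file, is platform-dependent: on most Unixes it maps
onto rename(2), while on Windows you may have to delete the target first):

import java.io.*;

public class SafeDump {
    // Write to a temporary file in the same directory, then rename it over
    // the real one, so readers never see a half-written file.
    public static void safeDump(Serializable obj, File file) throws IOException {
        File tmp = new File(file.getParentFile(), file.getName() + ".tmp");
        ObjectOutputStream oout = new ObjectOutputStream(new FileOutputStream(tmp));
        try {
            oout.writeObject(obj);
        } finally {
            oout.close();
        }
        if (!tmp.renameTo(file)) {
            throw new IOException("could not rename " + tmp + " to " + file);
        }
    }
}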

tom

--
In the long run, we are all dead. -- John Maynard Keynes
From: markspace on
Boris Punk wrote:
> I have a Hashtable in-memory and want to sync updates to the Hashtable to
> disk. There may be frequent updates to the Hashtable and I want to avoid
> constant small update disk writes.


I think I'd normally call that a "database" and not a "hash table." Check
out some of the lightweight implementations: SQLite, Java DB (formerly
Derby), Berkeley DB (Oracle has a Java implementation), and of course
the non-Java but very popular memcached.
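
With embedded Java DB / Derby, for example, a simple key-value table is only
a few lines of JDBC. Untested sketch; the database name and table layout are
made up:

import java.sql.*;

public class DerbyKeyValue {
    public static void main(String[] args) throws Exception {
        // "mapdb" is an arbitrary name; ;create=true creates the database on first use.
        // On Java 6+ the embedded driver registers itself if derby.jar is on the
        // classpath; on older JVMs load it with
        // Class.forName("org.apache.derby.jdbc.EmbeddedDriver").
        Connection conn = DriverManager.getConnection("jdbc:derby:mapdb;create=true");
        Statement st = conn.createStatement();
        try {
            st.executeUpdate("CREATE TABLE kv (k VARCHAR(255) PRIMARY KEY, v VARCHAR(255))");
        } catch (SQLException e) {
            // table already exists on later runs
        }
        PreparedStatement ps = conn.prepareStatement("INSERT INTO kv (k, v) VALUES (?, ?)");
        ps.setString(1, "key");
        ps.setString(2, "value");
        ps.executeUpdate();
        ps.close();
        st.close();
        conn.close();
    }
}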

From: Boris Punk on
Cheers