
#1 2013-05-03 01:08:23

Xaero252
Member
Registered: 2011-11-28
Posts: 107

Best approach to a sanity-check-based RAM-to-disk sync?

Here's the scenario:
I'm working on an embedded project using ArchLinux at its core. The application I'm running creates a cache of content used by the end user at first runtime and updates it on each subsequent runtime. Once the content directory reaches around 4000 items, menu selections become quite slow, since the application is reading from a disk-based cache. That will almost certainly be the case in the deployed application, so the appropriate action in my mind is to run as much of the application from memory as possible.

Given a generous 4 GB of RAM, I can copy the entire program directory, a large cache, and all interface elements into RAM at boot and run the application from there (tmpfs entry in fstab -> cp -R from the static directory on disk -> RAM). This leaves only the actual content elements (~40 GB) on the disk for runtime access.

The problem with this approach is that runtime usage statistics and other information relevant to the user experience are also written to the program directory (which is now in RAM). That in itself is fine, since it greatly reduces the number of writes to the solid-state disk, and I can sync the data back to the disk on a clean shutdown... however, in the event of power loss or an unexpected interruption, all of that data would be lost.
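Concretely, the boot-time setup I have in mind looks something like this (the mount point, size, and paths are placeholders for illustration, not my actual layout):

    # /etc/fstab -- tmpfs mount for the program directory
    tmpfs   /mnt/appram   tmpfs   defaults,size=1G,mode=0755   0   0

    # run once at boot (systemd unit, rc script, etc.)
    # trailing /. copies the directory contents, dotfiles included
    cp -R /opt/app/. /mnt/appram/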

I know how to quickly check whether the data has changed, since the tmpfs will have timestamps on all the files. But what is the best way to sync the data back to the disk? Should I just write a bash script that copies the data into tmpfs, starts the front-end in the background and relaunches it if it terminates, and checks the data every 15 minutes, either doing nothing or copying it back to the static directory? Something along the lines of the sketch below.
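Roughly this (again, the paths and the front-end name are placeholders; I'm using rsync as the "did it change" check, since by default it only copies files whose size or mtime differ):

    #!/bin/bash
    RAMDIR=/mnt/appram    # tmpfs mount
    DISKDIR=/opt/app      # static copy on the SSD

    # populate the tmpfs from the static directory on disk
    cp -R "$DISKDIR/." "$RAMDIR/"

    # every 15 minutes, write changed files back to the SSD;
    # rsync skips files whose size and mtime are unchanged, so this
    # doubles as the timestamp check and keeps disk writes low
    while true; do
        sleep 900
        rsync -a "$RAMDIR/" "$DISKDIR/"
    done &

    # relaunch the front-end whenever it terminates
    while true; do
        "$RAMDIR/frontend"
    done

One final rsync would still run in the clean-shutdown path, so at worst a power cut loses the last 15 minutes of statistics.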

The end goal, obviously, is to minimize the amount of "Please Wait..." time on screen...

