My ISP's webserver, which I am using as a repo, doesn't allow live directory listings. In other words, if a user simply browses to the site root, a message comes up telling them the page doesn't exist.
The structure of my site is trivial:
/
/i686
/x86_64
I'd like a nice script that converts the output of an "ls -l" to HTML with basic clickable links for the files, printing just the date/time stamp and file size.
Example:
file1 Jan 15 16:55 25M
file2 Jan 15 16:55 25M
file3 Jan 15 16:55 25M
I was thinking about something like:
#!/bin/bash
# truncate on the first write so the file is rebuilt each run
date > /dev/shm/ls.html
echo "<br>" >> /dev/shm/ls.html
for i in * ; do
    echo "$i" "<br>" >> /dev/shm/ls.html
done
Is there something out there that's pre-made and more visually appealing? If not, how can I store and parse the output of an "ls -lh" into the following format (for each item):
filename datestamp size
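The closest I've come is running it through awk (a rough sketch; the field numbers assume standard "ls -lh" output and it will break on filenames containing spaces):

ls -lh | awk 'NR>1 {print $9, $6, $7, $8, $5}'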
EDIT: From what I'm finding via Google, Perl is probably the right scripting language (not bash) to accomplish this task. I know zero about Perl.
Last edited by graysky (2011-01-23 20:43:17)
Offline
GNU stat would work just fine for this. Do not parse ls.
exec > /path/to/output.html
printf '<html>\n<body>\n<table>\n'
for file in *; do
    stat -c '<tr><td>%A</td><td>%U</td><td>%G</td><td>%s</td><td>%y</td><td>%n</td></tr>' "$file"
done
printf '</table>\n</body>\n</html>\n'
This should get you started.
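If you also want the clickable links you asked for, a minimal variant of the loop body (same idea, just wrapping the name in an anchor; %y is the mtime and %s the size in bytes, not the human-readable "25M" from your example):

for file in *; do
    printf '<tr><td><a href="%s">%s</a></td><td>%s</td><td>%s</td></tr>\n' \
        "$file" "$file" "$(stat -c %y "$file")" "$(stat -c %s "$file")"
done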
Last edited by falconindy (2011-01-23 15:11:04)
Offline
You can use tree; it'll export to HTML.
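Something like this should do it (a sketch; per the man page, -H turns on HTML output with the given base href, -h adds human-readable sizes, -D the last-modification date, and -o picks the output file):

tree -h -D -H . -o /dev/shm/ls.html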
Offline
@falconindy - thanks for the skeleton. I may play more with it.
@dmz - Tree is very nice. Thanks for the suggestion.
Offline