Hi All,
I hope you are well.
I'm thinking about building a disk array for backups, using spinning disks or maybe even SSDs.
Can you control the storage hardware so that it does not consume electricity by spinning while idle?
This would be storage that is not needed all the time, perhaps mounted and unmounted every day just for the backups.
Electricity consumption is one concern, but so is wear, in the case of a mechanical storage device.
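For the mount/unmount-every-day part, one option would be systemd's automount with an idle timeout, so the filesystem is mounted on access and released again by itself. A sketch of an /etc/fstab entry (the label, mount point, and filesystem are made-up examples, not anything from this thread):

```
LABEL=backup  /mnt/backup  ext4  noauto,x-systemd.automount,x-systemd.idle-timeout=10min  0 0
```

With that, the disk's own spindown timer can then take over once the filesystem is no longer busy.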
Let me know.
Best regards
Last edited by Kardell (2024-05-31 00:14:35)
"Those who don't know history are doomed to repeat it." Edmund Burke
Yes, this is possible; see https://wiki.archlinux.org/title/Hdparm … figuration for reference.
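To make such a setting persist across reboots, the wiki describes a udev rule; a sketch (the device match and timeout value are assumptions you'd adapt to your drives):

```
# /etc/udev/rules.d/69-hdparm.rules (sketch)
# Match rotational disks only and set a spindown timeout on hotplug/boot.
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", \
  RUN+="/usr/bin/hdparm -S 120 /dev/%k"
```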
My NAS right now:
$ hdparm -C /dev/sd{a,b}
/dev/sda:
drive state is: standby
/dev/sdb:
drive state is: standby
# initially set with the following
$ hdparm -S 120 /dev/sd{a,b}
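Note that the -S argument is not a number of seconds: per hdparm(8), values 1..240 mean that many units of 5 seconds, and 241..251 mean (value - 240) units of 30 minutes. A small helper (illustration only, not part of hdparm) to decode a -S value:

```shell
# Decode an hdparm -S value into seconds, per the hdparm(8) man page:
# 0 disables the timer, 1..240 are units of 5 s, 241..251 are units of 30 min.
spindown_seconds() {
    v=$1
    if [ "$v" -eq 0 ]; then
        echo 0                          # spindown timer disabled
    elif [ "$v" -le 240 ]; then
        echo $((v * 5))                 # units of 5 seconds
    elif [ "$v" -le 251 ]; then
        echo $(((v - 240) * 1800))      # units of 30 minutes
    else
        echo "special value, see hdparm(8)"
    fi
}

spindown_seconds 120    # so -S 120 above = 600 seconds = 10 minutes
```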
That's awesome, thank you!
You may also check whether the drives and the controller support power-up in standby (PUIS). This keeps a disk from spinning up at power-on until it is requested by the controller.
This can also help bring up a big array with a smaller power supply that can keep up with the constant load but doesn't have enough oomph for the high inrush current when all drives try to start at once.
Note: it depends on both the drives and the controller - if the controller doesn't support the feature, you can end up with a "dead" drive unless you connect it to a system which does support it to bring it back to life.
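Whether a drive advertises PUIS shows up in its identify data; `hdparm -I` lists it as "Power-Up In Standby feature set". A sketch of checking for it - the identify excerpt below is a hypothetical sample for illustration; on a real system you would pipe the live `hdparm -I /dev/sdX` output instead:

```shell
# Hypothetical excerpt of `hdparm -I /dev/sdX` output (illustration only).
sample='Commands/features:
	Enabled	Supported:
	   *	SMART feature set
	    	Power-Up In Standby feature set'

# On a real system: hdparm -I /dev/sdX | grep -i 'Power-Up In Standby'
if printf '%s\n' "$sample" | grep -qi 'Power-Up In Standby'; then
    echo "drive advertises PUIS support"
fi
```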
Just an update on this. I have two of these 3.5" NAS hard drives, Seagate IronWolf Pro, but I cannot see the drives supporting standby or spindown.
hdparm gives me:
hdparm -C /dev/sd{b,c}
/dev/sdb:
SG_IO: bad/missing sense data, sb[]: f0 00 01 00 50 40 81 0a 00 82 00 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
drive state is: unknown
/dev/sdc:
SG_IO: bad/missing sense data, sb[]: f0 00 01 00 50 40 81 0a 00 81 00 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
drive state is: unknown
while sdparm does not list SCT or STANDBY:
sdparm -l -a /dev/sdb
/dev/sdb: ATA ST10000NE0008-1Z SS02
Direct access device specific parameters: WP=0 DPOFUA=0
Read write error recovery [rw] mode page:
AWRE 1 [cha: n, def: 1] Automatic write reallocation enabled
ARRE 0 [cha: n, def: 0] Automatic read reallocation enabled
TB 0 [cha: n, def: 0] Transfer block
RC 0 [cha: n, def: 0] Read continuous
EER 0 [cha: n, def: 0] Enable early recovery (obsolete)
PER 0 [cha: n, def: 0] Post error
DTE 0 [cha: n, def: 0] Data terminate on error
DCR 0 [cha: n, def: 0] Disable correction (obsolete)
RRC 0 [cha: n, def: 0] Read retry count
COR_S 0 [cha: n, def: 0] Correction span (obsolete)
HOC 0 [cha: n, def: 0] Head offset count (obsolete)
DSOC 0 [cha: n, def: 0] Data strobe offset count (obsolete)
LBPERE 0 [cha: n, def: 0] Logical block provisioning error reporting enabled
MWR 0 [cha: n, def: 0] Misaligned write reporting
WRC 0 [cha: n, def: 0] Write retry count
RTL 0 [cha: n, def: 0] Recovery time limit (ms)
Caching (SBC) [ca] mode page:
IC 0 [cha: n, def: 0] Initiator control
ABPF 0 [cha: n, def: 0] Abort pre-fetch
CAP 0 [cha: n, def: 0] Caching analysis permitted
DISC 0 [cha: n, def: 0] Discontinuity
SIZE 0 [cha: n, def: 0] Size enable
WCE 1 [cha: y, def: 1] Write cache enable
MF 0 [cha: n, def: 0] Multiplication factor
RCD 0 [cha: n, def: 0] Read cache disable
DRRP 0 [cha: n, def: 0] Demand read retention priority
WRP 0 [cha: n, def: 0] Write retention priority
DPTL 0 [cha: n, def: 0] Disable pre-fetch transfer length
MIPF 0 [cha: n, def: 0] Minimum pre-fetch
MAPF 0 [cha: n, def: 0] Maximum pre-fetch
MAPFC 0 [cha: n, def: 0] Maximum pre-fetch ceiling
FSW 0 [cha: n, def: 0] Force sequential write
LBCSS 0 [cha: n, def: 0] Logical block cache segment size
DRA 0 [cha: n, def: 0] Disable read ahead
SYNC_PROG 0 [cha: n, def: 0] Synchronous cache progress indication
NV_DIS 0 [cha: n, def: 0] Non-volatile cache disable
NCS 0 [cha: n, def: 0] Number of cache segments
CSS 0 [cha: n, def: 0] Cache segment size
Control [co] mode page:
TST 0 [cha: n, def: 0] Task set type
TMF_ONLY 0 [cha: n, def: 0] Task management functions only
DPICZ 0 [cha: n, def: 0] Disable protection information check if protect field zero
D_SENSE 0 [cha: y, def: 0] Descriptor format sense data
GLTSD 1 [cha: n, def: 1] Global logging target save disable
RLEC 0 [cha: n, def: 0] Report log exception condition
QAM 0 [cha: n, def: 0] Queue algorithm modifier
NUAR 0 [cha: n, def: 0] No unit attention on release
QERR 0 [cha: n, def: 0] Queue error management
VS_CTL 0 [cha: n, def: 0] Vendor specific [byte 4, bit 7]
RAC 0 [cha: n, def: 0] Report a check
UA_INTLCK 0 [cha: n, def: 0] Unit attention interlocks control
SWP 0 [cha: n, def: 0] Software write protect
ATO 0 [cha: n, def: 0] Application tag owner
TAS 0 [cha: n, def: 0] Task aborted status
ATMPE 0 [cha: n, def: 0] Application tag mode page enabled
RWWP 0 [cha: n, def: 0] Reject write without protection
SBLP 0 [cha: n, def: 0] Supported block lengths and protection information
AUTOLOAD 0 [cha: n, def: 0] Autoload mode
BTP -1 [cha: n, def: -1] Busy timeout period (100us)
ESTCT 30 [cha: n, def: 30] Extended self test completion time (sec)
Don't necrobump posts, please.
This might be an instance of a known kernel bug that I have reported here:
https://lore.kernel.org/all/0bf3f2f0-0f … @heusel.eu
If downgrading the kernel helps, it's most likely the same bug.
Makes sense; I am on the latest stable: 6.10.3-arch1-2.
@gromit
I don't see a necrobump here - it's still relevant to the same topic - just a very late reply ... anyway
@OP
PUIS is set with
hdparm -s1 <device>
Usually you will get this warning:
[main@main ~]$ sudo hdparm -s1 /dev/sda
/dev/sda:
Use of -s1 is VERY DANGEROUS.
This requires BIOS and kernel support to recognize/boot the drive.
Please supply the --yes-i-know-what-i-am-doing flag if you really want this.
Program aborted.
[main@main ~]$
Unfortunately, at least I don't know of a way to find out whether the controller or the drive supports it, other than to just try it (yolo).
For me, when I set my drives to PUIS, they get started one at a time very early in boot, when udev runs device discovery - but I know this is specific to how my hardware and drives work.
Other possible outcomes:
- neither the drives nor the controller supports it anyway - trying to use -s1 to enable PUIS will do nothing
- only one of either the controller or the drives supports it, but not both - maybe the same result as above, but it could also lead to "dead" drives
- both the controller and the drives support it - should work fine, as long as your controller is also supported by the kernel
- any other combination, or additional drivers, driver configuration options, or the option ROM / firmware of the controller - anything between nothing, "dead" drives, and working
Be aware - you've been warned - this can lead to non-working drives with incompatible hardware. Do it at your own risk!
The SATA spec has another feature called staggered spin-up - the effect is the same: the drive isn't spun up until the controller tells it to - but it requires a special power connector, often only found on backplanes, plus special controllers and sometimes software - maybe a special-purpose NAS provides such a feature.
Anyway - when using an array for occasional backups, I would look for an external enclosure without custom firmware but regular amd64 hardware, and set up my own system that I would only power up for backups, and power back down after checking the backup is good - this also prevents you from losing an always-mounted backup to some ransomware (which has caused a lot of data loss for many who did that).
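That power-up/backup/power-down cycle could be scripted; a sketch, where the device label, mount point, and rsync source are all made-up examples (the DRY_RUN=1 mode just prints the commands instead of running them, so you can preview before running as root):

```shell
#!/bin/sh
# Sketch of a manual offline-backup cycle (hypothetical paths/label).
# run() executes its arguments, or only prints them when DRY_RUN=1.
run() { [ "${DRY_RUN:-0}" = 1 ] && echo "$*" || "$@"; }

backup_cycle() {
    dev=/dev/disk/by-label/backup   # assumed device label
    mnt=/mnt/backup                 # assumed mount point
    run mount "$dev" "$mnt"
    run rsync -a --delete /home/ "$mnt/home/"
    run umount "$mnt"
    run hdparm -y "$dev"            # put the drive into standby immediately
}

# Preview the commands first:
DRY_RUN=1 backup_cycle
```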
Last edited by cryptearth (2024-08-11 14:23:19)