This is a possible bug, or possibly a misconfiguration; I haven't been able to determine which.
On a newly created zfs-linux-git raidz2 array, I'm getting failures when I try to push files (about 350GB worth, none succeeding) to it across a Samba share. This started suddenly a couple of weeks ago and has continued since.
The system pushing the files (FreeFileSync) is throwing back this error:
Error Code 1: Incorrect function. (DeviceIoControl, FSCTL_SET_SPARSE)
Referencing this part of the exchange in particular:
So in your case it seems this flag is set, but it does not support sparse files? Do you know some more details about your target file system?
The drive is accessed this way:
\\hostname\share\path
Not sure if switching to a different protocol might help. I'm wondering, though, whether sparse-file support is being misreported as available there.
There is the possibility I can look into an NTFS file flag on the files being pushed; however, I can verify the app has been using this feature for many versions prior. The file base hasn't changed much either, but it's possible I flagged something on the files since then.
Does
FSCTL_SET_SPARSE
look to anyone like something that can be affected by ZFS settings?
Honestly, I hadn't had to think much about sparse files until now, other than for VM work, so I don't have a clue what happened, and the data is critically going without backup in the meantime.
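One server-side thing that may be worth checking (my own assumption, not something established in this thread): Samba only advertises sparse-file support on a share when "strict allocate" is off, so grepping the config can rule that out:

```shell
# Check whether the Samba share forces full allocation; "strict allocate = yes"
# makes Samba preallocate file space, which can break sparse-file requests.
# /etc/samba/smb.conf is the usual path but is an assumption here.
grep -in "strict allocate" /etc/samba/smb.conf \
    || echo "strict allocate not set (Samba default is no)"
```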
Any help or suggestions would be appreciated. :-)
Last edited by wolfdogg (2016-11-24 21:39:11)
Offline
I realized I should test using drag-and-drop from the same operating system, and that works for the exact files that were constantly erroring. It looks like the third-party app creates a 0-byte sparse file first, then presumably tries to grow it, which is where it fails, leaving a 0-byte file behind. Any ideas on ZFS settings this third-party app might need to get the job done? I've tried tweaking the settings, to no avail so far.
Last edited by wolfdogg (2016-11-28 10:29:08)
Offline
Is there anybody who might be able to shed some light on this? I have some 50-year-old family pictures going without a proper backup until I get this resolved. I thought I believed in ZFS once; I'm just stuck now, with 250GB of data still waiting, unprotected, to die or be backed up.
I was wondering: I do remember recently checking the Archive checkbox in the Windows right-click file properties. That wouldn't set a SPARSE flag, would it?
What caused this?
a) Was it the recreation of the ZFS array (going from a 4-drive linear ZFS span to a 6-drive raidz2)?
b) Was it the archive flag?
c) Was it some sparse flag on Windows NTFS?
d) Is it an update of FreeFileSync, which has supported this sparse option for years anyway?
e) Did I miss adding a ZFS setting this time?
f) Does anybody have a procedure or proposal I can follow?
Last edited by wolfdogg (2016-12-06 23:30:36)
Offline
Go to ZFS, run
touch file
truncate -s 12345 file
stat file
and see whether it reports "Size: 12345 Blocks: 0".
If this doesn't work then the problem is with ZFS, otherwise probably with samba or Windows.
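That check can be wrapped into a small script (a sketch; run it from inside the ZFS mountpoint):

```shell
#!/bin/sh
# Sketch of the sparse-file test: create a hole-only file of 12345 bytes
# and report its apparent size vs. allocated blocks.
f=sparsetest.$$
truncate -s 12345 "$f"
size=$(stat -c %s "$f")    # apparent size in bytes
blocks=$(stat -c %b "$f")  # 512-byte blocks actually allocated
rm -f "$f"
echo "size=$size blocks=$blocks"
if [ "$blocks" -eq 0 ]; then
    echo "file was stored sparsely"
else
    echo "filesystem allocated space for the hole"
fi
```

Note that ZFS may still account a block or so of metadata for an otherwise empty file, so a small nonzero count is not by itself proof of a problem.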
Offline
Interesting. So either switching to raidz2 means some setting on the new array has yet to be set that was set last time (though you would think that part would work out of the box), or possibly it's from previously switching from zfs-linux to zfs-linux-git (https://aur.archlinux.org/packages/zfs-linux-git/), but I think the problem was happening before the switch.
$ stat file
File: 'file'
Size: 12345 Blocks: 1 IO Block: 12800 regular file
Device: 33h/51d Inode: 20523 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1000/wolfdogg) Gid: ( 100/ users)
Access: 2016-12-10 01:53:55.925153055 -0800
Modify: 2016-12-10 01:54:52.454015852 -0800
Change: 2016-12-10 01:54:52.454015852 -0800
Birth: -
Note: the group "backup" is the group that needs permission on the directory. That wouldn't conflict with the "Access" Gid being "users", would it? I'm guessing that's just output for wolfdogg's primary group ("users"). It's actually ":backup" that carries the inherited permission chain for the backup process, to avoid problems when the parent dirs get an owner change during backup (involuntary or not).
$ ls -la | grep data
drwxrwxr-x 8 wolfdogg backup 19 Dec 10 01:53 data
This sounds to me more like a ZFS setting being off, or a driver issue, but I think this means it's not a driver issue but a config issue, right?
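As an aside on the group question above (my suggestion, not something from the thread): the setgid bit on the directory is what makes newly created files inherit the "backup" group rather than the creator's primary group:

```shell
# Hedged sketch: make new entries under ./data inherit the "backup" group.
# The directory and group names are taken from the thread; adjust as needed.
mkdir -p data
chgrp backup data 2>/dev/null || echo "group 'backup' not present on this box"
chmod g+s data    # setgid on a directory: children inherit its group
ls -ld data       # the group execute slot should show "s" instead of "x"
```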
Last edited by wolfdogg (2016-12-10 10:08:17)
Offline
I think your best bet is setting up some experimental ZFS on loop devices and checking whether it's a matter of ZFS driver version, RAIDZ or something else. It seems nobody here knows.
I mean,
for i in {1..6}; do truncate -s 1G disk$i ; losetup -f disk$i ; done
and you have 6 /dev/loop disks to play with.
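From there, a throwaway raidz2 pool can be assembled on those loop devices (a sketch; this needs root and the ZFS tools, and the /dev/loopN names are assumptions — check losetup -a for the real ones):

```shell
# Build a disposable raidz2 pool from the six loop devices, run the
# sparse-file test on it, then tear everything down.
zpool create testpool raidz2 /dev/loop0 /dev/loop1 /dev/loop2 \
                             /dev/loop3 /dev/loop4 /dev/loop5
truncate -s 12345 /testpool/file
stat /testpool/file
zpool destroy testpool
for i in {1..6}; do losetup -d /dev/loop$((i-1)); rm -f disk$i; done
```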
Offline
Thanks mich41, very helpful. I will try that, but first: do you think it might be easier for me to just switch out zfs-linux-git for zfs-linux, reimport the pool, and check that way? I guess ruling out the driver might be the best first bet, though, as you mentioned.
Offline
Sorry to necrobump, but I think the solution is to enable ACLs on ZFS. I had the same issue just now, so for future Google time-travellers, here it is: https://github.com/Microsoft/mssql-docker/issues/288
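For anyone landing here from a search, the linked fix amounts to enabling POSIX ACL support on the dataset backing the share (a sketch of configuration commands; "tank/share" is a placeholder dataset name, and these must run as root on a live pool):

```shell
# Enable POSIX ACLs on the ZFS dataset behind the Samba share, and store
# extended attributes in the inode for performance.
zfs set acltype=posixacl tank/share
zfs set xattr=sa tank/share
zfs get acltype,xattr tank/share   # verify the new values
```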
Offline