Yeah ... updating didn't fix anything.
Res Publica Non Dominetur
Laptop: Arch x86 | Thinkpad X220 | Core i5 2410-M | 8 GB DDR3 | Sandy Bridge
Desktop: Arch x86_64 | Custom | Core i7 920 | 6 GB DDR3 | GeForce 260 GTX
Offline
Fixed! Turns out I had an old cudart.dll.so that I had manually placed in a libs folder; the client was pulling from that instead of the latest one in /usr/lib32/wine
Offline
Hi all,
Is the current gpu client working with latest updates (pacman -Syu)?
Diesel1.
Last edited by diesel1 (2010-11-06 01:01:38)
Registered GNU/Linux user #140607.
Offline
diesel1 wrote:Hi all,
Is the current gpu client working with latest updates (pacman -Syu)?
Diesel1.
Well, I thought I was, but in fact I am not.
[21:45:27] Completed 100%
[21:45:27] Successful run
[21:45:27] DynamicWrapper: Finished Work Unit: sleep=10000
[21:45:37] Reserved 94912 bytes for xtc file; Cosm status=0
[21:45:37] Allocated 94912 bytes for xtc file
[21:45:37] - Reading up to 94912 from "work/wudata_03.xtc": Read 94912
[21:45:37] Read 94912 bytes from xtc file; available packet space=786335552
[21:45:37] xtc file hash check passed.
[21:45:37] Reserved 28296 28296 786335552 bytes for arc file=<work/wudata_03.trr> Cosm status=0
[21:45:37] Allocated 28296 bytes for arc file
[21:45:37] - Reading up to 28296 from "work/wudata_03.trr": Read 28296
[21:45:37] Read 28296 bytes from arc file; available packet space=786307256
[21:45:37] trr file hash check passed.
[21:45:37] Allocated 560 bytes for edr file
[21:45:37] Read bedfile
[21:45:37] edr file hash check passed.
[21:45:37] Allocated 31574 bytes for logfile
[21:45:37] Read logfile
[21:45:37] GuardedRun: success in DynamicWrapper
[21:45:37] GuardedRun: done
[21:45:37] Run: GuardedRun completed.
[21:45:41] + Opened results file
[21:45:41] - Writing 155854 bytes of core data to disk...
[21:45:41] Done: 155342 -> 132301 (compressed to 85.1 percent)
[21:45:41] ... Done.
[21:45:41] DeleteFrameFiles: successfully deleted file=work/wudata_03.ckp
[21:45:41] Shutting down core
[21:45:41]
[21:45:41] Folding@home Core Shutdown: FINISHED_UNIT
[21:45:45] CoreStatus = C0000005 (-1073741819)
[21:45:45] Client-core communications error: ERROR 0xc0000005
[21:45:45] This is a sign of more serious problems, shutting down.
This is a memory error. To quote the F@H guys:
This is a known Windows memory error, while running the v5.x GUI client with the GUI open while finishing and uploading a work unit. Workarounds include updating the video driver (doesn't always help), keeping the GUI closed near the end of a work unit, or switching to the console client and using a 3rd party utility to see the pretty pictures and monitor the client's progress.
It can also be caused by faulty memory or a bad memory controller, so you should consider both possibilities.
I ran the F@H GPU memtest and it came out clean, so this looks like the Windows memory error rather than bad hardware. This happened with GPU3, so I tried swapping out the .exe for the GPU2 console client. Same error.
I'm out of ideas at this point -- and I've exhausted my Google mojo as well.
Sadly, when you restart the client, it deletes your completed work unit, thus giving you 0 credit. So there's no point in even running the client if you get this error, until you resolve it.
[ EDIT: Looks like I'm not the only one using Arch and having this problem: http://www.overclockers.com/forums/show … p?t=657890 ]
Last edited by georgia_tech_swagger (2010-11-06 22:43:44)
Offline
It seems the FAH GPU wiki has been updated since I last looked it over.
Note that these instructions and wrappers require exactly version 3.0: no earlier or later version will work.
Perhaps that's the problem? I find it a bit hard to believe, but it's worth a shot I suppose.
Here's a PKGBUILD for CUDA 3.0. I skipped right over 3.0 and went to 3.1, since that was the latest version when I got around to fixing my packages on the AUR.
See if that fixes the problem?
Offline
whaevr wrote:It seems the FAH GPU wiki has been updated since I last looked it over.
Note that these instructions and wrappers require exactly version 3.0: no earlier or later version will work.
Perhaps that's the problem? I find it a bit hard to believe, but it's worth a shot I suppose.
Here's a PKGBUILD for CUDA 3.0. I skipped right over 3.0 and went to 3.1, since that was the latest version when I got around to fixing my packages on the AUR.
See if that fixes the problem?
Running a WU with CUDA 3.0 now. We'll see. For giggles, I first tried with CUDA 3.2, modifying the current AUR build (and commenting out some man-related commands that were invalid in the PKGBUILD). Same error.
Offline
Bam ... it's working. That fixed it.
Offline
Alright, thanks for testing that out!
Downgraded the AUR package accordingly.
Offline
Hi Guys,
I just started up again earlier this week and hope to continue folding for a while on my newish 6-core Phenom II. It is cold here now, so I don't mind my PC heating the apartment all day!
There's another motherboard and dual GPUs in my closet gathering dust, so maybe I'll dig that out and get it set up as well.
Offline
Is there anyone else out there having problems with a GTX 470? Here's the output I get (in order for it to do anything, I have to pass -forcegpu nvidia_fermi.)
Launch directory: Z:\opt\fah-gpu
Executable: Z:\opt\fah-gpu\Folding@home-Win32-GPU.exe
Arguments: -forcegpu nvidia_fermi
[11:39:19] - Ask before connecting: No
[11:39:19] - User name: wtchappell (Team 45032)
[11:39:19] - User ID: 66CBB2E214721AF8
[11:39:19] - Machine ID: 2
[11:39:19]
[11:39:19] Gpu species not recognized.
[11:39:19] Loaded queue successfully.
[11:39:19]
[11:39:19] + Processing work unit
[11:39:19] Core required: FahCore_15.exe
[11:39:19] Core found.
[11:39:19] Working on queue slot 01 [November 29 11:39:19 UTC]
[11:39:19] + Working ...
err:module:import_dll Library cudart32_30_14.dll (which is needed by L"Z:\\opt\\fah-gpu\\FahCore_15.exe") not found
err:module:import_dll Library cufft32_30_14.dll (which is needed by L"Z:\\opt\\fah-gpu\\FahCore_15.exe") not found
err:module:LdrInitializeThunk Main exe initialization for L"Z:\\opt\\fah-gpu\\FahCore_15.exe" failed, status c0000135
[11:39:23] CoreStatus = C0000135 (-1073741515)
[11:39:23] Client-core communications error: ERROR 0xc0000135
[11:39:23] This is a sign of more serious problems, shutting down.
Last edited by wtchappell (2010-11-29 11:40:31)
Offline
Hi wtchappell,
I am having trouble getting the GPU client running on a GTX 470 as well. I've tried using the AUR package as well as various versions found at http://www.stanford.edu/~friedrim/ installed via Wine.
Those DLLs, along with a few others, can be found in the zip file the AUR package downloads as its source file: http://www.stanford.edu/~friedrim/.Fold … XP-631.zip
You can try putting them in /opt/fah-gpu or /opt/fah-gpu/alpha
I haven't figured this out yet, but I have noticed the message "[11:39:19] Gpu species not recognized." I get this same message, so I'm wondering whether the 6.31 version of the GPU client supports the GTX 470 at all. I'll continue to tinker with this, but right now I don't have time to experiment further.
Offline
dmartins:
The supplied DLL files from Stanford will not work, as they are designed for Windows machines. If they did work, there would be no point in the nvcuda wrapper files that Shelnutt made.
And you're only getting this error when passing the fermi flag?
There's an option to use fermi instead of... never mind, it seems I never uploaded that version of my fah conf file :\ just a sec
edit:
I had this in there at some point; somehow it never made it to the AUR. Try using release 3, which I just uploaded: in the conf file at /etc/conf.d/foldingathome there's now an option to use either -forcegpu nvidia_g80 or -forcegpu nvidia_fermi
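For reference, the conf.d option described here might look something like this; the variable name (GPU_FLAGS) is my assumption, not necessarily what the package actually uses:

```shell
# Hypothetical fragment of /etc/conf.d/foldingathome after release 3.
# GPU_FLAGS is an illustrative name; check the file the package installs.
GPU_FLAGS="-forcegpu nvidia_fermi"    # Fermi boards, e.g. GTX 470
#GPU_FLAGS="-forcegpu nvidia_g80"     # older G80-class boards
```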
Last edited by whaevr (2010-11-29 23:24:45)
Offline
Ok, so it turns out I had a few minutes free after all..
Here's how I got the GPU client working on an Nvidia GTX 470 under Arch x86_64 (testing).
Install lib32-nvidia-utils from multilib (currently 260.19.21-1)
Install lib32-cuda-toolkit from aur (currently 3.0-1)
Install lib32-nvcuda from aur (currently 3.0-3)
Install wine from multilib (currently 1.3.8-1)
Download http://www.stanford.edu/~friedrim/Foldi … ay-641.msi
Install by running msiexec /i Folding@home-Win32-GPU-systray-641.msi
This installs to ~/.wine/drive_c/Program Files (x86)/Folding@home/Folding@home-gpu
Copy /usr/lib32/wine/cudart* and /usr/lib32/wine/cufft* to ~/.wine/drive_c/Program Files (x86)/Folding@home/Folding@home-gpu
In a console, change directory to ~/.wine/drive_c/Program Files (x86)/Folding@home/Folding@home-gpu
Run wine Folding@home.exe -forcegpu nvidia_fermi and begin folding!
I prefer not to have init scripts running my folding clients, so for now this will work for me. I'll provide feedback as to whether or not a work unit completes successfully. Right now I have the SMP CPU client running on all 6 cores plus the GPU client. The GPU client is completing a percentage point every 1.5 minutes. Wowza!
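The DLL copy step above can be sketched as follows; temp directories stand in for the real paths so the snippet can run anywhere:

```shell
# Demo of the DLL copy step. On a real install the source is
# /usr/lib32/wine and the destination is
# "$HOME/.wine/drive_c/Program Files (x86)/Folding@home/Folding@home-gpu".
src=$(mktemp -d)    # stands in for /usr/lib32/wine
dst=$(mktemp -d)    # stands in for the Folding@home-gpu directory
touch "$src/cudart.dll.so" "$src/cufft.dll.so"    # wrapper DLLs
cp "$src"/cudart* "$src"/cufft* "$dst"/
ls "$dst"
```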
Offline
Alright then glad to hear you got it working!
Offline
Well, the GPU client completed a work unit while I was out, so it seems to be working properly.
There are a couple of things I'm not sure about. Why do I have to copy or link the DLLs from /usr/lib32/wine to the Folding@home directory? What's the point of installing them to the lib32 directory if they aren't picked up automatically by Wine? This doesn't seem right to me, but I don't know enough about Wine.
Also, I'm seeing a big drop in the performance of the SMP client: each percentage point takes nearly twice as long, from 9 minutes to 17 minutes. I expect some slowdown, since the GPU client is using 50% of one core. The rest may be due to a "load imbalance":
Received the second INT/TERM signal, stopping at the next step
Average load imbalance: 31.1 %
Part of the total run time spent waiting due to load imbalance: 12.0 %
Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 % Y 0 %
NOTE: 12.0 % performance was lost due to load imbalance
in the domain decomposition.
Parallel run - timing based on wallclock.
NODE (s) Real (s) (%)
Time: 14417.809 14417.809 100.0
4h00:17
(Mnbf/s) (GFlops) (ns/day) (hour/ns)
Performance: 193.681 10.921 9.494 2.528
I'm going to try running the SMP client on 5 of the 6 cores and see if it works any better. Has anyone else experimented with running both the SMP and GPU clients, and with how best to balance them?
Offline
The GPU2 wrappers were picked up automatically by Wine when I placed them in the /usr/lib32/wine folder. The problem is that the GPU3 client, I think, expects one of the DLLs to be in the same directory as the client executable. For me, if I make a single symlink from an nvcuda.dll in the client's directory to /usr/lib32/wine/cudart.dll.so, it runs fine with what's placed in /usr/lib32/wine. That's why the AUR packages are installed the way they are.
All this being said, this is with core11; I do not have a Fermi board to test and use to my advantage. g80 works great with what I have set up in the AUR, because that's what I have to work with.
And for your performance issues with the SMP client, try running the automater script; it changes the SLEEPWAIT value of the wrappers depending on the percentage of the CPU the GPU client is using.
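The symlink trick described above, demonstrated against throwaway paths so it runs anywhere; on a real install the link would live in the client's directory (e.g. /opt/fah-gpu) and point at /usr/lib32/wine/cudart.dll.so:

```shell
# nvcuda.dll in the client's directory pointing at the Wine wrapper.
libdir=$(mktemp -d)    # stands in for /usr/lib32/wine
fahdir=$(mktemp -d)    # stands in for /opt/fah-gpu
touch "$libdir/cudart.dll.so"    # the wrapper the AUR package installs
ln -sf "$libdir/cudart.dll.so" "$fahdir/nvcuda.dll"
readlink "$fahdir/nvcuda.dll"    # shows the wrapper path
```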
Offline
Hi whaevr,
That makes sense about where it's looking for the DLLs. I will have a look at creating an AUR package for Fermi cards, as there do seem to be some differences.
Thanks for the link. I had not come across that page yet and it looks like it has some good information.
Cheers!
Offline
dmartins wrote:Ok, so it turns out I had a few minutes free after all..
Here's how I got the GPU client working on an Nvidia GTX 470 under Arch x86_64 (testing).
Install lib32-nvidia-utils from multilib (currently 260.19.21-1)
Install lib32-cuda-toolkit from aur (currently 3.0-1)
Install lib32-nvcuda from aur (currently 3.0-3)
Install wine from multilib (currently 1.3.8-1)
Download http://www.stanford.edu/~friedrim/Foldi … ay-641.msi
Install by running msiexec /i Folding@home-Win32-GPU-systray-641.msi
This installs to ~/.wine/drive_c/Program Files (x86)/Folding@home/Folding@home-gpu
Copy /usr/lib32/wine/cudart* and /usr/lib32/wine/cufft* to ~/.wine/drive_c/Program Files (x86)/Folding@home/Folding@home-gpu
In a console, change directory to ~/.wine/drive_c/Program Files (x86)/Folding@home/Folding@home-gpu
Run wine Folding@home.exe -forcegpu nvidia_fermi and begin folding!
I prefer not to have init scripts running my folding clients, so for now this will work for me. I'll provide feedback as to whether or not a work unit completes successfully. Right now I have the SMP CPU client running on all 6 cores plus the GPU client. The GPU client is completing a percentage point every 1.5 minutes. Wowza!
Is this on an up-to-date kernel, etc.?
Diesel1.
Offline
Is this on an up-to-date kernel, etc.?
Diesel1.
I'm running the x86_64 testing repo. Last updated on Nov 27. Kernel is 2.6.36-3.
Offline
diesel1 wrote:Is this on an up-to-date kernel, etc.?
Diesel1.
I'm running the x86_64 testing repo. Last updated on Nov 27. Kernel is 2.6.36-3.
Thanks for the info. I think I might update my system and then try the new AUR package!
Diesel1.
Offline
Hi. I'm trying to use Folding@home on my brand-new quad-core Core i5. It's utilizing one core to its fullest capacity but ignoring the other 3. I am running the SMP client and am even using the '-smp 4' command-line flag, but still no dice. While loading the core, I get the error: "[15:11:25] Work type 78 not eligible for variable processors". If it's a problem with the work unit type, how can I ensure that I get a type that is eligible for multiple processor cores?
Blog .:. AUR .:. Wiki Contributions
Registered Linux User #506070.
Offline
dmartins wrote:Hi Guys,
I just started up again earlier this week and hope to continue folding for a while on my newish 6-core Phenom II. It is cold here now, so I don't mind my PC heating the apartment all day!
There's another motherboard and dual GPUs in my closet gathering dust, so maybe I'll dig that out and get it set up as well.
A blast from the past, long time no see. Welcome back; it's good seeing the dmartins moniker in the top 20 producers again.
Pudge
Offline
Hey Pudge, thanks for the welcome! It's good to see you're still folding and still at the top of the heap! What are you folding on to get 34000 PPD? That's pretty impressive!
Offline
Julius2 wrote:Hi. I'm trying to use Folding@home on my brand-new quad-core Core i5. It's utilizing one core to its fullest capacity but ignoring the other 3. I am running the SMP client and am even using the '-smp 4' command-line flag, but still no dice. While loading the core, I get the error: "[15:11:25] Work type 78 not eligible for variable processors". If it's a problem with the work unit type, how can I ensure that I get a type that is eligible for multiple processor cores?
It sounds like maybe you started the client once without the -smp flag, and it downloaded a work unit for a single-core processor. You should be safe to stop the client and delete everything in your folding directory except for fah6, mpiexec and client.cfg. This should get rid of the "bad" core and work unit. Start the client back up with -smp and it should download the appropriate core.
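A sketch of that cleanup step, run here against a throwaway directory so it is safe to test; in practice you would run the find command from your real folding directory after stopping the client:

```shell
# Simulate a folding directory, then delete everything except the three
# files the post says to keep (fah6, mpiexec, client.cfg).
dir=$(mktemp -d)
cd "$dir"
touch fah6 mpiexec client.cfg queue.dat FAHlog.txt
mkdir work                                 # downloaded work unit lives here
find . -mindepth 1 -maxdepth 1 \
    ! -name fah6 ! -name mpiexec ! -name client.cfg \
    -exec rm -rf {} +
ls                                         # only fah6, mpiexec, client.cfg remain
```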
Offline
Julius2 wrote:Hi. I'm trying to use Folding@Home on my brand new quad-core Core i5. It's utilizing one core to its fullest capacity, but ignoring the other 3. I am running the smp client and am even using the '-smp 4' command-line flag, but still, no dice. While loading up the core, I get the error: "[15:11:25] Work type 78 not eligible for variable processors". If it's a problem with the work unit type, how can I ensure that I get a type that is eligible for using different processor cores?
It sounds like maybe you started the client once without the -smp flag, and it downloaded a work unit for a single-core processor. You should be safe to stop the client and delete everything in your folding directory except for fah6, mpiexec and client.cfg. This should get rid of the "bad" core and work unit. Start the client back up with -smp and it should download the appropriate core.
Yeah, that was probably what happened. I (unfortunately) assumed that, since I was downloading the SMP client, it would get work that could be used on multiple cores. Since the work unit is already 62% done, I'll just wait for it to finish and be careful to use the -smp flag in the future. Thanks.
Offline