I've read that to get the best performance out of my AGP nvidia card, I should NOT have agpgart loaded, but should instead have nvidia-agp loaded (see broch's post here: http://bbs.archlinux.org/viewtopic.php?id=66464).
edit: apparently this is wrong. I now believe that nvidia-agp (or intel-agp or *-agp) should be blacklisted, and agpgart enabled.
Blacklisting agpgart in rc.conf using "!agpgart" is not sufficient to stop this module from loading, so I decided to rebuild my kernel without that module altogether. However, I noticed that if I do not set agpgart as (m) or (*), I cannot build nvidia-agp (nvidia-agp seems to depend on agpgart).
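For reference, this is roughly what I tried (a sketch of the rc.conf syntax, not my full MODULES array), and how I checked whether the module still got loaded:
# /etc/rc.conf -- a bang prefix is supposed to stop a module from being loaded
MODULES=(... !agpgart ...)
# after a reboot, check whether it got pulled in anyway:
$ lsmod | grep -i agp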
http://ramikayyali.com/archives/2005/11/27/nvidia says that nvidia-agp does not work when agpgart is loaded. When I run cat /proc/driver/nvidia/agp/status, I get:
$ cat /proc/driver/nvidia/agp/status
Status: Disabled
AGP initialization failed, please check the output
of the 'dmesg' command and/or your system log file
for additional information on this problem.
I don't think that I got that error before I started messing with all of this. LOL!
lsmod gives me:
$ lsmod
Module Size Used by
nvidia 7217912 26
ipv6 254452 10
w83627hf 23376 0
hwmon_vid 2816 1 w83627hf
nfsd 218060 8
lockd 64968 1 nfsd
nfs_acl 2816 1 nfsd
auth_rpcgss 32992 1 nfsd
sunrpc 173404 8 nfsd,lockd,nfs_acl,auth_rpcgss
exportfs 4096 1 nfsd
ipt_LOG 5504 3
ipt_REJECT 2816 3
xt_recent 10144 7
xt_tcpudp 2752 1
nf_conntrack_ipv4 12876 1
nf_defrag_ipv4 1600 1 nf_conntrack_ipv4
xt_state 1856 1
nf_conntrack 55488 2 nf_conntrack_ipv4,xt_state
iptable_filter 2496 1
ip_tables 10512 1 iptable_filter
x_tables 12804 6 ipt_LOG,ipt_REJECT,xt_recent,xt_tcpudp,xt_state,ip_tables
reiserfs 228736 2
ext2 63944 1
joydev 9856 0
ppdev 7236 0
lp 9348 0
ppp_generic 22740 0
emu10k1_gp 2560 0
gameport 10312 2 emu10k1_gp
ohci1394 30256 0
ieee1394 80188 1 ohci1394
pcspkr 2304 0
sg 26160 0
shpchp 31572 0
pci_hotplug 25824 1 shpchp
usb_storage 94272 0
usblp 12416 0
usbhid 35296 0
hid 38208 1 usbhid
parport_pc 35844 1
parport 30984 3 ppdev,lp,parport_pc
i2c_nforce2 6468 0
i2c_core 20624 2 nvidia,i2c_nforce2
evdev 9312 6
thermal 15068 0
processor 34608 1 thermal
fan 4164 0
button 5776 0
battery 9988 0
ac 3908 0
nvidia_agp 6300 1
agpgart 29008 2 nvidia,nvidia_agp
fuse 51484 2
snd_emu10k1 139744 0
snd_rawmidi 20864 1 snd_emu10k1
snd_ac97_codec 100388 1 snd_emu10k1
snd_util_mem 3328 1 snd_emu10k1
snd_hwdep 6916 1 snd_emu10k1
snd_seq_oss 30336 0
snd_seq_midi_event 6464 1 snd_seq_oss
snd_seq 49264 4 snd_seq_oss,snd_seq_midi_event
snd_seq_device 6156 4 snd_emu10k1,snd_rawmidi,snd_seq_oss,snd_seq
snd_pcm_oss 38528 0
snd_pcm 70088 3 snd_emu10k1,snd_ac97_codec,snd_pcm_oss
snd_timer 20484 3 snd_emu10k1,snd_seq,snd_pcm
snd_page_alloc 8008 2 snd_emu10k1,snd_pcm
snd_mixer_oss 14656 1 snd_pcm_oss
snd 48676 11 snd_emu10k1,snd_rawmidi,snd_ac97_codec,snd_hwdep,snd_seq_oss,snd_seq,snd_seq_device,snd_pcm_oss,snd_pcm,snd_timer,snd_mixer_oss
soundcore 6048 1 snd
ac97_bus 1472 1 snd_ac97_codec
slhc 5696 1 ppp_generic
forcedeth 55568 0
rtc_cmos 11052 0
rtc_core 15768 1 rtc_cmos
rtc_lib 2496 1 rtc_core
sd_mod 24788 8
sr_mod 14500 0
cdrom 33568 1 sr_mod
ohci_hcd 24400 0
ehci_hcd 36172 0
usbcore 138224 6 usb_storage,usblp,usbhid,ohci_hcd,ehci_hcd
ext4 213532 2
mbcache 6656 2 ext2,ext4
jbd2 51480 1 ext4
crc16 1664 1 ext4
sata_nv 22344 2
ata_generic 4676 0
pata_amd 10180 4
pata_acpi 3904 0
libata 160736 4 sata_nv,ata_generic,pata_amd,pata_acpi
scsi_mod 104340 5 sg,usb_storage,sd_mod,sr_mod,libata
dmesg | grep -i agp gives:
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.AGPB._PRT]
Linux agpgart interface v0.103
agpgart: Detected NVIDIA nForce2 chipset
agpgart-nvidia 0000:00:00.0: AGP aperture is 512M @ 0xa0000000
NVRM: not using NVAGP, an AGPGART backend is loaded!
In spite of all of these issues, glxinfo says that direct rendering is enabled, and I can play games just fine (although possibly slower than they should run). glxgears gives me about 5600 fps, which I think is low for a GeForce 7600 GT 512 MB.
Is all of this about agpgart being unnecessary, and actually hobbling performance, completely wrong?
edit: OK, finally got this working, and it was incredibly easy. Once I realized that agpgart was not the enemy, and that it was actually nvidia-agp blocking the use of NvAGP, it SHOULD have been a quick fix; but for some reason my system doesn't like the latest nvidia driver.
This can be done in 3 steps:
1. Blacklist the backend AGP driver for your motherboard (nvidia-agp, intel-agp, or whichever matches your motherboard's chipset), as this is what interferes with NvAGP.
2. Add the following line to your xorg.conf within the section marked 'device':
Option "NvAgp" "1"
3. Reboot
That's all! (just like klixon said below)
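For anyone skimming, here's a minimal sketch of what those three steps boil down to on my nForce2 board; the file locations and options are the ones discussed later in this thread, so adjust the backend module name for your chipset:
# /etc/modprobe.conf (or a !nvidia_agp entry in rc.conf's MODULES array) --
# blacklist only the backend AGP driver; agpgart itself stays enabled
blacklist nvidia_agp
# /etc/X11/xorg.conf, inside the nvidia card's Device section
Option "NvAGP" "1"
# after a reboot, this should report Status: Enabled and Driver: NVIDIA
$ cat /proc/driver/nvidia/agp/status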
Last edited by Convergence (2009-03-17 10:39:35)
It's a very deadly weapon to know what you're doing
--- William Murderface
I found a thread on another forum trying to solve this issue. Their solution seemed to be to recompile the kernel without AGP support, but I've tried that!
http://www.ocforums.com/archive/index.php/t-489334.html
Maybe there is an older kernel version where nvidia-agp is not dependent on agpgart?
It's a very deadly weapon to know what you're doing
--- William Murderface
I've added the following line to /etc/modprobe.conf:
blacklist agpgart
to no avail.
To agpgart: You win this time, but victory shall soon be mine!!! AHaahahahahahah!!!
Last edited by Convergence (2009-03-11 14:07:41)
It's a very deadly weapon to know what you're doing
--- William Murderface
1)
...So, I decided to rebuild my kernel without that module altogether. However, I noticed that if I do not set agpgart as (m) or (*) I can not build nvidia-agp (nvidia-agp seems to depend on agpgart)
That has never happened to me; I don't know what you did.
2)
found a thread on another forum trying to solve this issue. Their solution seemed to be to recompile the kernel w/o agp support, but I've tried that!
As you mentioned in another post, you could not build the kernel without AGP, so I don't know what you tried.
3)
Maybe there is an older kernel version where nvidia-agp is not dependent on agpgart?
no
4) if you cite me, do it correctly:
agpgart.ko -- AGP support
Actually, this one is potentially a performance culprit when using the nvidia drivers; kernel AGP is really bad. If you have installed the nvidia binary driver, you don't need this one even with an AGP card. If you have a PCI/PCIe card, it is a waste.
Where did I say "agpgart: considered harmful"?
Every possible Linux benchmark ever done shows that the nvidia driver is faster. As you have admitted a few times, you can't even start or run NVIDIA AGP, so I don't really know what you are comparing. Not to mention that glxgears is not a test for anything except showing that 3D works.
slower does not mean harmful
I tried to rebuild the kernel w/o it, and I can't even do that!
However, I noticed that if I do not set agpgart as (m) or (*) I can not build nvidia-agp (nvidia-agp seems to depend on agpgart)
nvidia-agp is a Linux kernel module (the AGP backend for the nForce2/3 chipset); it is not the nvidia binary.
the only nvidia binary is located in /lib/modules/$(uname -r)/kernel/drivers/video/
and it is called nvidia.ko
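If you want to double-check which file is actually being picked up, something along these lines should show it (modinfo prints the filename among other fields):
$ modinfo nvidia | grep filename
# should point at /lib/modules/$(uname -r)/kernel/drivers/video/nvidia.ko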
Finally, you never posted the errors generated during the nvidia installer run (when installing for the kernel without the nvidia/agp module), so it is difficult to guess what you are doing wrong.
Last edited by broch (2009-03-11 14:29:02)
broch: Have I offended you? I didn't mean to. Let it never be said that broch said that agpgart was considered harmful. I didn't have you in mind when I titled this thread. So you should practice what you preach. If you cite me, you should do it correctly. Where did I say "broch says that agpgart: considered harmful"? If you want, I'll remove your name from the original post.
under "finally" you say something that makes me think that I have been working under a false assumption. Are you saying that I should remove any and all agp related modules from the kernel, and then install nvidia's driver? That somehow the nvidia driver fills the roll of agpgart, and a separate agp module is not needed? That would be awesome.
Last edited by Convergence (2009-03-11 21:55:33)
It's a very deadly weapon to know what you're doing
--- William Murderface
Recompiled my kernel with absolutely no support for AGP, installed nvidia (there were no errors) and rebooted. Although I hoped that everything would work magically, and nvidia's module would somehow do double duty and fill in the role of an AGP driver, it didn't. (Let it be known at this time that I'm not implying that anyone said that it would.) To be honest, I was pretty sure that it wouldn't work, but I thought that I'd try it anyway.
I would like to know if anyone else can duplicate my trouble in compiling nvidia-agp without also compiling agpgart. For lack of a better description, agpgart seems like a directory that can be turned on or off ('on' being either '*' or 'M' and off being self explanatory). If agpgart is turned on, then hitting enter (in menuconfig) enters that directory, and various agp related options become available eg: nvidia-agp, intel-agp etc. When turned off, all options within that 'directory' become unavailable. xconfig has nearly identical behavior.
It's a very deadly weapon to know what you're doing
--- William Murderface
Have you guys read the nvidia README?
On Linux 2.6, the agpgart.ko frontend module will always be loaded, as it is used by the NVIDIA kernel module to determine if an AGPGART backend module is loaded. When the NVIDIA AGP driver is to be used on a Linux 2.6 system, it is recommended that you make sure the AGPGART backend drivers are built as modules and that they are not loaded.
1. The nvidia kernel module uses the agpgart.ko module to determine if a backend driver is loaded. You only need to blacklist the backend driver (in my case, !intel_agp in the MODULES array in /etc/rc.conf).
2. Add this to your video card's Device section in /etc/X11/xorg.conf to tell the nvidia X driver to use nvidia's AGP support:
Option "NvAGP" "1"
all done
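To spell it out, a sketch of my own setup (assuming an Intel chipset as on my board; substitute whatever backend lspci points to for yours):
# /etc/rc.conf -- blacklist only the backend, not agpgart itself
MODULES=(... !intel_agp ...)
# /etc/X11/xorg.conf, in the Device section for the nvidia driver
Option "NvAGP" "1"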
Last edited by klixon (2009-03-12 13:15:15)
Stand back, intruder, or i'll blast you out of space! I am Klixon and I don't want any dealings with you human lifeforms. I'm a cyborg!
I would like to know if anyone else can duplicate my trouble in compiling nvidia-agp without also compiling agpgart. For lack of a better description, agpgart seems like a directory that can be turned on or off ('on' being either '*' or 'M' and off being self explanatory). If agpgart is turned on, then hitting enter (in menuconfig) enters that directory, and various agp related options become available eg: nvidia-agp, intel-agp etc. When turned off, all options within that 'directory' become unavailable. xconfig has nearly identical behavior.
Those options in the agpgart section of the config refer to your motherboard's chipset. They are the backend drivers I referred to in my previous post. You can run lspci to figure out which one you need, but if you want to use the AGP support provided by nvidia in their binary drivers, you deselect them all and leave the master agpgart switch itself enabled (either statically or as a module).
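In .config terms, this is roughly what I mean (a sketch; the symbol names are from my own kernel config, so check yours):
# the "master" agpgart frontend -- keep this enabled (y or m)
CONFIG_AGP=m
# the chipset backends -- deselect them all (nForce2/3 and Intel shown here)
# CONFIG_AGP_NVIDIA is not set
# CONFIG_AGP_INTEL is not set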
Stand back, intruder, or i'll blast you out of space! I am Klixon and I don't want any dealings with you human lifeforms. I'm a cyborg!
Figured it out!
I was rereading this: http://lj4newbies.blogspot.com/2007/06/ … y-agp.html and realized that this was all a misunderstanding. This is what I concluded:
A. The recommendation is that you disable nvidia-agp, not agpgart, as nvidia-agp.ko can interfere with nvidia.ko: "blacklist nvidia_agp".
B. Agpgart is actually required to have any kind of agp support, period. (again, correct me if I'm wrong) "On Linux 2.6, the agpgart.ko frontend module will always be loaded, as it is used by the NVIDIA kernel module to determine if an AGPGART backend module is loaded."
C. The old recommendation was that you disable agpgart altogether. "Some old tutorials will advice you try to disable agpgart[for 2.4 kernel] but not for 2.6."
Now recompiling. I will compare the output of glxgears (yes, I am WELL aware that glxgears is not a benchmark, and have been for some time! But I also know that it is commonly used to find obvious problems with 3D rendering, and hopefully a comparison might be useful).
Changing the title of this thread to reflect that it has been solved. (unless the guy that wrote that post was wrong of course)
It's a very deadly weapon to know what you're doing
--- William Murderface
klixon wrote:
Those options in the agpgart section of the config refer to your motherboard's chipset. They are the backend drivers I referred to in my previous post. You can run lspci to figure out which one you need, but if you want to use the AGP support provided by nvidia in their binary drivers, you deselect them all and leave the master agpgart switch itself enabled (either statically or as a module).
OH! I missed your post somehow! Had I read it, it probably would have saved me some trouble. It helps that you are confirming my last epiphany. However... in the middle of writing this I realized that:
OK. Once again disappointment has struck. After the euphoria of "figuring it all out", I realize that I'm STILL doing something wrong. I have support for agpgart, without support for any of the "backends", which in my case would be nvidia (nForce2 chipset). Anyway, I booted into this system and:
lsmod | grep -i agp
agpgart 29008 1 nvidia
and
glxinfo | grep -i direct
Xlib: extension "Generic Event Extension" missing on display ":0.0".
Xlib: extension "Generic Event Extension" missing on display ":0.0".
direct rendering: Yes
GL_EXT_direct_state_access, GL_EXT_draw_range_elements, GL_EXT_fog_coord,
and
cat /proc/driver/nvidia/agp/status
Status: Enabled
Driver: NVIDIA
AGP Rate: 8x
Fast Writes: Disabled
SBA: Enabled
which is what I was hoping for. However, glxgears renders about 2 fps. No, glxgears is not a benchmark, but there is clearly a problem here. I tried Nexuiz, and it was rendering so slowly that I had to switch to vt1 to kill the process.
I'm starting to think that there is simply something wrong with my particular hardware, or something that requires that I load nvidia-agp. I'd really hate to give up on this, however. I know that in the grand scheme of things it isn't really important, but I've put so much time into it already that I'd feel like it was a massive waste.
Could a BIOS setting be interfering?
Oh well, re-changing the title of this thread to show that it has NOT been resolved after all.
It's a very deadly weapon to know what you're doing
--- William Murderface
Thought that I should post an excerpt from my xorg.conf:
Section "Device"
Identifier "Device0"
Option "NvAGP" "1" # Tries internal nVidia AGP drivers first
Option "RenderAccel" "true" # Duh :)
Option "AllowGLXWithComposite" "true" # Mostly used for cool effects
Driver "nvidia"
VendorName "NVIDIA Corporation"
EndSection
and just in case:
# modinfo nvidia
filename: /lib/modules/2.6.28.7-ARCH/kernel/drivers/video/nvidia.ko
license: NVIDIA
alias: char-major-195-*
alias: pci:v000010DEd*sv*sd*bc03sc02i00*
alias: pci:v000010DEd*sv*sd*bc03sc00i00*
depends: agpgart,i2c-core
vermagic: 2.6.28.7-ARCH preempt mod_unload K7
parm: NVreg_EnableVia4x:int
parm: NVreg_EnableALiAGP:int
parm: NVreg_ReqAGPRate:int
parm: NVreg_EnableAGPSBA:int
parm: NVreg_EnableAGPFW:int
parm: NVreg_Mobile:int
parm: NVreg_ResmanDebugLevel:int
parm: NVreg_RmLogonRC:int
parm: NVreg_ModifyDeviceFiles:int
parm: NVreg_DeviceFileUID:int
parm: NVreg_DeviceFileGID:int
parm: NVreg_DeviceFileMode:int
parm: NVreg_RemapLimit:int
parm: NVreg_UpdateMemoryTypes:int
parm: NVreg_UseVBios:int
parm: NVreg_RMEdgeIntrCheck:int
parm: NVreg_UsePageAttributeTable:int
parm: NVreg_EnableMSI:int
parm: NVreg_MapRegistersEarly:int
parm: NVreg_RegistryDwords:charp
parm: NVreg_NvAGP:int
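Side note: judging by the parameter list above, the same preference can apparently also be set as a module option rather than (or as well as) in xorg.conf; something like the following, though I haven't tested it myself:
# /etc/modprobe.conf -- 1 should select NVIDIA's internal AGP, going by the driver README
options nvidia NVreg_NvAGP=1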
Last edited by Convergence (2009-03-13 11:22:15)
It's a very deadly weapon to know what you're doing
--- William Murderface
umm... this has taken a new twist. Somehow even my trusty old fallback kernel lost 3D rendering (just like in my custom kernel, all systems say go, direct rendering = yes, etc., but I get 3 fps in glxgears and games are not playable), even though I have reverted all configs back to their original state. So that means that I borked my system somehow. I've used every method I can think of to restore things to their original settings before I started this project, and still... no acceleration. Think I'll reinstall and try again!
edit: I've now reinstalled, and still can't get more than 3 fps in any OpenGL application.
Last edited by Convergence (2009-03-15 14:10:02)
It's a very deadly weapon to know what you're doing
--- William Murderface
OK. I was finally successful. Turns out my system doesn't like the latest nvidia pkg. All that time of fiddling!
Will edit the opening post and the title.
broch and klixon: thanks for the help!
Last edited by Convergence (2009-03-17 00:17:27)
It's a very deadly weapon to know what you're doing
--- William Murderface
Hi!
I'm trying to do this. Here are my steps:
requirements:
cat /proc/driver/nvidia/agp/host-bridge
Host Bridge: PCI device 1106:0314
Fast Writes: Supported
SBA: Supported
AGP Rates: 8x 4x
Registers: 0x07000a1b:0x00000000
1) enabled fast writes and AGP as primary in the BIOS (no options related to SBA)
2) added !via_agp to the MODULES array in rc.conf
3) added Option "NvAGP" "1" to xorg.conf
4) added options nvidia NVreg_EnableAGPSBA=1 NVreg_EnableAGPFW=1 to modprobe.conf
and the result is:
cat /proc/driver/nvidia/agp/status
Status: Disabled
AGP initialization failed, please check the output
of the 'dmesg' command and/or your system log file
for additional information on this problem.
What did I forget? I can't tell whether I should add the nvidia-agp module to rc.conf and/or recompile the kernel.
Convergence:
I have been following this thread to try and see if I could get nvagp working as well. Alas, my attempts have never met with success. Starting X with NvAGP set to 1 would just hang my box. I've been content just to use agpgart until I came across this today on the nV News Forums:
180.60 for Linux x86/x86-64 released
Release Highlights:
* Fixed VGA console restoration on some laptop GPUs.
* Fixed a bug that caused kernel crashes when attempting to initialize NvAGP on Linux/x86-64 kernels built with the CONFIG_GART_IOMMU kernel option.
* Fixed a bug that caused some performance levels to be disabled on certain GeForce 9 series notebooks.
* Fixed crashes in Bibble 5.
Since zcat /proc/config.gz | grep CONFIG_GART_IOMMU yields:
CONFIG_GART_IOMMU=y
I am hoping that once this version makes it to extra I will be able to use nvagp.
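In the meantime, these are the obvious ways I'm keeping track of what's currently installed and loaded; nothing clever:
$ pacman -Q nvidia
$ cat /proc/driver/nvidia/version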
Just thought I'd bump this thread with info if anyone else was interested.
cyclic
Thank you for this thread; I was pulling my hair out trying to figure out how the hell to get the internal GART working on my old comp. <3
btw
http://lj4newbies.blogspot.com/2007/06/ … y-agp.html
great link wooo! ^^
btw2
190.16 with that guide is smooth as silk, no lockups like before.
Bookmarked this thread in 'Essential Arch' folder.
Last edited by gav616 (2009-07-22 01:21:38)