
#1 2018-04-08 11:07:16

stefan230
Member
Registered: 2018-04-08
Posts: 11

Building and installing AMDGPU-PRO 17.50

Hello,

Since the AUR package of AMDGPU-PRO has not gotten an update in a while, I thought: "well, why not try building it myself?" (the original package can be found here: https://aur.archlinux.org/packages/amdgpu-pro/).
I basically took the PKGBUILD of the 17.40 version and modified it to fit the packages found in the 17.50 AMD driver release. (You can look at my modified PKGBUILD here: https://hastebin.com/uzolahazaq.bash)

The build process went fine after editing the PKGBUILD to match the contents of the 17.50 release.
Then, when installing the created packages after resolving all the dependencies, I am still left with conflicting packages (it basically comes down to lib32-libglvnd/libglvnd not being removable, since lib32-mesa/mesa depend on them: https://hastebin.com/ifotevigaf.coffeescript).

So I removed mesa and lib32-mesa temporarily to see what happens.
After removing the packages I went through the dependencies again, only to be greeted by this: https://hastebin.com/hazutasehi.rb (note that "existiert in" translates to "exists in"). So it seems some files are duplicated between lib32-amdgpu-pro and amdgpu-pro?

So I'm looking for your help in resolving these issues, to see if I can get amdgpu-pro installed on my Arch machine.

Lastly, here is some data on my machine: https://hastebin.com/zosayixaqi.coffeescript (the CPU is an AMD R5 2400G and the GPU is an RX480 8GB, on the latest linux-git kernel, 4.16.0-rc7).

Offline

#2 2018-04-08 13:47:06

Lone_Wolf
Member
From: Netherlands, Europe
Registered: 2005-10-04
Posts: 11,868

Re: Building and installing AMDGPU-PRO 17.50

Have the lib32-* packages depend on their x86_64 counterparts; then you can remove the include and man files.

The same should be true for the bin files.
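
In the lib32 package function that could look something like this (a rough, untested sketch; adjust the paths to whatever the package actually ships):

package_lib32-amdgpu-pro() {
    # depend on the x86_64 package instead of shipping its files again
    depends=('amdgpu-pro')

    # ... install the 32-bit libraries into "$pkgdir" as before ...

    # then remove everything the x86_64 package already provides
    rm -rf "$pkgdir"/usr/include "$pkgdir"/usr/share/man "$pkgdir"/usr/bin
}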

From the looks of it this package doesn't build stuff but repackages binary files.

I do wonder why there's an extract_deb function; makepkg is able to extract *.deb archives.
All you need to do then is extract the data.tar.xz to the correct locations.
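
Something like this should be all that's needed (untested sketch):

package() {
    # makepkg has already unpacked the .deb from source=() into $srcdir,
    # leaving data.tar.xz next to control.tar.gz and debian-binary
    bsdtar -xf "$srcdir/data.tar.xz" -C "$pkgdir"
}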

As for libglvnd: AFAIK amdgpu-pro doesn't support it. If you do need mesa functionality, you'll have to build your own mesa without glvnd support.

EDIT: I'd use one package for x86_64 and another for lib32.
That allows users to choose whether they need the lib32 stuff.

Last edited by Lone_Wolf (2018-04-08 13:48:24)


Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.


(A works at time B)  && (time C > time B ) ≠  (A works at time C)

Offline

#3 2018-04-08 17:49:10

stefan230
Member
Registered: 2018-04-08
Posts: 11

Re: Building and installing AMDGPU-PRO 17.50

Lone_Wolf wrote:

Have the lib32-* packages depend on their x86_64 counterparts; then you can remove the include and man files.

May I get an example of how to achieve this? I'm pretty new to the whole "make your own package" thing.

From the looks of it this package doesn't build stuff but repackages binary files.

Now that you say it, that might actually be true, since there was no real building involved. Sure, why build it anew when there is already a package for Ubuntu; maybe that was the intention behind it. Since it's not really my PKGBUILD, I can't tell you.
But you may be right.

I do wonder why there's an extract_deb function; makepkg is able to extract *.deb archives.
All you need to do then is extract the data.tar.xz to the correct locations.

Again, it's basically not my build file. So what do you think, could I achieve the same thing with less code?

EDIT: I'd use one package for x86_64 and another for lib32.
That allows users to choose whether they need the lib32 stuff.

That also sounds like a good idea. I will look into it once I actually get it to install.

Offline

#4 2018-04-10 10:10:19

Lone_Wolf
Member
From: Netherlands, Europe
Registered: 2005-10-04
Posts: 11,868

Re: Building and installing AMDGPU-PRO 17.50

This is the first time I have looked thoroughly at the amdgpu-pro PKGBUILDs.
It does seem like they were originally written for an older pacman with fewer capabilities, and maintainers have since only focused on keeping them working.

Getting amdgpu-pro working on Arch has often been described as a PITA, with weird errors and long delays between updates to the package.
A refactoring of the code would be a good idea, I think.

In my experience, many of the best AUR maintainers are those who have an interest in using the package.

stefan230, are you interested enough in using amdgpu-pro to consider maintaining it?
If so, I expect there are enough people around who are willing to help.

Start by changing the title of this thread (or create a new one).
The refactoring should start with removing the lib32-* parts and a name change, probably to amdgpu-pro-bin.
Making this a pure x86_64 PKGBUILD will simplify things, and the name change will help avoid conflicts and confusion with the existing package.

(Also, if you have a well-written x86_64 PKGBUILD, making its lib32 counterpart tends to be easy, mainly requiring some different compiler flags.)
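
The top of the refactored PKGBUILD could then look roughly like this (only a sketch; the 511655 build number is taken from the 17.50 file names):

pkgname=amdgpu-pro-bin
pkgver=17.50_511655
pkgrel=1
arch=('x86_64')
# let it stand in for, and replace, the current AUR package
provides=('amdgpu-pro')
conflicts=('amdgpu-pro')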

Edits: typos

Last edited by Lone_Wolf (2018-04-10 10:27:15)


Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.


(A works at time B)  && (time C > time B ) ≠  (A works at time C)

Offline

#5 2018-04-10 17:31:03

stefan230
Member
Registered: 2018-04-08
Posts: 11

Re: Building and installing AMDGPU-PRO 17.50

Hey,
Thanks again for your time. Actually, I can see where you're coming from with amdgpu-pro being a bit of a "PITA". I remember back when 17.10 was released you had to downgrade the kernel and the X server.
The other problem is that AMD is really not that reliable when it comes to updating its Linux drivers. The latest version for Linux is 17.50, which was released back in December 2017; that is really old when you think about the rolling-release model Arch uses. The amdgpu-pro drivers aren't really updated that often, since AMD originally publishes them for Ubuntu, which relies more on stable packages and seldom updates the kernel and X server. So it's fine for that, but for Arch it doesn't seem right.

The other thing is that I can't tell if there is really any benefit to the proprietary driver compared to the full open-source driver stack. I was researching the topic a bit on Phoronix, especially Vulkan performance. It seems like the open-source driver is mostly the better choice, but I have to do more research on the topic to decide whether it's worth my time, your time, and everyone else's to get a package working that isn't even better than the open-source driver.

I have to do some research on the topic. The package would only be of interest to me if it provides more performance than the open-source one.

May I still ask you what "maintaining a package" would involve, and how much time I should set aside for it?

Offline

#6 2018-04-11 11:55:07

Lone_Wolf
Member
From: Netherlands, Europe
Registered: 2005-10-04
Posts: 11,868

Re: Building and installing AMDGPU-PRO 17.50

The main difference between the open source driver and amdgpu-pro atm is OpenCL support.
The OpenCL support in mesa only covers OpenCL 1.1, while amdgpu-pro provides OpenCL 2 or later.
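
If you want to check what your installed stack actually reports, clinfo from the repos will show it, for example:

# list every line mentioning OpenCL, including platform and device versions
clinfo | grep -i 'opencl'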

GPUOpen / ROCm is AMD's open source solution for OpenCL, but it relies on not-yet-upstreamed kernel and llvm/clang changes.
It does look like linux 4.17 and llvm 6 will allow Vega cards to use ROCm for OpenCL, but older (GCN) cards will have to wait until kernel 4.18.
Late 2018 or early 2019 seems a realistic time for decent (maybe even great) AMD open source OpenCL support.

As for maintaining an AUR package:

Update when new versions come out
Try to solve build problems (how far you go with that varies between maintainers)
Respond to user questions on the AUR page

Time needed varies from a few minutes per week to 1 or 2 hours if there is a new version.
Build problems can take many hours.


Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.


(A works at time B)  && (time C > time B ) ≠  (A works at time C)

Offline

#7 2018-04-11 12:08:07

progandy
Member
Registered: 2012-05-17
Posts: 5,184

Re: Building and installing AMDGPU-PRO 17.50

Most likely you'll have to use mesa-noglvnd from the AUR if you need amdgpu-pro.


| alias CUTF='LANG=en_XX.UTF-8@POSIX ' |

Offline

#8 2018-04-11 17:01:24

stefan230
Member
Registered: 2018-04-08
Posts: 11

Re: Building and installing AMDGPU-PRO 17.50

Again, thanks for your reply, @Lone_Wolf. The thing is, I've done some research too. As of now, I don't really have a use for the amdgpu-pro driver. According to Phoronix (tested back in December when the driver was new), the open-source driver was only slightly worse in some games. The only thing that would make me consider using it is if it performs better for emulators, but that I may need to test myself. So first off, thank you very much for all the info you gave. I will have to test and see if it's worth building and maintaining the closed-source drivers. When I have made up my mind about maintaining it, I will just post another thread and ask for help to modernize the PKGBUILD.

Little edit: since I don't really use OpenCL, that wouldn't be a feature I really need when considering maintaining the packages.

Last edited by stefan230 (2018-04-11 17:02:31)

Offline

#9 2018-04-11 18:55:00

stefan230
Member
Registered: 2018-04-08
Posts: 11

Re: Building and installing AMDGPU-PRO 17.50

May I just take a bit more of your time? On progandy's tip I installed the correct mesa drivers and the install went flawlessly, but the DKMS module won't build. So maybe someone can give a hint about what's going wrong when the module is supposed to be built? I grabbed a log of the process for you here: https://hastebin.com/jijipokaqe.cs
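
In case it helps to reproduce it, the module can also be rebuilt by hand (module name and version are my guess from the /var/lib/dkms paths in the log):

# rebuild the DKMS module against the currently running kernel
sudo dkms build amdgpu-17.50/511655 -k "$(uname -r)"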

Offline

#10 2018-04-11 21:05:23

loqs
Member
Registered: 2014-03-06
Posts: 17,195

Re: Building and installing AMDGPU-PRO 17.50

You are trying to compile it for a 4.16 kernel. What compatibility changes have you made to try and facilitate that?

Online

#11 2018-04-12 05:20:21

stefan230
Member
Registered: 2018-04-08
Posts: 11

Re: Building and installing AMDGPU-PRO 17.50

I also tried the linux kernel from the repos; that one should be 4.15.15-1.
I haven't made any changes whatsoever, since I didn't think of that being an issue. So where can I start reading/learning what I need to know?

Offline

#12 2018-04-12 07:33:31

loqs
Member
Registered: 2014-03-06
Posts: 17,195

Re: Building and installing AMDGPU-PRO 17.50

  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdkcl/kcl_drm.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdkcl/main.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_drv.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdkfd/kfd_module.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdkcl/symbols.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_device.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/ttm/ttm_memory.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdkcl/kcl_fence.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/ttm/ttm_tt.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_kms.o
In file included from /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdkfd/kfd_module.c:27:0:
/var/lib/dkms/amdgpu-17.50/511655/build/amd/amdkfd/kfd_priv.h:698:22: error: field ‘mmu_notifier’ has incomplete type
  struct mmu_notifier mmu_notifier;
                      ^~~~~~~~~~~~
make[2]: *** [scripts/Makefile.build:324: /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdkfd/kfd_module.o] Error 1
make[1]: *** [scripts/Makefile.build:583: /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdkfd] Error 2
make[1]: *** Waiting for unfinished jobs....
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_atombios.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdkcl/kcl_fence_array.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/ttm/ttm_bo.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdkcl/kcl_kthread.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/ttm/ttm_bo_util.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/atombios_crtc.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdkcl/kcl_io.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdkcl/kcl_mn.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdkcl/kcl_reservation.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_connectors.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdkcl/kcl_drm_global.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdkcl/kcl_bitmap.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/atom.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/ttm/ttm_bo_vm.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdkcl/kcl_pci.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_fence.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_ttm.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/ttm/ttm_module.o
/var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_connectors.c: In function ‘amdgpu_connector_ddc_get_modes’:
/var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_connectors.c:377:3: error: implicit declaration of function ‘drm_edid_to_eld’; did you mean ‘drm_edid_to_sad’? [-Werror=implicit-function-declaration]
   drm_edid_to_eld(connector, amdgpu_connector->edid);
   ^~~~~~~~~~~~~~~
   drm_edid_to_sad
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_object.o
  LD [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdkcl/amdkcl.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_gart.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/ttm/ttm_object.o
/var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_fence.c: In function ‘amdgpu_fence_wait_empty’:
/var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_fence.c:263:17: error: implicit declaration of function ‘ACCESS_ONCE’; did you mean ‘__READ_ONCE’? [-Werror=implicit-function-declaration]
  uint64_t seq = ACCESS_ONCE(ring->fence_drv.sync_seq);
                 ^~~~~~~~~~~
                 __READ_ONCE
/var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_fence.c: In function ‘amdgpu_fence_driver_init_ring’:
/var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_fence.c:375:2: error: implicit declaration of function ‘setup_timer’; did you mean ‘setup_irq’? [-Werror=implicit-function-declaration]
  setup_timer(&ring->fence_drv.fallback_timer, amdgpu_fence_fallback,
  ^~~~~~~~~~~
  setup_irq
/var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_ttm.c: In function ‘amdgpu_ttm_tt_get_user_pages’:
/var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_ttm.c:741:2: error: too many arguments to function ‘release_pages’
  release_pages(pages, pinned, 0);
  ^~~~~~~~~~~~~
In file included from ./arch/x86/include/asm/pgalloc.h:7:0,
                 from ./include/drm/drmP.h:62,
                 from /var/lib/dkms/amdgpu-17.50/511655/build/include/kcl/kcl_drm.h:6,
                 from /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/../include/../backport/backport.h:9,
                 from <command-line>:0:
./include/linux/pagemap.h:121:6: note: declared here
 void release_pages(struct page **pages, int nr);
      ^~~~~~~~~~~~~
cc1: some warnings being treated as errors
make[2]: *** [scripts/Makefile.build:324: /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_connectors.o] Error 1
make[2]: *** Waiting for unfinished jobs....
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/ttm/ttm_lock.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/ttm/ttm_execbuf_util.o
make[2]: *** [scripts/Makefile.build:324: /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_ttm.o] Error 1
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/ttm/ttm_page_alloc.o
cc1: some warnings being treated as errors
make[2]: *** [scripts/Makefile.build:324: /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu/amdgpu_fence.o] Error 1
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/ttm/ttm_bo_manager.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/ttm/ttm_page_alloc_dma.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/ttm/ttm_debug.o
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/ttm/ttm_tracepoints.o
make[1]: *** [scripts/Makefile.build:583: /var/lib/dkms/amdgpu-17.50/511655/build/amd/amdgpu] Error 2
  CC [M]  /var/lib/dkms/amdgpu-17.50/511655/build/ttm/ttm_agp_backend.o
  LD [M]  /var/lib/dkms/amdgpu-17.50/511655/build/ttm/amdttm.o
make: *** [Makefile:1561: _module_/var/lib/dkms/amdgpu-17.50/511655/build] Error 2

Ignoring the first error, which is from the kernel config I am using, I fixed most of the initial batch of errors only to encounter some new ones.
Edit:

diff --git a/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_connectors.c b/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_connectors.c
index c43c8e9..725b617 100644
--- a/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_connectors.c
+++ b/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_connectors.c
@@ -239,7 +239,11 @@ amdgpu_connector_update_scratch_regs(struct drm_connector *connector,
 		if (connector->encoder_ids[i] == 0)
 			break;
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 		encoder = drm_encoder_find(connector->dev,
+#else
+		encoder = drm_encoder_find(connector->dev, NULL,
+#endif
 					connector->encoder_ids[i]);
 		if (!encoder)
 			continue;
@@ -264,7 +268,11 @@ amdgpu_connector_find_encoder(struct drm_connector *connector,
 	for (i = 0; i < DRM_CONNECTOR_MAX_ENCODER; i++) {
 		if (connector->encoder_ids[i] == 0)
 			break;
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 		encoder = drm_encoder_find(connector->dev,
+#else
+		encoder = drm_encoder_find(connector->dev, NULL,
+#endif
 					connector->encoder_ids[i]);
 		if (!encoder)
 			continue;
@@ -366,7 +374,9 @@ static int amdgpu_connector_ddc_get_modes(struct drm_connector *connector)
 	if (amdgpu_connector->edid) {
 		drm_mode_connector_update_edid_property(connector, amdgpu_connector->edid);
 		ret = drm_add_edid_modes(connector, amdgpu_connector->edid);
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 16, 0)
 		drm_edid_to_eld(connector, amdgpu_connector->edid);
+#endif
 		return ret;
 	}
 	drm_mode_connector_update_edid_property(connector, NULL);
@@ -380,7 +390,11 @@ amdgpu_connector_best_single_encoder(struct drm_connector *connector)
 
 	/* pick the encoder ids */
 	if (enc_id)
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 		return drm_encoder_find(connector->dev, enc_id);
+#else
+		return drm_encoder_find(connector->dev, NULL, enc_id);
+#endif
 	return NULL;
 }
 
@@ -1091,7 +1105,11 @@ amdgpu_connector_dvi_detect(struct drm_connector *connector, bool force)
 			if (connector->encoder_ids[i] == 0)
 				break;
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 			encoder = drm_encoder_find(connector->dev, connector->encoder_ids[i]);
+#else
+			encoder = drm_encoder_find(connector->dev, NULL, connector->encoder_ids[i]);
+#endif
 			if (!encoder)
 				continue;
 
@@ -1148,7 +1166,11 @@ amdgpu_connector_dvi_encoder(struct drm_connector *connector)
 		if (connector->encoder_ids[i] == 0)
 			break;
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 		encoder = drm_encoder_find(connector->dev, connector->encoder_ids[i]);
+#else
+		encoder = drm_encoder_find(connector->dev, NULL, connector->encoder_ids[i]);
+#endif
 		if (!encoder)
 			continue;
 
@@ -1167,7 +1189,11 @@ amdgpu_connector_dvi_encoder(struct drm_connector *connector)
 	/* then check use digitial */
 	/* pick the first one */
 	if (enc_id)
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 		return drm_encoder_find(connector->dev, enc_id);
+#else
+		return drm_encoder_find(connector->dev, NULL, enc_id);
+#endif
 	return NULL;
 }
 
@@ -1310,7 +1336,11 @@ u16 amdgpu_connector_encoder_get_dp_bridge_encoder_id(struct drm_connector *conn
 		if (connector->encoder_ids[i] == 0)
 			break;
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 		encoder = drm_encoder_find(connector->dev,
+#else
+		encoder = drm_encoder_find(connector->dev, NULL,
+#endif
 					connector->encoder_ids[i]);
 		if (!encoder)
 			continue;
@@ -1339,7 +1369,11 @@ static bool amdgpu_connector_encoder_is_hbr2(struct drm_connector *connector)
 	for (i = 0; i < DRM_CONNECTOR_MAX_ENCODER; i++) {
 		if (connector->encoder_ids[i] == 0)
 			break;
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 		encoder = drm_encoder_find(connector->dev,
+#else
+		encoder = drm_encoder_find(connector->dev, NULL,
+#endif
 					connector->encoder_ids[i]);
 		if (!encoder)
 			continue;
diff --git a/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_cs.c b/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_cs.c
index 046ffbc..a04e312 100644
--- a/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_cs.c
+++ b/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_cs.c
@@ -572,8 +572,12 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 				 * invalidated it. Free it and try again
 				 */
 				release_pages(e->user_pages,
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 					      bo->tbo.ttm->num_pages,
 					      false);
+#else
+					      bo->tbo.ttm->num_pages);
+#endif
 #if LINUX_VERSION_CODE < KERNEL_VERSION(4, 12, 0)
 				drm_free_large(e->user_pages);
 #else
@@ -717,8 +721,12 @@ error_free_pages:
 				continue;
 
 			release_pages(e->user_pages,
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 				      e->robj->tbo.ttm->num_pages,
 				      false);
+#else
+				      e->robj->tbo.ttm->num_pages);
+#endif
 #if LINUX_VERSION_CODE < KERNEL_VERSION(4, 12, 0)
 			drm_free_large(e->user_pages);
 #else
diff --git a/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_drv.c b/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_drv.c
index e55a293..64f9090 100644
--- a/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_drv.c
+++ b/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_drv.c
@@ -826,7 +826,9 @@ static struct drm_driver kms_driver = {
 	.open = amdgpu_driver_open_kms,
 	.postclose = amdgpu_driver_postclose_kms,
 	.lastclose = amdgpu_driver_lastclose_kms,
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 14, 0)
 	.set_busid = drm_pci_set_busid,
+#endif
 	.unload = amdgpu_driver_unload_kms,
 	.get_vblank_counter = kcl_amdgpu_get_vblank_counter_kms,
 	.enable_vblank = kcl_amdgpu_enable_vblank_kms,
diff --git a/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_fb.c b/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_fb.c
index b81ba88..0635b2f 100644
--- a/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_fb.c
+++ b/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_fb.c
@@ -334,6 +334,7 @@ static int amdgpu_fbdev_destroy(struct drm_device *dev, struct amdgpu_fbdev *rfb
 	return 0;
 }
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 14, 0)
 /** Sets the color ramps on behalf of fbcon */
 static void amdgpu_crtc_fb_gamma_set(struct drm_crtc *crtc, u16 red, u16 green,
 				      u16 blue, int regno)
@@ -355,10 +356,13 @@ static void amdgpu_crtc_fb_gamma_get(struct drm_crtc *crtc, u16 *red, u16 *green
 	*green = amdgpu_crtc->lut_g[regno] << 6;
 	*blue = amdgpu_crtc->lut_b[regno] << 6;
 }
+#endif
 
 static const struct drm_fb_helper_funcs amdgpu_fb_helper_funcs = {
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 14, 0)
 	.gamma_set = amdgpu_crtc_fb_gamma_set,
 	.gamma_get = amdgpu_crtc_fb_gamma_get,
+#endif
 	.fb_probe = amdgpufb_create,
 };
 
diff --git a/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_fence.c b/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_fence.c
index 09d5a5c..d0f49dc 100644
--- a/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_fence.c
+++ b/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_fence.c
@@ -242,9 +242,18 @@ void amdgpu_fence_process(struct amdgpu_ring *ring)
  *
  * Checks for fence activity.
  */
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 static void amdgpu_fence_fallback(unsigned long arg)
+#else
+static void amdgpu_fence_fallback(struct timer_list *t)
+#endif
 {
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 	struct amdgpu_ring *ring = (void *)arg;
+#else
+	struct amdgpu_ring *ring = from_timer(ring, t,
+					      fence_drv.fallback_timer);
+#endif
 
 	amdgpu_fence_process(ring);
 }
@@ -260,7 +269,11 @@ static void amdgpu_fence_fallback(unsigned long arg)
  */
 int amdgpu_fence_wait_empty(struct amdgpu_ring *ring)
 {
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 	uint64_t seq = ACCESS_ONCE(ring->fence_drv.sync_seq);
+#else
+	uint64_t seq = READ_ONCE(ring->fence_drv.sync_seq);
+#endif
 	struct dma_fence *fence, **ptr;
 	int r;
 
@@ -300,7 +313,11 @@ unsigned amdgpu_fence_count_emitted(struct amdgpu_ring *ring)
 	amdgpu_fence_process(ring);
 	emitted = 0x100000000ull;
 	emitted -= atomic_read(&ring->fence_drv.last_seq);
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 	emitted += ACCESS_ONCE(ring->fence_drv.sync_seq);
+#else
+	emitted += READ_ONCE(ring->fence_drv.sync_seq);
+#endif
 	return lower_32_bits(emitted);
 }
 
@@ -372,8 +389,12 @@ int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
 	atomic_set(&ring->fence_drv.last_seq, 0);
 	ring->fence_drv.initialized = false;
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 	setup_timer(&ring->fence_drv.fallback_timer, amdgpu_fence_fallback,
 		    (unsigned long)ring);
+#else
+	timer_setup(&ring->fence_drv.fallback_timer, amdgpu_fence_fallback, 0);
+#endif
 
 	ring->fence_drv.num_fences_mask = num_hw_submission * 2 - 1;
 	spin_lock_init(&ring->fence_drv.lock);
diff --git a/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_gem.c b/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_gem.c
index 84bfeb3..321ebaa 100644
--- a/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_gem.c
+++ b/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_gem.c
@@ -470,7 +470,11 @@ int amdgpu_gem_userptr_ioctl(struct drm_device *dev, void *data,
 	return 0;
 
 free_pages:
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 	release_pages(bo->tbo.ttm->pages, bo->tbo.ttm->num_pages, false);
+#else
+	release_pages(bo->tbo.ttm->pages, bo->tbo.ttm->num_pages);
+#endif
 
 release_object:
 	kcl_drm_gem_object_put_unlocked(gobj);
@@ -975,11 +979,19 @@ static int amdgpu_debugfs_gem_bo_info(int id, void *ptr, void *data)
 	seq_printf(m, "\t0x%08x: %12ld byte %s",
 		   id, amdgpu_bo_size(bo), placement);
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 	offset = ACCESS_ONCE(bo->tbo.mem.start);
+#else
+	offset = READ_ONCE(bo->tbo.mem.start);
+#endif
 	if (offset != AMDGPU_BO_INVALID_OFFSET)
 		seq_printf(m, " @ 0x%010Lx", offset);
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 	pin_count = ACCESS_ONCE(bo->pin_count);
+#else
+	pin_count = READ_ONCE(bo->pin_count);
+#endif
 	if (pin_count)
 		seq_printf(m, " pin count %d", pin_count);
 	seq_printf(m, "\n");
diff --git a/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_ttm.c b/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_ttm.c
index 45ab81f..07cc736 100644
--- a/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_ttm.c
+++ b/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_ttm.c
@@ -738,7 +738,11 @@ int amdgpu_ttm_tt_get_user_pages(struct ttm_tt *ttm, struct page **pages)
 	return 0;
 
 release_pages:
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 	release_pages(pages, pinned, 0);
+#else
+	release_pages(pages, pinned);
+#endif
 	up_read(&mm->mmap_sem);
 	return r;
 }
diff --git a/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_vm.c b/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_vm.c
index e339b15..3b4aa20 100644
--- a/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_vm.c
+++ b/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_vm.c
@@ -2588,7 +2588,11 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 	u64 flags;
 	uint64_t init_pde_value = 0;
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 14, 0)
 	vm->va = RB_ROOT;
+#else
+	vm->va = RB_ROOT_CACHED;
+#endif
 	vm->client_id = atomic64_inc_return(&adev->vm_manager.client_counter);
 	for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
 		vm->reserved_vmid[i] = NULL;
@@ -2752,10 +2756,19 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 
 	amd_sched_entity_fini(vm->entity.sched, &vm->entity);
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 14, 0)
 	if (!RB_EMPTY_ROOT(&vm->va)) {
+#else
+	if (!RB_EMPTY_ROOT(&vm->va.rb_root)) {
+#endif
 		dev_err(adev->dev, "still active bo inside vm\n");
 	}
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 14, 0)
 	rbtree_postorder_for_each_entry_safe(mapping, tmp, &vm->va, rb) {
+#else
+	rbtree_postorder_for_each_entry_safe(mapping, tmp,
+					     &vm->va.rb_root, rb) {
+#endif
 		list_del(&mapping->list);
 		amdgpu_vm_it_remove(mapping, &vm->va);
 		kfree(mapping);
diff --git a/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_vm.h b/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_vm.h
index b6f1dd1..e656ea3 100644
--- a/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_vm.h
+++ b/usr/src/amdgpu-17.50-511655/amd/amdgpu/amdgpu_vm.h
@@ -121,7 +121,11 @@ struct amdgpu_vm_pt {
 
 struct amdgpu_vm {
 	/* tree of virtual addresses mapped */
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 14, 0)
 	struct rb_root		va;
+#else
+	struct rb_root_cached	va;
+#endif
 
 	/* protecting invalidated */
 	spinlock_t		status_lock;
diff --git a/usr/src/amdgpu-17.50-511655/amd/amdgpu/dce_v8_0.c b/usr/src/amdgpu-17.50-511655/amd/amdgpu/dce_v8_0.c
index d2f68bd..b35d1c9 100644
--- a/usr/src/amdgpu-17.50-511655/amd/amdgpu/dce_v8_0.c
+++ b/usr/src/amdgpu-17.50-511655/amd/amdgpu/dce_v8_0.c
@@ -1675,7 +1675,11 @@ static void dce_v8_0_afmt_setmode(struct drm_encoder *encoder,
 	dce_v8_0_audio_write_sad_regs(encoder);
 	dce_v8_0_audio_write_latency_fields(encoder, mode);
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 14, 0)
 	err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode);
+#else
+	err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode, false);
+#endif
 	if (err < 0) {
 		DRM_ERROR("failed to setup AVI infoframe: %zd\n", err);
 		return;
@@ -2665,7 +2669,9 @@ static const struct drm_crtc_helper_funcs dce_v8_0_crtc_helper_funcs = {
 	.mode_set_base_atomic = dce_v8_0_crtc_set_base_atomic,
 	.prepare = dce_v8_0_crtc_prepare,
 	.commit = dce_v8_0_crtc_commit,
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 14, 0)
 	.load_lut = dce_v8_0_crtc_load_lut,
+#endif
 	.disable = dce_v8_0_crtc_disable,
 };
 
diff --git a/usr/src/amdgpu-17.50-511655/amd/amdkcl/kcl_drm.c b/usr/src/amdgpu-17.50-511655/amd/amdkcl/kcl_drm.c
index 32f151c..1dac924 100644
--- a/usr/src/amdgpu-17.50-511655/amd/amdkcl/kcl_drm.c
+++ b/usr/src/amdgpu-17.50-511655/amd/amdkcl/kcl_drm.c
@@ -262,7 +262,11 @@ _kcl_drm_atomic_helper_update_legacy_modeset_state_stub(struct drm_device *dev,
 	int i;
 
 	/* clear out existing links and update dpms */
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 	for_each_connector_in_state(old_state, connector, old_conn_state, i) {
+#else
+	for_each_new_connector_in_state(old_state, connector, old_conn_state, i) {
+#endif
 		if (connector->encoder) {
 			WARN_ON(!connector->encoder->crtc);
 
@@ -287,7 +291,11 @@ _kcl_drm_atomic_helper_update_legacy_modeset_state_stub(struct drm_device *dev,
 	}
 
 	/* set new links */
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 	for_each_connector_in_state(old_state, connector, old_conn_state, i) {
+#else
+	for_each_new_connector_in_state(old_state, connector, old_conn_state, i) {
+#endif
 		if (!connector->state->crtc)
 			continue;
 
@@ -299,7 +307,11 @@ _kcl_drm_atomic_helper_update_legacy_modeset_state_stub(struct drm_device *dev,
 	}
 
 	/* set legacy state in the crtc structure */
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 	for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) {
+#else
+	for_each_new_crtc_in_state(old_state, crtc, old_crtc_state, i) {
+#endif
 		struct drm_plane *primary = crtc->primary;
 
 		crtc->mode = crtc->state->mode;
diff --git a/usr/src/amdgpu-17.50-511655/amd/amdkcl/kcl_pci.c b/usr/src/amdgpu-17.50-511655/amd/amdkcl/kcl_pci.c
index a02d317..a6d011f 100644
--- a/usr/src/amdgpu-17.50-511655/amd/amdkcl/kcl_pci.c
+++ b/usr/src/amdgpu-17.50-511655/amd/amdkcl/kcl_pci.c
@@ -1,6 +1,6 @@
 #include <kcl/kcl_pci.h>
 
-#if defined(BUILD_AS_DKMS)
+#if defined(BUILD_AS_DKMS) && LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 #define PCI_EXP_DEVCAP2_ATOMIC_ROUTE	0x00000040 /* Atomic Op routing */
 #define PCI_EXP_DEVCAP2_ATOMIC_COMP32	0x00000080 /* 32b AtomicOp completion */
 #define PCI_EXP_DEVCAP2_ATOMIC_COMP64	0x00000100 /* Atomic 64-bit compare */
@@ -87,6 +87,8 @@ int pci_enable_atomic_ops_to_root(struct pci_dev *dev)
 
 	return 0;
 }
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 EXPORT_SYMBOL(pci_enable_atomic_ops_to_root);
+#endif
 
 #endif
diff --git a/usr/src/amdgpu-17.50-511655/amd/amdkfd/kfd_device.c b/usr/src/amdgpu-17.50-511655/amd/amdkfd/kfd_device.c
index 6f5f93c..da4c291 100644
--- a/usr/src/amdgpu-17.50-511655/amd/amdkfd/kfd_device.c
+++ b/usr/src/amdgpu-17.50-511655/amd/amdkfd/kfd_device.c
@@ -362,7 +362,13 @@ struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd,
 	if (device_info->needs_pci_atomics) {
 		/* Allow BIF to recode atomics to PCIe 3.0 AtomicOps.
 		 */
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 		if (pci_enable_atomic_ops_to_root(pdev) < 0) {
+#else
+		if (pci_enable_atomic_ops_to_root(pdev,
+			PCI_EXP_DEVCAP2_ATOMIC_COMP32 |
+			PCI_EXP_DEVCAP2_ATOMIC_COMP64) < 0) {
+#endif
 			dev_info(kfd_device,
 				"skipped device %x:%x, PCI rejects atomics",
 				 pdev->vendor, pdev->device);
diff --git a/usr/src/amdgpu-17.50-511655/include/kcl/kcl_drm.h b/usr/src/amdgpu-17.50-511655/include/kcl/kcl_drm.h
index 61100e5..eff238d 100644
--- a/usr/src/amdgpu-17.50-511655/include/kcl/kcl_drm.h
+++ b/usr/src/amdgpu-17.50-511655/include/kcl/kcl_drm.h
@@ -277,7 +277,10 @@ static inline int kcl_drm_universal_plane_init(struct drm_device *dev, struct dr
 			     enum drm_plane_type type,
 			     const char *name, ...)
 {
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 5, 0) || \
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 15, 0)
+		return drm_universal_plane_init(dev, plane, possible_crtcs, funcs,
+				 formats, format_count, NULL, type, name);
+#elif LINUX_VERSION_CODE >= KERNEL_VERSION(4, 5, 0) || \
 		defined(OS_NAME_RHEL_7_3) || \
 		defined(OS_NAME_RHEL_7_4)
 		return drm_universal_plane_init(dev, plane, possible_crtcs, funcs,
@@ -330,7 +333,11 @@ static inline int
 kcl_drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev,
 					  unsigned int pipe,
 					  int *max_error,
+#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 					  struct timeval *vblank_time,
+#else
+					  ktime_t *vblank_time,
+#endif
 #if LINUX_VERSION_CODE < KERNEL_VERSION(4, 13, 0)
 					  unsigned flags,
 #else
diff --git a/usr/src/amdgpu-17.50-511655/include/kcl/kcl_pci.h b/usr/src/amdgpu-17.50-511655/include/kcl/kcl_pci.h
index acb39d3..625eb31 100644
--- a/usr/src/amdgpu-17.50-511655/include/kcl/kcl_pci.h
+++ b/usr/src/amdgpu-17.50-511655/include/kcl/kcl_pci.h
@@ -2,8 +2,9 @@
 #define AMDKCL_PCI_H
 
 #include <linux/pci.h>
+#include <linux/version.h>
 
-#ifdef BUILD_AS_DKMS
+#if defined(BUILD_AS_DKMS) && LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0)
 int pci_enable_atomic_ops_to_root(struct pci_dev *dev);
 #endif
 

This does not compile: at least pci_enable_atomic_ops_to_root needs more adjustment, the gamma and LUT conversion is incomplete, and more fixes are needed for 4.14+.

Last edited by loqs (2018-04-13 00:28:57)

Online

#13 2018-04-13 17:30:37

stefan230
Member
Registered: 2018-04-08
Posts: 11

Re: Building and installing AMDGPU-PRO 17.50

Thank you, loqs, for your work. I noticed your code looks like the .patch files that were included in the amdgpu-pro package. So can I add your code as a separate .patch file and build the driver with that patch file?
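
Something like this is what I have in mind (just a sketch; "kernel-4.16-compat.patch" is a made-up name for your diff saved to a file, and the -p level may need adjusting to the paths in the diff):

# appended to the existing arrays in the PKGBUILD
source+=('kernel-4.16-compat.patch')
sha256sums+=('SKIP')

prepare() {
    cd "$srcdir"
    # the diff uses a/usr/src/... prefixes, so strip one path component
    patch -Np1 -i kernel-4.16-compat.patch
}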

Offline

#14 2018-04-13 18:20:07

loqs
Member
Registered: 2014-03-06
Posts: 17,195

Re: Building and installing AMDGPU-PRO 17.50

stefan230 wrote:

So can I add your code as a separate .patch file and build the driver with that patch file?

loqs wrote:

This does not compile: at least pci_enable_atomic_ops_to_root needs more adjustment, the gamma and LUT conversion is incomplete, and more fixes are needed for 4.14+.

Meaning that with that patch it still will not compile; it needs more patching. As I was working on the source, it seemed to have support built in for kernels up to 4.13;
you could try a 4.13 kernel or older, or you will need to address the remaining issues. The amdgpu driver in the kernel already has those issues addressed, but then you lose OpenCL support.

Online

#15 2018-04-13 19:54:28

stefan230
Member
Registered: 2018-04-08
Posts: 11

Re: Building and installing AMDGPU-PRO 17.50

Downgrading the kernel will be the simplest solution. I'll try out 4.13 and report back again, so I can see if it's worth more headaches.

Offline

#16 2018-04-13 20:10:09

loqs
Member
Registered: 2014-03-06
Posts: 17,195

Re: Building and installing AMDGPU-PRO 17.50

I would suggest 4.9 in preference to 4.13, as it is still supported upstream as an LTS kernel.
Yes, there is no knowing how many hours it would take to make the modules compile under 4.14+, plus how many additional hours on top of that, if it is even possible, to create a usable result.

Online

#17 2018-04-13 20:10:56

progandy
Member
Registered: 2012-05-17
Posts: 5,184

Re: Building and installing AMDGPU-PRO 17.50

You can look at this patch as well; the readme mentions kernel 4.15.6:
https://github.com/yui0/amdgpu-dkms/blo … .el7.patch


| alias CUTF='LANG=en_XX.UTF-8@POSIX ' |

Offline

#18 2018-04-13 20:27:32

stefan230
Member
Registered: 2018-04-08
Posts: 11

Re: Building and installing AMDGPU-PRO 17.50

So I have to look further into it, maybe tomorrow or Sunday. The DKMS modules build fine with kernel 4.13.12, but there is no video after "Reached target Graphical Interface". I will check back with kernel 4.9.

Offline

#19 2018-04-13 21:23:38

loqs
Member
Registered: 2014-03-06
Posts: 17,195

Re: Building and installing AMDGPU-PRO 17.50

Do you get console output? The lack of X support is expected.
Edit:
@progandy, that patch for 4.15 seems to cover everything I saw that was broken, apart from pci_enable_atomic_ops_to_root, which is 4.16-only and so would be expected not to be patched.

Last edited by loqs (2018-04-13 21:25:13)

Online

#20 2018-04-14 05:47:40

stefan230
Member
Registered: 2018-04-08
Posts: 11

Re: Building and installing AMDGPU-PRO 17.50

loqs wrote:

Do you get console output? The lack of X support is expected.
Edit:
@progandy, that patch for 4.15 seems to cover everything I saw that was broken, apart from pci_enable_atomic_ops_to_root, which is 4.16-only and so would be expected not to be patched.

Console output is all fine. I can also use the TTY without a problem, so only the X support is missing. My graphics card reports "amdgpu" as the driver in use, which is expected.

Offline

#21 2018-05-02 04:40:27

stefan230
Member
Registered: 2018-04-08
Posts: 11

Re: Building and installing AMDGPU-PRO 17.50

Hello, I'm back with some news for you all. AMD seems to have released a new driver revision (18.10): https://support.amd.com/en-us/kb-articl … Notes.aspx

It seems like either they added ADF (atomic display framework) support, or it was in there all the time but Arch had a problem because it was too old?

Anyway, I'll grab myself an Ubuntu install and check them out before trying to bring them to Arch. Wish me luck!

Offline
