I am trying to figure out what is causing NetworkManager to peg a single core at 100% shortly after boot; it never goes away and keeps overall CPU utilization around 13%.
Looking at dbus, there is a constant posting of the following every 2 seconds, though I am not sure if it's even related:
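For anyone who wants to reproduce a capture like the one below, a sketch (assuming dbus-monitor is installed; the match rule limits output to signals from NetworkManager on the system bus):

```shell
# Watch signals emitted by NetworkManager on the system bus;
# the PropertiesChanged signals on the IP6Config objects show up here
dbus-monitor --system "type='signal',sender='org.freedesktop.NetworkManager'"
```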
signal time=1549984766.113591 sender=:1.12 -> destination=(null destination) serial=3398 path=/org/freedesktop/NetworkManager/IP6Config/3; interface=org.freedesktop.NetworkManager.IP6Config; member=PropertiesChanged
array [
dict entry(
string "Addresses"
variant array [
struct {
array of bytes [
26 06 a0 00 7f 02 d3 00 a5 02 db 68 04 5c 57 98
]
uint32 64
array of bytes [
fe 80 00 00 00 00 00 00 62 6d c7 ff fe b4 7a 1b
]
}
struct {
array of bytes [
26 06 a0 00 7f 02 d3 00 00 00 00 00 00 00 00 09
]
uint32 128
array of bytes [
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
]
}
struct {
array of bytes [
fe 80 00 00 00 00 00 00 05 60 c5 0b 0e 62 81 2a
]
uint32 64
array of bytes [
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
]
}
]
)
dict entry(
string "AddressData"
variant array [
array [
dict entry(
string "address"
variant string "2606:a000:7f02:d300:a502:db68:45c:5798"
)
dict entry(
string "prefix"
variant uint32 64
)
]
array [
dict entry(
string "address"
variant string "2606:a000:7f02:d300::9"
)
dict entry(
string "prefix"
variant uint32 128
)
]
array [
dict entry(
string "address"
variant string "fe80::560:c50b:e62:812a"
)
dict entry(
string "prefix"
variant uint32 64
)
]
]
)
dict entry(
string "Routes"
variant array [
struct {
array of bytes [
fe 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00
]
uint32 64
array of bytes [
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
]
uint32 100
}
struct {
array of bytes [
26 06 a0 00 7f 02 d3 00 00 00 00 00 00 00 00 00
]
uint32 56
array of bytes [
fe 80 00 00 00 00 00 00 62 6d c7 ff fe b4 7a 1b
]
uint32 100
}
struct {
array of bytes [
26 06 a0 00 7f 02 d3 00 00 00 00 00 00 00 00 00
]
uint32 64
array of bytes [
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
]
uint32 100
}
struct {
array of bytes [
26 06 a0 00 7f 02 d3 00 00 00 00 00 00 00 00 09
]
uint32 128
array of bytes [
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
]
uint32 100
}
]
)
dict entry(
string "RouteData"
variant array [
array [
dict entry(
string "dest"
variant string "fe80::"
)
dict entry(
string "prefix"
variant uint32 64
)
dict entry(
string "metric"
variant uint32 100
)
]
array [
dict entry(
string "dest"
variant string "2606:a000:7f02:d300::"
)
dict entry(
string "prefix"
variant uint32 56
)
dict entry(
string "next-hop"
variant string "fe80::626d:c7ff:feb4:7a1b"
)
dict entry(
string "metric"
variant uint32 100
)
]
array [
dict entry(
string "dest"
variant string "2606:a000:7f02:d300::"
)
dict entry(
string "prefix"
variant uint32 64
)
dict entry(
string "metric"
variant uint32 100
)
]
array [
dict entry(
string "dest"
variant string "::"
)
dict entry(
string "prefix"
variant uint32 0
)
dict entry(
string "next-hop"
variant string "fe80::626d:c7ff:feb4:7a1b"
)
dict entry(
string "metric"
variant uint32 100
)
]
array [
dict entry(
string "dest"
variant string "ff00::"
)
dict entry(
string "prefix"
variant uint32 8
)
dict entry(
string "metric"
variant uint32 256
)
dict entry(
string "table"
variant uint32 255
)
]
array [
dict entry(
string "dest"
variant string "2606:a000:7f02:d300::9"
)
dict entry(
string "prefix"
variant uint32 128
)
dict entry(
string "metric"
variant uint32 100
)
]
]
)
]
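As an aside, the raw 16-byte arrays under "Addresses" are just packed IPv6 addresses; decoding the first one reproduces the string shown under "AddressData". A quick check, assuming python3 is available:

```shell
# Decode the first 16-byte "Addresses" entry into its usual IPv6 text form
python3 -c "import ipaddress; print(ipaddress.ip_address(bytes.fromhex('2606a0007f02d300a502db68045c5798')))"
# prints: 2606:a000:7f02:d300:a502:db68:45c:5798
```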
Possibly related: FS#61688.
Hummm... could very well be...
I am having this issue as well: out of 12 threads, one is at 100% all the time due to NetworkManager.
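To confirm which thread is spinning and get a rough idea of what it is doing, something like this may help (a sketch; perf needs to be installed and usually run as root):

```shell
# Per-thread CPU usage for the NetworkManager process (-H shows threads)
top -H -p "$(pidof NetworkManager)"
# Live sampling of where the process is spending its time
perf top -p "$(pidof NetworkManager)"
```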
Solution: downgrade curl to the previous version; it is working perfectly now.
I can report the same issue: one process occupying 100% of one core.
The networkmanager version I am using:
$ pacman -Qi networkmanager
Name : networkmanager
Version : 1.14.5dev+17+gba83251bb-2
My current curl version:
$ pacman -Qi curl
Name : curl
Version : 7.64.0-4
I am on the Plasma desktop, running the following version of plasma-nm:
$ pacman -Qi plasma-nm
Name : plasma-nm
Version : 5.15.0-1
Description : Plasma applet written in QML for managing network connections
journalctl for NetworkManager looks fine, just typical stuff: refreshing leases every 600 seconds and reporting successes.
(If someone else would like to check their journalctl, you may find this useful:
$ su
# journalctl | grep NetworkManager | tail -n 200 -f
)
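As an alternative to grepping the whole journal, journalctl can filter by unit directly:

```shell
# Show the last 200 NetworkManager lines and keep following
# (may need root or membership in the systemd-journal group)
journalctl -u NetworkManager -n 200 -f
```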
Following the "downgrade curl" advice and the "Comment by Josh T (jat255) - Wednesday, 13 February 2019, 20:23 GMT+1" on FS#61688 https://bugs.archlinux.org/index.php?do … k_id=61688 ,
I tried downgrading:
(references:
* Downgrading packages: https://wiki.archlinux.org/index.php/Do … g_packages
* Arch Linux Archive: https://wiki.archlinux.org/index.php/Arch_Linux_Archive )
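For reference, the package and its signature can be fetched from the archive with e.g.:

```shell
# Download the older curl package plus its signature from the Arch Linux Archive
curl -O https://archive.archlinux.org/packages/c/curl/curl-7.62.0-1-x86_64.pkg.tar.xz
curl -O https://archive.archlinux.org/packages/c/curl/curl-7.62.0-1-x86_64.pkg.tar.xz.sig
```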
I downloaded `curl-7.62.0-1-x86_64.pkg.tar.xz{,.sig}` from https://archive.archlinux.org/packages/c/curl/
# install the downgraded package
$ pacman -U curl-7.62.0-1-x86_64.pkg.tar.xz
# restart NetworkManager
$ systemctl restart NetworkManager
# check cpu
$ htop
# ...and the cpu seems fine.
#
# Now, to try to reproduce, I went back to curl 7.64:
$ pacman -S curl
resolving dependencies...
looking for conflicting packages...
Packages (1) curl-7.64.0-4
Total Installed Size: 1.56 MiB
Net Upgrade Size: -0.07 MiB
:: Proceed with installation? [Y/n] y
$ systemctl restart NetworkManager
# checking cpu with htop...
$ htop
and it is still fine! Even though I moved back to the newest curl and restarted NetworkManager, it's still fine.
So to recap: I downgraded curl, restarted NM, and the issue got resolved; then I upgraded curl back, restarted NM, and there is still no issue.
(Maybe it will come back after restarting the PC; if I gather more data I will try to post it, in the hope of helping debug this issue.)
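If the problem does come back and the downgrade is needed longer-term, pacman can be told to hold curl back (remember to drop this once the bug is fixed; a sketch of the relevant /etc/pacman.conf line):

```shell
# /etc/pacman.conf, in the [options] section:
IgnorePkg = curl
```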
This has been seen while trying to test the issue:
https://bugs.archlinux.org/task/61688#comment177103
https://bugs.archlinux.org/task/61688#comment177108
https://bugs.archlinux.org/task/61688#comment177113
It now needs someone affected to report the issue upstream to curl.
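For whoever files that upstream report, a backtrace of the spinning process would likely be useful (a sketch, assuming gdb and debug symbols for networkmanager/curl are installed; run as root):

```shell
# Attach to the running process, dump all thread backtraces, then detach
gdb -p "$(pidof NetworkManager)" -batch -ex 'thread apply all bt'
```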