Pages: 1
Just curious - why don't more packages take advantage of systemd's sandboxing mechanisms?
E.g., for one of mine, I use:
# Sandbox
NotifyAccess=main
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=tmpfs
BindReadOnlyPaths= [ paths that the app needs access to ]
DevicePolicy=closed
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
# Resource Limits
CPUQuota=50%
MemoryLow=40M
MemoryHigh=50M
MemoryMax=60M
MemorySwapMax=60M
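For what it's worth, recent systemd versions can score a unit's exposure with `systemd-analyze security <unit>`. A few more hardening directives that are commonly combined with the ones above (these are illustrative additions of mine, not part of the unit quoted, and some daemons will need exceptions):

```ini
# Extra hardening often layered on top of the sandbox above
PrivateTmp=true
RestrictNamespaces=true
LockPersonality=true
# Only allow the address families the daemon actually uses
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
# Whitelist the "reasonable for a service" syscall group;
# denied calls fail with EPERM instead of killing the process
SystemCallFilter=@system-service
SystemCallErrorNumber=EPERM
```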
Offline
A well-written service or daemon should chroot and drop privileges itself anyway. What does systemd sandboxing add to this? How would it work with programs that should be able to work with arbitrary files (i.e., most programs)? Why would I want to limit the CPU use of a program I chose to run? If I want to run a CPU-intensive number-crunching process, I want it to use my CPU. If a program uses CPU resources for no good reason, there is a much better solution than sandboxing: pacman -Rsn $bad_program.
General or global sandboxing sounds more like nannying.
Last edited by Trilby (2018-12-09 22:48:45)
"UNIX is simple and coherent..." - Dennis Ritchie, "GNU's Not UNIX" - Richard Stallman
Offline
Looking at the packages I use as a sample, a lot of service files are provided by Arch, not upstream.
E.g., apache, prosody, varnish, nginx.
There are certainly cases where a service needs greater access, or where universal defaults are difficult (or impossible) to devise, but it seems to me that some baseline protections could still be applied.
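As a sketch of what such a baseline might look like: a distro-shipped service file could be hardened non-invasively with a drop-in, which users can override without touching the packaged unit. The path and directive choices below are my own illustration, not something Arch actually ships:

```ini
# /etc/systemd/system/nginx.service.d/hardening.conf
# (hypothetical drop-in; adjust per daemon)
[Service]
NoNewPrivileges=true
# "full" rather than "strict" so /var (logs, cache) stays writable
ProtectSystem=full
ProtectHome=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
```

After adding or editing a drop-in, `systemctl daemon-reload` followed by restarting the unit applies it.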
Offline
Trilby - that's true, but defence in depth is a valid way of ensuring security.
Offline