Currently KWin exposes all kinds of settings as user-configurable, and for most of them you cannot tell what difference they make, or what they are even about. This is because these settings are quite low-level, and understanding them requires deep insight into how KWin handles compositing, and especially why. Exposing settings to the user makes sense when it's a matter of aesthetics, or when you cannot automatically decide whether they fit the target system. But low-level, predictable settings are better decided by the code itself, automatically. For example, just as it makes no sense to ask the user whether they have an Nvidia or an AMD card in order to pick the low-latency strategy, since that can be detected automatically, it also makes no sense to ask them which API to use as the rendering back-end: it should simply be the highest one that is well supported. If you ask me, I suspect that all settings except "Enable compositing at startup" make no sense to an end user. They only make it more likely that users will worsen, not improve, their system configuration.
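To make the auto-detection point concrete, here is a minimal sketch (not KWin's actual code; the strategy names and the vendor mapping are my own assumptions) of deriving a latency strategy from the driver's GL_VENDOR string instead of a GUI knob:

```cpp
// Hypothetical sketch, not KWin code: pick a latency strategy from the
// string that glGetString(GL_VENDOR) would return at runtime.
#include <iostream>
#include <string>

enum class LatencyStrategy { TripleBuffering, SwapOnVsync };

LatencyStrategy pickLatencyStrategy(const std::string &glVendor)
{
    // Assumption for illustration: the proprietary NVIDIA driver gets
    // its own strategy, everything Mesa-based (AMD, Intel) another.
    if (glVendor.find("NVIDIA") != std::string::npos) {
        return LatencyStrategy::TripleBuffering;
    }
    return LatencyStrategy::SwapOnVsync;
}

int main()
{
    std::cout << (pickLatencyStrategy("NVIDIA Corporation") ==
                          LatencyStrategy::TripleBuffering
                      ? "triple buffering"
                      : "swap on vsync")
              << '\n';
}
```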
I would ask users whether they have ever needed to change these settings, to figure out which ones are truly valuable.
In an ideal world, everything would work perfectly with all graphics hardware automatically and the correct settings could be auto-detected and no user-visible configuration knobs would ever be required. We don't live in that world. :) Occasionally, some users will need to tweak these things. And when such a situation arises, it's much easier to tell them to change something in the GUI than it is to ask them to set an environment variable somewhere. It's not ideal, but the alternative is currently worse, and the ideal state of affairs isn't yet possible.
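For what it's worth, an env-var escape hatch and a GUI default can coexist; here is a rough Qt-style sketch (the fallback logic is invented for illustration, though KWIN_COMPOSE itself is a real override KWin honours):

```cpp
// Simplified sketch: honour an environment override before the
// GUI-configured default.
#include <QByteArray>
#include <QDebug>
#include <QtGlobal>

QByteArray chooseBackend(const QByteArray &guiConfigured)
{
    const QByteArray env = qgetenv("KWIN_COMPOSE");
    return env.isEmpty() ? guiConfigured : env;
}

int main()
{
    // "O2" selects the OpenGL 2 backend, "X" XRender (if I recall
    // the accepted values correctly).
    qDebug() << chooseBackend(QByteArrayLiteral("O2"));
}
```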
Those settings were put there in a long-gone era of graphics, perhaps 15 years ago. Have you checked whether they still apply? For example, who would see a difference when switching from OpenGL 2 to OpenGL 3? Does that case even exist? And about the scale method: who would want to set it to "Crisp", a setting intended for performance on GPUs that won't run Plasma well these days anyway? Is there still any case where the "Accurate" scale method performs poorly? Keep in mind that I use four different 12-year-old computers with different graphics hardware, and even there I see no improvement whatsoever from changing these options. Why do you think it will be different for other people? Do you know of any such case? And even if some of these settings were kept, wouldn't it be better to advise users to change them only if they experience problems? To decide this properly, we should first ask people how they actually use these settings. Because this sounds like keeping 256-color support around.
KWin folks, what do you think?
We could probably change it to use OpenGL 3 plus a core profile when available, but given that we need the combo box for XRender anyway, removing it isn't going to be particularly impactful. As for the scale method, I would have to find out what it actually does before commenting. On the latency setting I don't agree: it's a user-facing trade-off, and it can't be determined from the hardware.
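"3 plus core profile when available" could look roughly like the following sketch (using Qt's context API for brevity; KWin's real backends negotiate this at a lower level, so treat the details as assumptions):

```cpp
// Rough sketch: request a 3.2 core-profile context first, and fall
// back to a legacy 2.1 context if the driver can't provide one.
#include <QGuiApplication>
#include <QOpenGLContext>
#include <QSurfaceFormat>

static bool createContext(QOpenGLContext &ctx)
{
    QSurfaceFormat core;
    core.setVersion(3, 2);
    core.setProfile(QSurfaceFormat::CoreProfile);
    ctx.setFormat(core);
    if (ctx.create()) {
        return true; // driver supports a core profile
    }

    QSurfaceFormat legacy; // fall back to OpenGL 2.1
    legacy.setVersion(2, 1);
    ctx.setFormat(legacy);
    return ctx.create();
}

int main(int argc, char **argv)
{
    QGuiApplication app(argc, argv);
    QOpenGLContext ctx;
    return createContext(ctx) ? 0 : 1;
}
```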
It also doesn't help that the "Help" button for this KCM presents a "Documentation not found" page in the help center. So even if the user wants to find out what the options do, they have to resort to scouring the web.