Version: 2.1.1 (using KDE 4.3.0)
OS: Linux
Installed from: Ubuntu Packages

The collection folder resides on NFS: server:/home/fileserver/sound, mounted on each local client. $HOME is also mounted from the server. The collection data is repeatedly trashed when logging in from more than one client machine. If I am logged in on several machines at once (pc1, pc2, pc3, ... pc#, as may occur in company installations), Amarok wipes the already built collection to zero and starts a rebuild, which causes critically high load on the server. CPU usage on the clients also rises to the upper limit, until a crash if a ulimit is set. With this behavior Amarok is unusable, since it blocks the local client completely.
Jürgen, I do not really understand your report, what do you mean by "trashing"? AFAIU, you are not talking about a crash of Amarok, right?
Jeff, do you have an idea here?
Probably when a new client logs in, it resets the mtime on the root folder of the collection, triggering a "full" rebuild. You can check this by using "stat" on the folder. If this is the case, you should fix your NFS setup.
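A minimal way to check this hypothesis is to record the collection root's mtime, trigger the suspect event, and stat it again. The sketch below is self-contained for illustration: a temporary directory stands in for /home/fileserver/sound, and a `touch` simulates whatever a second client's login writes into the folder.

```shell
# Sketch: detect an mtime change on a collection root directory.
# A temp directory stands in for /home/fileserver/sound; the 'touch'
# simulates a second client modifying the folder at login.
dir=$(mktemp -d)
before=$(stat --format='%Y' "$dir")   # mtime, seconds since the epoch

sleep 1                               # ensure a visible timestamp difference
touch "$dir/placeholder"              # creating an entry updates the dir mtime

after=$(stat --format='%Y' "$dir")
if [ "$after" -gt "$before" ]; then
    echo "mtime changed: a full rescan would be triggered"
else
    echo "mtime unchanged"
fi
rm -r "$dir"
```

On the real setup you would instead run `stat /home/fileserver/sound` on pc2 before and after logging in on pc3; if the reported mtime differs, something in the NFS setup or the login process is touching the collection root.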
To make the problem clearer: the NFS export server:/home/fileserver/sound holds some 450,000+ files for Amarok. The user homes are a standard NFS export, server:/home. The clients pc2, pc3, pc4, ... pc# mount server:/home to /home, and also mount /home/fileserver/sound from the server.

If I log in on client pc2 with my NIS-based login, Amarok starts up fine via Autostart. If I then log in a second time on client pc3, Amarok destroys the complete collection database, and every client I am logged in on starts building a new collection database by scanning an NFS server holding 1 TB of files. This is practically a self-made DDoS attack on the NFS server: I have seen server loads greater than 48 caused by 4 parallel Amarok collection scans.

Another problem in this context: if a collection partition is not mounted, Amarok forgets and dumps all content of the collection.

So, my questions:
1. Why does Amarok forget the collection when a mount is temporarily unavailable? [bug, critical]
2. Why does Amarok run such parallel scans, rebuilding the collection again and again and again? That is not acceptable under any circumstances.

Because of this behavior Amarok 2 is banned here. Amarok 1 ran much better and more stably, using a central server-based collection database. The MySQL collection database is IMHO broken by design. MediaMonkey runs better in Wine than Amarok does natively; historically it is an Amarok clone, and nowadays it simply works better.
NFS server export options:

# /etc/exports
/home 192.168.11.0/24(async,rw,no_root_squash,no_subtree_check) 127.0.0.1/32(async,no_root_squash,no_subtree_check)
/home/fileserver/sound 192.168.11.0/24(rw,async,no_root_squash,no_subtree_check) 127.0.0.1/32(async,no_root_squash,no_subtree_check)

Client mount options:

# /etc/fstab
server:/home /home nfs defaults 0 0
server:/home/fileserver/sound /home/fileserver/sound nfs defaults 0 0

The server is Ubuntu Hardy 8.04.3 LTS, running as a KVM guest inside a Debian Lenny host. The clients are various: Lenny, Karmic, Hardy. The problem described here occurs on the Karmic clients running Amarok 2. The older Lenny and Hardy clients use a central MySQL database (the same DB shared by several users), also residing on the server.
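If the mtimes do turn out to differ between clients, one possible mitigation, sketched below with illustrative (not verified) option values, is to mount the music share read-only on the clients so that no client login can rewrite the folder's timestamps, and to give attribute caching a fixed window with `actimeo`:

```
# /etc/fstab (client) -- a sketch, not a verified fix; option values are illustrative
server:/home/fileserver/sound /home/fileserver/sound nfs ro,noatime,actimeo=60 0 0
```

`ro`, `noatime`, and `actimeo` are standard NFS mount options (see nfs(5)); whether they actually prevent the rescans depends on what is touching the collection root in this setup.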
If you're going to ignore Comment #3, then I can't help you. You need to check and see if the mtimes are being updated when new users connect. That's the only thing I can think of that would trigger full rescans repeatedly.
Comment #3 was not ignored, but that is not the point: according to stat, the mtime does not change.
Sure it was, because I asked if mtimes were being changed, and you failed to provide an answer.
I'm sorry, but I have trouble understanding this report. I think it's just too fuzzy to be useful to us. Additionally, the report is about the outdated 2.1.1 version. Please upgrade to 2.2.0, or even better, 2.2.1, which will come out soon. Then please test again.