It would appear that both mmcache and eAccelerator complain when set up under a suPHP implementation for PHP and Apache. Most people seem to recommend using FastCGI + suEXEC and then a PHP accelerator, but I'm uncertain how that will turn out as of right now. Another option, on a per-user basis, would be to use Alternative PHP Cache (APC).
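Since suPHP lets each account load its own php.ini, APC could in principle be enabled selectively per user. A minimal sketch of such a php.ini fragment (the shm_size value is purely illustrative, and the extension path may differ on your build):

```ini
; Hypothetical per-user php.ini fragment for enabling APC under suPHP.
; Tune apc.shm_size (MB) to the account's actual workload.
extension=apc.so
apc.enabled=1
apc.shm_size=32
```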
The problem arises when a user attempts an anonymous FTP connection to a cPanel user's account, even though that cPanel user has already enabled anonymous FTP connections in their control panel. Pure-ftpd drops the connection with the error "421 Can't change directory to /var/ftp/".
workstation:~ user$ ftp testing.com
Connected to testing.com.
220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
220-You are user number 3 of 50 allowed.
220-Local time is now 11:02. Server port: 21.
220-IPv6 connections are also welcome on this server.
220 You will be disconnected after 15 minutes of inactivity.
Name (testing.com:user): anonymous
421 Can't change directory to /var/ftp/ [/]
ftp: Login failed.
1. Use email@example.com and any password instead of just "anonymous".
2. Assign the cPanel user a dedicated IP address, where FTP logins with just "anonymous" will work.
Today I noticed a bunch of errors in the Dovecot log stating "[Dovecot] – Inotify instance limit for user exceeded, disabling". Apparently inotify is used to notify the client immediately after a new message has been received; this typically happens after the IMAP client issues the IDLE command.
[root@SERVER ~]# cat /proc/sys/fs/inotify/max_user_instances
128
[root@SERVER ~]# echo "256" > /proc/sys/fs/inotify/max_user_instances
[root@SERVER ~]#
Dovecot shouldn't need to be restarted, but you can always choose to do so anyway.
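Note that the echo above only lasts until the next reboot. To persist the change, the same value can go in /etc/sysctl.conf (a sketch; 256 simply mirrors the value used above):

```
# /etc/sysctl.conf -- persist the inotify instance limit across reboots
fs.inotify.max_user_instances = 256
```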
BackupPC is terrible at removing old hosts whose backups are no longer needed. Over time, it becomes necessary to get old servers out of your BackupPC "Host" drop-down list. You might find it easiest to remove the stale config .pl files from your pc/ directory and then just recreate your hosts file. Use the following steps to accomplish just that:
[root@backupserver pc]# for i in `ls /etc/BackupPC/pc | sed 's/\.pl$//'`; do grep $i /etc/BackupPC/hosts; done > /etc/BackupPC/hosts-NEW
[root@backupserver pc]# cd /etc/BackupPC
[root@backupserver BackupPC]# mv hosts hosts.BAK
[root@backupserver BackupPC]# mv hosts-NEW hosts
[root@backupserver BackupPC]# chown apache:apache hosts
[root@backupserver BackupPC]# /etc/rc.d/init.d/backuppc restart
Shutting down BackupPC: [ OK ]
Starting BackupPC: [ OK ]
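The regeneration step above can be sketched in miniature against a throwaway directory, which makes the logic easy to verify before pointing it at /etc/BackupPC (hostnames and paths here are illustrative only):

```shell
#!/bin/sh
# Miniature demo of the hosts-file regeneration: only hosts that still
# have a .pl config in pc/ survive into hosts-NEW.
DIR=$(mktemp -d)
mkdir "$DIR/pc"
touch "$DIR/pc/web01.pl" "$DIR/pc/db01.pl"      # configs we kept
printf 'web01 0 backupuser\ndb01 0 backupuser\nold01 0 backupuser\n' > "$DIR/hosts"
# Keep only the hosts that still have a .pl config file:
for i in $(ls "$DIR/pc" | sed 's/\.pl$//'); do
  grep "^$i" "$DIR/hosts"
done > "$DIR/hosts-NEW"
cat "$DIR/hosts-NEW"
```

One caveat with the real command: if your hosts file carries the standard header line (host, dhcp, user, moreUsers), the grep loop will drop it, so check the regenerated file before swapping it in.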
We have had issues with Drupal sending log information to syslog, which coincidentally gets copied to the console for ALL users to see. Needless to say, this is not acceptable in a shared web-hosting environment.
To disable the syslog module using the MySQL command line, run the following SELECT to inspect the state of your data before the change. This will also show you the full name and enabled/disabled status of each module:
SELECT name,status FROM system WHERE type='module';
Then to disable your syslog module, set the status to 0 for the module name that you want to disable:
UPDATE system SET status='0' WHERE name='syslog';
Check your handiwork using the SELECT statement again. Hope this helps someone out there.
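Should you ever need the module back, the same UPDATE with a status of 1 re-enables it (same schema assumption as the statements above):

```sql
UPDATE system SET status='1' WHERE name='syslog';
```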
/etc/init.d/httpd stop
mv /usr/local/apache/conf/httpd.conf /usr/local/apache/conf/httpd.conf-notworking
cp -a OLDHTTPD.CONF /usr/local/apache/conf/
mv /var/cpanel/userdata /var/cpanel/userdata-BAK
/usr/local/cpanel/bin/userdata_update
cp -a /var/cpanel/userdata /usr/local/apache/conf
/etc/init.d/httpd start
Run /usr/local/cpanel/bin/apache_conf_distiller --update to ensure the main_domain key errors are gone.
On servers where a catchall has been set up using the qmailadmin web interface, it is possible for postmasters to delete the email account that was designated as the catchall (in this case, johnsmith@example.com). When this occurs, the .qmail-default file is not updated. Because the catchall was designated prior to the removal, mail is still accepted for the domain, and messages are deferred when delivery to /home/vpopmail/domains/P/example.com/johnsmith is attempted. This can cause the qmail queue to grow over time when you have many users with this scenario.
A problematic /home/vpopmail/domains/P/example.com/.qmail-default file:
| /home/vpopmail/bin/vdelivermail '' /home/vpopmail/domains/P/example.com/johnsmith
What a deferral for a nonexistent qmail user directory looks like:
/var/log/qmail/current: @400000004b72cd931e35f0ac delivery 5362496: deferral: client_connect:_connect_failed:_2/user_does_not_exist,_but_will_deliver_to_/home/vpopmail/domains/P/example.com/johnsmith//can_not_open_new_email_file_errno=2_file=/home/vpopmail/domains/P/example.com/johnsmith/Maildir/tmp/1265814921.29239.qmailserver.maildomain.com,S=3149/system_error/
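One way out is to edit the stale .qmail-default so it no longer points at the deleted mailbox. A sketch, assuming bounce-no-mailbox is the fallback you want (it is vdelivermail's standard "reject unknown users" argument, but confirm the behavior against your vpopmail build before relying on it):

```
| /home/vpopmail/bin/vdelivermail '' bounce-no-mailbox
```

With that in place, mail to removed accounts bounces immediately instead of sitting in the queue as deferrals.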
A couple of quick solutions:
Find cronjobs that were modified recently:
[root@SERVER cron]# find /var/spool/cron -type f -mtime -3 | xargs ls -al
After commenting out suspect lines in the listed users' crontabs, you can dump the process list to a file every 5 seconds or so with:
[root@SERVER ~]# touch /root/ps-list.txt
[root@SERVER ~]# watch -n 5 "ps aux >> /root/ps-list.txt"
If the server crashes, you can then review the last few lines of /root/ps-list.txt to see which processes appear to be overwhelming the server.
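One drawback of the watch approach is that the snapshots aren't timestamped. A sketch of an alternative that marks each pass with date (limited here to two iterations and a mktemp file for illustration; in practice you'd loop indefinitely with sleep 5, appending to /root/ps-list.txt):

```shell
#!/bin/sh
# Timestamped process-list snapshots -- a sketch; loop bounds, sleep
# interval, and output path are illustrative only.
OUT=$(mktemp)
i=0
while [ "$i" -lt 2 ]; do
  date >> "$OUT"      # mark when each snapshot was taken
  ps aux >> "$OUT"
  i=$((i + 1))
  sleep 1
done
wc -l < "$OUT"        # the log grows with every pass
```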
Look at mdstat to see if a partition has been dropped from the array:
root@SERVER [~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1 sda1
      521984 blocks [2/2] [UU]
md1 : active raid1 sda3
      483668864 blocks [2/1] [U_]
The [U_] shows that sdb3 is out of the array md1. To add /dev/sdb3 back into the array, we do the following:
root@SERVER [~]# mdadm /dev/md1 -a /dev/sdb3
mdadm: re-added /dev/sdb3
root@SERVER [~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1 sda1
      521984 blocks [2/2] [UU]
md1 : active raid1 sdb3 sda3
      483668864 blocks [2/1] [U_]
      [>....................]  recovery =  0.0% (2432/483668864) finish=6448.8min speed=1216K/sec
Running echo 100000 > /proc/sys/dev/raid/speed_limit_min will speed up the software RAID rebuild process:
root@SERVER [~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1 sda1
      521984 blocks [2/2] [UU]
md1 : active raid1 sdb3 sda3
      483668864 blocks [2/1] [U_]
      [==>..................]  recovery = 11.0% (53583104/483668864) finish=1030.4min speed=6954K/sec
unused devices: <none>
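As with the inotify tweak earlier, the echo into /proc only lasts until reboot. The same limit can be set in /etc/sysctl.conf (the value mirrors the echo above; if rebuilds still cap out, dev.raid.speed_limit_max may need raising as well):

```
# /etc/sysctl.conf -- minimum md rebuild speed in KB/s per device
dev.raid.speed_limit_min = 100000
```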