Hi All,
We've been running our AWS server (a t2.2xlarge) since October 2019 with no issues.
We use Cloudflare, Imunify360, and KernelCare; our typical load averages are 0.51 0.51 0.50.
Yesterday we were notified the site was running slowly, and I logged in to see load averages of 365.55 400.98 411.46.
I looked at the process manager and saw this process using 50% of the CPU:
Code:
/usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid
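Unfortunately I restarted things before capturing what MySQL was actually doing. For next time, this is roughly what I plan to grab first (just a sketch; it assumes root access to MySQL):

Code:
# Show what every MySQL connection is running at that moment
mysqladmin -u root --verbose processlist

# Or, from the mysql shell:
#   SHOW FULL PROCESSLIST;

# And turn on the slow query log going forward (mysql shell):
#   SET GLOBAL slow_query_log = ON;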
I then restarted MySQL, rebooted the server, and upgraded to the WHM v90.0.8 release.
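(For reference, the command-line equivalents of the restart and upgrade, assuming the stock cPanel script locations, are roughly:)

Code:
/scripts/restartsrv_mysql          # restart MySQL via cPanel's service manager
/usr/local/cpanel/scripts/upcp     # run the cPanel/WHM update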
I have checked our error_log, our messages file, and our access_log, but I cannot see anything that would explain the site crashing or hanging.
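(Specifically, these are the files I tailed; the paths assume a standard cPanel/CentOS layout, so yours may differ:)

Code:
tail -n 500 /usr/local/apache/logs/error_log    # Apache error_log
tail -n 500 /var/log/messages                   # system messages file
tail -n 500 /usr/local/apache/logs/access_log   # Apache access_log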
Is there anything else I can try to see what caused the issue?
Like I said, we've been running for close to a year with no problems, so you would think something would stand out in a log, but we do not know where else to look on the server to find what caused the problem.
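The only other idea I had was pulling the historical load data from sysstat, if it happens to be installed (a sketch; "sa15" is just an example file for the 15th of the month and would need adjusting to the incident date):

Code:
# Run-queue and load-average history for a given day, from sysstat's daily file
sar -q -f /var/log/sa/sa15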
Thanks so much!