Saturday, June 27, 2015

Threaded Python 3 proxy checker is massively spiking the load average on AWS EC2 instance

This is very peculiar...

I have written a threaded proxy checker in Python 3 using pycurl.

It works great, chugging along and sipping CPU cycles: at most 30% CPU and 2% memory as seen in the top command.
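For context, the structure of such a checker might look like the minimal sketch below. The names (check_proxy, run_pool) and the stub check are hypothetical, not from the original script; in the real checker, check_proxy would issue a request through the proxy with pycurl and a timeout.

```python
# Hypothetical sketch of a threaded proxy-checker worker pool.
# The real script uses pycurl inside check_proxy; a stub is used here
# so the threading structure can be shown without network access.
import queue
import threading

def check_proxy(proxy):
    # Placeholder: the real checker would perform an HTTP request
    # through `proxy` via pycurl and return True on success.
    return proxy.endswith(":8080")

def run_pool(proxies, num_threads=4):
    tasks = queue.Queue()
    for p in proxies:
        tasks.put(p)
    good, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                proxy = tasks.get_nowait()
            except queue.Empty:
                return  # queue drained, worker exits
            if check_proxy(proxy):
                with lock:  # protect the shared results list
                    good.append(proxy)

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return good

print(run_pool(["10.0.0.1:8080", "10.0.0.2:3128"]))
```

With this shape, each thread pulls proxies from a shared queue until it is empty, so CPU use stays low while most time is spent waiting on network I/O.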

Until it doesn't...

The script keeps running and doing its job, and the server stays responsive, but occasionally the load average reported by top spikes to 75+ for seemingly no reason, then quickly drops back down to 0.20-0.30.

The wait and steal percentages never push above 1%, and the python3 process never goes above 30% CPU either.

What could be causing the CPU load average to spike so high and then quickly drop back down?

It's certainly the python3 script causing the load average spikes, because they never happen when the script isn't running, and I have watched top on this server for weeks over the past few months.

Does EC2 calculate the load average differently than other servers, perhaps?

Usually when the load average spikes above, say, 5.0 for more than a few minutes... as it has in the past when I was writing to MySQL too frequently... everything stops. Wait and steal percentages go through the roof, and I can barely even log into the server via SSH.

This load average seems fake to me, could it be?

I have copied the output of a top command below to show you how strange this looks:

top - 23:43:58 up 4 days,  6:51,  2 users,  load average: 11.79, 4.12, 2.61
Tasks: 116 total,   2 running, 114 sleeping,   0 stopped,   0 zombie
%Cpu(s): 34.2 us,  0.7 sy,  0.0 ni, 64.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.3 st
KiB Mem:   2048532 total,  1484256 used,   564276 free,   139420 buffers
KiB Swap:        0 total,        0 used,        0 free.   674300 cached Mem

0.0% wait, virtually 0% steal, 64% idle CPU, and a load average through the roof... how can this be?
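One fact worth keeping in mind when reading that top output: on Linux, the load average counts tasks that are runnable or in uninterruptible sleep, not CPU utilisation, so a burst of threads that all become runnable at once (for example, many curl timeouts expiring together) can briefly inflate it while %CPU, wait, and steal stay low. A quick way to watch the load relative to core count (a stdlib sketch, not from the original post; the function name is mine):

```python
# Sample the 1-minute load average and normalise it by core count.
# Values well above 1.0 per CPU mean more runnable tasks than cores.
import os

def load_per_cpu():
    one_min, _five, _fifteen = os.getloadavg()
    cpus = os.cpu_count() or 1
    return one_min / cpus

print(f"1-min load per CPU: {load_per_cpu():.2f}")
```

Running this in a loop alongside the checker would show whether the spikes line up with moments when many worker threads wake at the same time.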
