Article ID: 118693, created on Nov 17, 2013, last review on May 11, 2014

  • Applies to:
  • Operations Automation
  • Panels
  • Virtuozzo
  • Virtuozzo containers for Linux
  • Virtuozzo hypervisor


The following messages appear in /var/log/messages on the hardware node:

[2814258.430156] TCP: time wait bucket table overflow (CT101) 
[2814258.449661] TCP: time wait bucket table overflow (CT101) 
[2814258.450743] TCP: time wait bucket table overflow (CT101) 
[2814258.475297] TCP: time wait bucket table overflow (CT101)

What do these messages mean, and how can they be resolved?


These messages mean that TW buckets (TCP sockets in the TIME_WAIT state) have hit one of their limits: either the maximum allowed number of buckets or the share of kernel memory they may consume.

The number of current TIME_WAIT connections (tw_count) can be found with the netstat utility:

[root@vz ~]# netstat -antp | grep TIME_WAIT
tcp        0      0                 TIME_WAIT   -
tcp        0      0                 TIME_WAIT   -
tcp        0      0                 TIME_WAIT   -
tcp        0      0 ::ffff:      ::ffff:         TIME_WAIT   -

[root@vz ~]# netstat -antp | grep TIME_WAIT | wc -l
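
The same check can be run inside a particular container (container ID 101 below is only an example), and on nodes with a recent iproute2 package, ss offers an equivalent query; both commands are a sketch under those assumptions:

[root@vz ~]# vzctl exec 101 'netstat -ant | grep TIME_WAIT | wc -l'
[root@vz ~]# ss -tan state time-wait | tail -n +2 | wc -l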

The problem may appear in one of the following cases:

1) The per-system tw_count is greater than the per-system max_tw_buckets limit:

    [root@vz ~]# sysctl -a | grep tcp_max_tw_buckets
    net.ipv4.tcp_max_tw_buckets = 262144

The default limit is quite large, so hitting it is unlikely. Still, this sysctl can be increased if necessary.
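
For example, the limit can be raised at runtime with sysctl -w and made persistent in /etc/sysctl.conf (the value 524288 below is purely illustrative; choose one appropriate for the node's memory):

    [root@vz ~]# sysctl -w net.ipv4.tcp_max_tw_buckets=524288
    net.ipv4.tcp_max_tw_buckets = 524288
    [root@vz ~]# echo "net.ipv4.tcp_max_tw_buckets = 524288" >> /etc/sysctl.conf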

2) The per-VE counter is greater than the per-VE max_tw_buckets limit:

    [root@vz ~]# sysctl -a | grep tcp_max_tw_buckets_ub
    net.ipv4.tcp_max_tw_buckets_ub = 16536

Increase this sysctl to get rid of the issue.
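
A possible sketch (the value 32768 is only an example; the containers' kmemsize must be able to accommodate the additional sockets):

    [root@vz ~]# sysctl -w net.ipv4.tcp_max_tw_buckets_ub=32768
    net.ipv4.tcp_max_tw_buckets_ub = 32768
    [root@vz ~]# echo "net.ipv4.tcp_max_tw_buckets_ub = 32768" >> /etc/sysctl.conf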

3) Inside a VE, tw_buckets consume too much memory (more than the allowed fraction of kmemsize):

    [root@vz ~]# vzctl exec 101 sysctl -a 2>/dev/null| grep net.ipv4.tcp_max_tw_kmem_fraction
    net.ipv4.tcp_max_tw_kmem_fraction = 384

The value 384 means 38.4% of kmemsize.

In this case, no fail counters are registered for the container, so the condition is quite hard to track down. Try increasing the kmemsize parameter for the container and check whether the messages are gone.
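
A minimal sketch of such a change with vzctl (the barrier:limit values below are illustrative only; pick values suited to the container's workload):

    [root@vz ~]# vzctl set 101 --kmemsize 33554432:36700160 --save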

4) kmemsize shortage inside the VE. In this case, new fail counters should be registered for the kmemsize parameter. Check whether this is the case:

    [root@vz ~]# awk '$6' /proc/bc/101/resources
                kmemsize                 16587096             18132992             24299200             26429120                    126

The last column shows the number of times the parameter has been exceeded. If the limit is being hit, increase the kmemsize parameter for the container.
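
After raising the limit (see the vzctl example above), the counter in the last column can be re-checked to confirm it no longer grows, for example:

    [root@vz ~]# grep kmemsize /proc/bc/101/resources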

Search Words


TCP: time wait bucket table overflow


