Article ID: 113756, created on Apr 24, 2012, last review on Jun 17, 2016

  • Applies to:
  • Plesk 12.5 for Linux
  • Plesk 12.0 for Linux
  • Virtuozzo 6.0
  • Virtuozzo containers for Linux
  • Virtuozzo containers for Windows

Basic Checks

Determine the networking mode the container operates in.


On a Windows-based Hardware Node:

  1. See the settings of the Hardware Node's adapter the container is bridged to.

    ipconfig /all
  2. See the network and the network mode.

    vzlist -a -o ctid,nettype,network,ip
  3. See the routing table.

    route print


On a Linux-based Hardware Node:

  1. See the list of container/node interfaces. If there is a vethCTID.x interface, the container is bridged.

    vznetcfg if list
  2. See all interfaces.

    ip a l
  3. See the routing table.

    ip r l
  4. See more detailed information about the bridge (brX) interface.

    brctl show
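The Linux checks above can be wrapped into one pass. A minimal sketch, assuming a POSIX shell on the node; `run_check` is a hypothetical helper, and `vznetcfg`/`brctl` are Virtuozzo and bridge-utils tools that may be absent on non-Virtuozzo hosts, so unavailable tools are skipped:

```shell
# Run each diagnostic command if its binary is installed, otherwise
# note that it was skipped. run_check is a helper for this sketch only.
run_check() {
    tool=${1%% *}                 # first word of the command line
    if command -v "$tool" >/dev/null 2>&1; then
        echo "== $1 =="
        $1
    else
        echo "== $1 skipped ($tool not installed) =="
    fi
}

run_check "vznetcfg if list"
run_check "ip a l"
run_check "ip r l"
run_check "brctl show"
```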

Determine the correctness of the container's network settings


For a container in bridged mode:

  1. The container's IP address, netmask, and default gateway should be set up correctly -- the same way as if you were adding a physical host to the same network.

  2. The router in the LAN segment to which we are adding the bridged container should be able to route packets to the container interface -- that is, there must not be a static ARP table configuration on the router.

  3. Check the ARP table with arp -a.

  4. Always remember the container belongs to the same network segment as the physical adapter on the Hardware Node to which it is bridged.
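Step 4 above boils down to a subnet membership check. A minimal sketch in POSIX shell; `ip_to_int` and `same_subnet` are hypothetical helpers, and the addresses and /24 prefix are example values, not values from this article:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
    old_ifs=$IFS; IFS=.
    set -- $1
    IFS=$old_ifs
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# same_subnet IP1 IP2 PREFIXLEN: exit 0 if both IPs share the network.
same_subnet() {
    mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
    [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

# Example: container IP vs. the node adapter's IP, /24 network.
same_subnet 192.168.1.10 192.168.1.200 24 && echo "same network segment"
```

If the check fails, the bridged container was given an IP outside the segment of the physical adapter it is bridged to.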


For a container in host-routed mode:

  1. Make sure the container has the correct point-to-point settings -- the default route in the container is via venet0; the netmask is 255.255.255.255 (/32).

    ifconfig | grep venet
  2. Make sure there is a static ARP entry on the Hardware Node for the container IP address.

    arp -a | grep IP
  3. Make sure there is a route for the container's IP (on the node) to route packets via venet0.


    On Linux:

    route -n

    On Windows:

    route print
  4. If there is no ARP entry or route, check whether you can ARP the container's IP address from the node's interface. If you see an IP address conflict, the IP is already assigned to another host in the same LAN segment. If ARPing fails, the likely causes are a router misconfiguration or Parallels Virtuozzo Containers (PVC) being unable to send ARP announcements (arpsend) on the node's default interface.

    arpsend -c 1 -w 1 -D -e IP eth0
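Steps 2 and 3 above can be scripted as quick yes/no checks. A minimal sketch, assuming `arp` and `ip` are present on the node; `has_arp_entry`, `has_venet_route`, and the `CT_IP` value are illustrative, not part of any Virtuozzo tool:

```shell
# Return 0 if the node's ARP table has an entry for the given IP.
has_arp_entry() {
    arp -an 2>/dev/null | grep -qw "$1"
}

# Return 0 if the node routes the given IP via venet0.
has_venet_route() {
    ip route show 2>/dev/null | grep -q "^$1 .*venet0"
}

CT_IP=10.10.10.101    # example container IP
has_arp_entry "$CT_IP"   && echo "ARP entry present"    || echo "no ARP entry"
has_venet_route "$CT_IP" && echo "venet0 route present" || echo "no venet0 route"
```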

In Addition

If a VE is reachable via ping but its resources are not accessible, make sure there is no IP conflict.

traceroute (or tracert on Windows) can be used to check whether the ICMP packets reach the correct host, and arping can be used to check the MAC address of the host that answers the pings. For containers in host-routed mode, the MAC should be the same as the Hardware Node's:

# arping
ARPING from eth0
Unicast reply from [BC:AE:C5:0A:66:86]  0.712ms
Unicast reply from [BC:AE:C5:0A:66:86]  0.756ms
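The conflict check on arping output can be automated by counting distinct replying MACs. A minimal sketch; `count_reply_macs` is a hypothetical helper (not part of arping), and more than one distinct MAC in the replies indicates an IP conflict:

```shell
# Count the distinct MAC addresses appearing in bracketed
# "Unicast reply from [xx:xx:...]" lines on stdin.
count_reply_macs() {
    grep -o '\[[0-9A-Fa-f:]*\]' | sort -u | wc -l
}

# Example: pipe captured arping output through the helper.
# A result of 1 means a single host answers; 2 or more means a conflict.
arping_output='Unicast reply from [BC:AE:C5:0A:66:86]  0.712ms
Unicast reply from [BC:AE:C5:0A:66:86]  0.756ms'
printf '%s\n' "$arping_output" | count_reply_macs
```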

Documents to Refer to

User's Guide

Frequently Used KBs

KB#1601 [Info] Is a VPN client supported inside a VE?

KB#1661 [FIX] Ipconfig shows nothing inside VE

KB#5243 [FIX] VE fails to initialize network during startup

KB#112961 How to create container attached to two different networks

KB#3061 I have more than one network interface (NIC) with IPs from different networks in my server and I want to use IPs of all networks in VEs simultaneously

KB#113053 Bridged network is not accessible from the host-routed and vice versa

KB#1737 [FIX] VE loses network connectivity when switched from host-route to bridged mode networking

KB#1226 How do I get amount of network traffic consumed by a container?

Known Issues


The container network randomly loses connectivity.

Check whether the bridged and host-routed networks are assigned for the container. For example:


In this case, we need to remove one network and assign a new IP address.

# vzctl set CTID --netif_del venet1 --save
Deleting virtual adapters: veth127.1
Saved parameters for Container 127 


IPv6 inside containers works if the node has an IPv6 address assigned, but not if the node only has IPv4. Containers in bridged mode use only the node's NIC, while containers in host-routed mode also use the node's routing. Therefore, if you assign an IPv6 address to a container but not to the node, there will be no IPv6 routing for the containers -- this is by design.
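The precondition above can be verified before assigning an IPv6 address to a container. A minimal sketch; `node_has_ipv6` is a hypothetical helper that checks whether the node holds any global-scope IPv6 address:

```shell
# Return 0 if the node has at least one global-scope IPv6 address.
node_has_ipv6() {
    ip -6 addr show scope global 2>/dev/null | grep -q inet6
}

if node_has_ipv6; then
    echo "node has a global IPv6 address; container IPv6 can be routed"
else
    echo "assign an IPv6 address to the node before the container"
fi
```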


After rebooting, the server shows the error "Bringing up interface eth1: bnx2 device eth1 does not seem to be present, delaying initialization". The absence of 70-persistent-net.rules causes NICs to be randomly assigned to interfaces.

This is not actually related to Parallels Virtuozzo Containers. It is a common problem with udev and NICs that occurs when there are no persistent rules for the NIC-to-interface assignment.
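A persistent-net rule pins an interface name to a NIC's MAC address so the assignment survives reboots. A minimal sketch of such a rule; the MAC address is an example value, and the file is written to the current directory here rather than directly to /etc/udev/rules.d:

```shell
# Write one persistent-net rule binding the NIC with the given MAC
# (example value) to the name eth1. On a real node this file lives
# under /etc/udev/rules.d/.
RULES=./70-persistent-net.rules
cat > "$RULES" <<'EOF'
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="bc:ae:c5:0a:66:86", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
EOF
```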


Occasionally, the ping test fails between containers. Container 1 has its external connection in host-routed mode and its LAN connection in bridged mode; Container 2 has the opposite settings.

Search Words

No traffic from the container

network goes down


vps error

reboot /etc/hosts

hardware node inaccessible


network unreachable

No DHCP address

Can't ping outside of the container

can't access the public network from the hostnode hypervisor

Container keeps disconnecting

No routing outside the hardware node

Hardware node showing high packet drop rate

network connectivity issues



pcs6 bridged no ping

ip ve

network issues

pcs bridge



network doesn't work after migration

Unable to connect to

no network inside container


Destination host unreachable from the containers

site not working

can't connect to virtual node

can't access other container

vps network settings not saved

can not remote

VM fail over not happened during hardware reboot activity

