This post is part of my VMware VCIX-NV Study Guide and covers using the command line interface across all components of NSX.


Command Line Interface on NSX
Traditional network infrastructure is managed via the command line. Every network administrator out there is glued to it for configuration, troubleshooting and setup tasks. Even though this is changing with the rise of automation tools that present a GUI (ACI, APIC-EM, NSX, OpenFlow GUIs, etc.), most of them retain some form of command line interface (CLI). VMware NSX is no different.

NSX offers command line interfaces on the following:

  • NSX Manager
  • NSX Controller
  • NSX Edge Gateway Services appliance
  • NSX Logical Distributed Router
  • ESXi host (existing, but added functionality for NSX)

There are way too many useful commands to cover them all, so I fully recommend reading the entire (yes, entire!) NSX Command Line Interface Reference. As usual, the topics below are laid out from the VCIX-NV blueprint, and I’ll share the most useful commands for each to kickstart the process.

If you’re not a network administrator, keep in mind that a console prompt displayed like device> is exec mode, which you get by default when logging into a device. A prompt displayed like device# is enabled mode, which you reach by using the enable command and entering your password again. The examples below are shown in both exec and enabled mode; it’s up to you to get to the right one.

You should also check out the brilliant post by Sébastien Braun on NSX vSphere troubleshooting, as it has a lot of useful commands.

 

Manage and report on an NSX installation status using ESXi Command Line Interface (CLI) commands &

Manage and report on an NSX Infrastructure using NSX Manager, NSX Controller, and ESXi CLI commands

We’re going to run through the NSX components top to bottom, look at some CLI output and change some settings. While doing so, I’m going to assume that by now you know how to log in via SSH to the NSX component being referenced. 🙂

NSX Manager
You do not have the entire functionality of the NSX Manager at your disposal via the CLI; there does not seem to be a way to manage the integration with vCenter and SSO, among others. Let’s start by looking at the basic system configuration:

nsx-manager# show running-config
Building configuration...

Current configuration:
!
ntp server nl.pool.ntp.org
!
ip name server 8.8.8.8
!
hostname nsx-manager
!
interface mgmt
 ip address 10.192.123.80/24
!
ip route 0.0.0.0/0 10.192.123.1
!
web-manager
nsx-manager#

You need to be in enabled mode to show or modify anything configuration-related. Much like on a regular switch, the output shows the DNS, NTP and IP configuration of the NSX Manager. If you’d like to change something, let’s say the IP address of the NSX Manager, first go into configuration mode and set a new IP address:

nsx-manager# configure terminal
nsx-manager(config)# interface mgmt
nsx-manager(config-if)# ip address 10.192.123.90/24

Not sure if this remark is needed, but if you do this, you’ll lose your connection and need to set up the SSH session again. As with any other switch-type interface, you need to write the changes to the startup configuration file to keep them permanently:

nsx-manager# write memory
Building Configuration...
Configuration saved.
[OK]
nsx-manager#

Using the CLI is a good way to get detailed logging from the NSX Manager:

nsx-manager> show manager log reverse
2015-01-17 19:04:45.384 CET  INFO pool-23-thread-1 EndpointConfigurationManagerImpl:812 - scheduleConfigurationUpdateCheck: scheduling update check in 60 seconds.
2015-01-17 19:04:45.384 CET  INFO pool-23-thread-1 EndpointConfigurationManagerImpl:1174 - No USVM is defined for host host-16 - will retry
2015-01-17 19:04:45.384 CET  INFO pool-23-thread-1 EndpointConfigurationManagerImpl:1482 - USVM is not configured for host host-16
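If you dump these logs for offline analysis, the line format is regular enough to parse. A quick sketch (my own helper, not an NSX tool) that splits a manager log line into its fields:

```python
import re
from datetime import datetime

# Pattern based on the format visible above: date, time, zone, level,
# thread, class:line, then the free-form message. Field names are my own.
LOG_RE = re.compile(
    r"^(?P<date>\d{4}-\d{2}-\d{2}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{3}) "
    r"(?P<zone>\S+)\s+(?P<level>\S+) (?P<thread>\S+) (?P<source>\S+) - (?P<message>.*)$"
)

def parse_log_line(line):
    m = LOG_RE.match(line)
    if not m:
        raise ValueError("unrecognized log line: %r" % line)
    fields = m.groupdict()
    # Combine date and time into a real timestamp for sorting/filtering.
    fields["timestamp"] = datetime.strptime(
        "%s %s" % (fields["date"], fields["time"]), "%Y-%m-%d %H:%M:%S.%f"
    )
    return fields

line = ("2015-01-17 19:04:45.384 CET  INFO pool-23-thread-1 "
        "EndpointConfigurationManagerImpl:1174 - No USVM is defined for host host-16 - will retry")
entry = parse_log_line(line)
```

With the fields split out you can filter on level or source class instead of eyeballing the raw stream.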

NSX Controller
The NSX Controllers are Linux-based appliances that contain all information about the virtual network; they retain and control the active state of your network. Edges are registered here, along with their interfaces and routes. When requesting information from a controller cluster, make sure you connect to the master.

Let’s start by getting the controller cluster status:

nsx-controller # show control-cluster status
Type               Status                                      Since
--------------------------------------------------------------------------------
Join status:       Join complete                               01/16 15:06:25
Majority status:   Connected to cluster majority               01/17 06:22:37
Restart status:    This controller can be safely restarted     01/17 06:22:27
Cluster ID:        95e35052-cd43-4688-a8b3-910ce0fd50d7
Node UUID:         aadd57da-0d39-4377-a011-5abfd0620ebf

Role                Configured status  Active status
--------------------------------------------------------------------------------
api_provider        enabled            activated
persistence_server  enabled            activated
switch_manager      enabled            activated
logical_manager     enabled            activated
directory_server    enabled            activated
nsx-controller #

Or by getting a list of all deployed NSX Edges and showing the connected interfaces from one of them:

nsx-controller # show control-cluster logical-routers instance all
LR-Id        LR-Name            Hosts[]          Edge-Connection    Service-Controller
0x570d4551   default+edge-5                                         10.192.123.82
0x570d4552   default+edge-4     10.192.123.104                      10.192.123.81

The LR-Id is the internal ID assigned to the NSX Edge by the controllers. You’ll need that ID to get more information about a specific Edge, like getting the interface overview:

nsx-controller # show control-cluster logical-routers interface-summary 0x570d4551
Interface           Type    Id       IP[]
570d45510000000b    vxlan   0x138c   2.2.2.2/24
570d455100000002    vlan    0x1bc    192.168.99.10/24
570d45510000000a    vxlan   0x138b   1.1.1.1/24

This NSX Edge has three interfaces: one traditional VLAN interface and two new and shiny VXLAN interfaces. Let’s get some more intel on the interface with IP 2.2.2.2/24:

nsx-controller # show control-cluster logical-routers interface 0x570d4551 570d45510000000b

Interface-Name:     570d45510000000b
Logical-Router-Id:  0x570d4551
Id:                 0x138c
Type:               vxlan
IP:                 2.2.2.2/24
DVS-UUID:           78dc0350-3773-5831-1aaf-f8b44c18decd
Mac:                02:50:56:83:3d:66
Mtu:                1500
Multicast-IP:       0.0.0.1
Designated-IP:
Flags:              0x280
Bridge-Id:
Bridge-Name:
DHCP-relay-server:

Did you notice that another internal ID (the interface ID) was used to get the detailed information? The controllers are full of these internal IDs. They’re always in the first column, so they’re pretty easy to spot, but keep in mind that you’ll need them to reference objects.

NSX Edge Services Gateway
There are two types of NSX Edges: the ESG and the LDR. On the CLI you don’t see a lot of difference between them, apart from the ESG having a lot more commands (simply because it has more functionality). The following examples apply to both unless otherwise mentioned. The CLI of both Edge types is meant for showing information and debugging traffic flow; no real configuration is possible there.

Let’s get a list of interfaces to start. NSX Edges are deployed with a number of interfaces, whether or not you decide to use them. This means the output of the interface list can be huge, even though you thought you had configured just two interfaces.

vShield-edge-2-0> show interface
Interface VDR is up, line protocol is up    
  index 2 metric 1 mtu 1500 <UP,BROADCAST,RUNNING,NOARP>
...snip...
Interface br-sub is up, line protocol is up
...snip...
Interface lo is up, line protocol is up
...snip...
Interface vNic_0 is up, line protocol is up   
  index 3 metric 1 mtu 1500 <UP,BROADCAST,RUNNING,MULTICAST>    
  HWaddr: 00:50:56:83:48:7b
  inet6 fe80::250:56ff:fe83:487b/64
  inet 10.192.123.88/24
  proxy_arp: disabled
  Auto-duplex (Full), Auto-speed (2191Mb/s)
    input packets 116733, bytes 7800485, dropped 9378, multicast packets 6    
    input errors 0, length 0, overrun 0, CRC 0, frame 0, fifo 0, missed 0   
    output packets 21624, bytes 2151234, dropped 0   
    output errors 0, aborted 0, carrier 0, fifo 0, heartbeat 0, window 0   
    collisions 0
Interface vNic_4 is up, line protocol is up   
  index 4 metric 1 mtu 1500 <UP,BROADCAST,RUNNING,MULTICAST>   
  HWaddr: 00:50:56:83:dc:35   
  inet 10.1.5.12/24
...snip...
Interface vNic_8 is down
...snip...
Interface vNic_1 is up, line protocol is up
...etc...

Regular interfaces are called vNic_X, but you’ll notice a few other interfaces listed in the output. The VDR interface is the Virtual Distributed Router (or LDR), the br-sub interface is the Layer 2 VPN tunnel interface and the lo interface is a loopback interface which can be used in routing protocol configuration (just like regular routers).

We’ve got the interfaces; let’s see if there are any ARP entries for connected hosts:

vShield-edge-2-0> show arp
-----------------------------------------------------------------------
vShield Edge ARP Cache:
IP Address              Interface    MAC Address          State
10.192.123.1            vNic_0       00:00:0c:9f:f0:59    REACHABLE
192.168.1.200           vNic_1       00:50:56:83:b3:df    REACHABLE

What about the routing table which the Edge uses to make routing decisions?

vShield-edge-2-0> show ip route

Codes: O - OSPF derived, i - IS-IS derived, B - BGP derived,
C - connected, S - static, L1 - IS-IS level-1, L2 - IS-IS level-2,
IA - OSPF inter area, E1 - OSPF external type 1, E2 - OSPF external type 2,
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2

Total number of routes: 7

S      0.0.0.0/0          [1/1]     via 10.192.123.1
C      1.1.1.0/24         [0/0]     via 1.1.1.1
C      2.2.2.0/24         [0/0]     via 2.2.2.1
C      10.1.5.0/24        [0/0]     via 10.1.5.12
C      10.192.123.0/24    [0/0]     via 10.192.123.88
C      192.168.1.0/24     [0/0]     via 192.168.1.1
C      192.168.99.0/24    [0/0]     via 192.168.99.1
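Output like this is also easy to post-process. As a sketch (my own parsing, assuming the column layout above), you could turn the table into structured entries, for example to diff routing tables between Edges:

```python
# Sample lines copied from the `show ip route` output above.
ROUTE_OUTPUT = """\
S      0.0.0.0/0          [1/1]     via 10.192.123.1
C      1.1.1.0/24         [0/0]     via 1.1.1.1
C      10.192.123.0/24    [0/0]     via 10.192.123.88
"""

def parse_routes(text):
    routes = []
    for line in text.splitlines():
        parts = line.split()
        # Expect: code, prefix, [admin-distance/metric], "via", next-hop
        if len(parts) != 5 or parts[3] != "via":
            continue
        code, prefix, ad_metric, _, nexthop = parts
        ad, metric = ad_metric.strip("[]").split("/")
        routes.append({
            "code": code,                 # S = static, C = connected, ...
            "prefix": prefix,
            "admin_distance": int(ad),
            "metric": int(metric),
            "nexthop": nexthop,
        })
    return routes

routes = parse_routes(ROUTE_OUTPUT)
```

Once parsed, a simple set comparison of the prefixes from two Edges shows which routes are missing where.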

One particularly handy command on the ESG displays the firewall flows, sorted to show the top network consumers:

vShield-edge-2-0> show firewall flows topN 2
...snip...
Chain usr_rules (2 references)
rid    pkts bytes target     prot opt in     out     source               destination
131073   288 19902 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0
------ flow info for rule 131073 ------
1: icmp     1 9 src=192.168.1.200 dst=74.125.71.139 type=8 code=0 id=37382 pkts=20757 bytes=1743588 src=74.125.71.139 dst=10.192.123.88 type=0 code=0 id=37382 pkts=20757 bytes=1743588 mark=2048 rid=131073 use=1
2: tcp      6 3599 ESTABLISHED src=10.192.120.150 dst=10.192.123.88 sport=1590 dport=22 pkts=1166 bytes=98156 src=10.192.123.88 dst=10.192.120.150 sport=22 dport=1590 pkts=1031 bytes=152413 [ASSURED] mark=2048 rid=131073 use=1

 

Manage and report on a Logical Switch using NSX Controller and ESXi CLI commands

Using the NSX Controllers and ESXi hosts, you’re able to retrieve a great deal of information about a logical switch. Unfortunately, there does not seem to be a way to get a list of all logical switches created, so you’ll have to look up the switch ID through the GUI (look for the Segment ID). Once you have the switch ID (or VNI, from here on), you can head to the CLI and find this logical switch’s dirty laundry:

nsx-controller# show control-cluster logical-switches vni 5001
VNI     Controller      BUM-Replication   ARP-Proxy    Connections    VTEPs
5001    10.192.123.82   Enabled           Enabled      2              2

This summary shows the assigned controller and the number of VTEPs connected to this logical switch. Let’s move on to finding out which ESXi hosts (VTEPs) are connected:

nsx-controller# show control-cluster logical-switches connection-table 5001
Host-IP            Port  ID
192.168.99.104     30969 2
192.168.90.103     12127 3

Now for the MAC addresses of virtual machines connected to this logical switch:

nsx-controller# show control-cluster logical-switches mac-records 5001

ESXi Host
Inside an ESXi host you’ve long been able to retrieve information using the esxcli command. The network directive inside esxcli has been expanded with vxlan commands, which translate to useful NSX information. For instance, from an ESXi host you can get a list of logical switches:

~ # esxcli network vswitch dvs vmware vxlan network list --vds-name=DSwitch
VXLAN ID  Multicast IP               Control Plane  Controller Connection  Port Count  MAC Entry Count  ARP Entry Count
--------  -------------------------  -------------  ---------------------  ----------  ---------------  ---------------
    5001  N/A (headend replication)  Enabled ()     10.192.123.81 (up)              1                1                0
    5004  N/A (headend replication)  Enabled ()     10.192.123.81 (up)              1                3                0
    5003  N/A (headend replication)  Enabled ()     10.192.123.82 (up)              1                2                0
    5002  N/A (headend replication)  Enabled ()     10.192.123.82 (up)              1                1                0

From the output above, you can get the VXLAN ID (also known as the Segment ID or Logical Switch ID), the kind of VXLAN replication used (here it’s unicast, which is why there is no multicast IP specified), the controller assigned to the logical switch, and the port, MAC and ARP entry counts within that logical switch.

Taking one step back, we can also show the distributed vSwitch which is handling our VXLAN traffic:

~ # esxcli network vswitch dvs vmware vxlan list
VDS ID                                           VDS Name  MTU   Segment ID    Gateway IP    Gateway MAC        Network Count  Vmknic Count
-----------------------------------------------  --------  ----  ------------  ------------  -----------------  -------------  ------------
78 dc 03 50 37 73 58 31-1a af f8 b4 4c 18 de cd  DSwitch   1600  192.168.99.0  192.168.99.1  ff:ff:ff:ff:ff:ff              3             1

You can also get the remote MAC addresses, which are pushed from the controllers to the ESXi host:

~ # esxcli network vswitch dvs vmware vxlan network mac list --vds-name=DSwitch --vxlan-id=5001
Inner MAC          Outer MAC          Outer IP        Flags
-----------------  -----------------  --------------  --------
00:50:56:b2:b2:c1  00:50:56:61:23:00  192.168.99.104  00000111
00:50:56:82:a2:21  ff:ff:ff:ff:ff:ff  192.168.99.103  00001111

And as my last example, you can also get a list of ESXi hosts that are linked to a certain logical switch and have VXLAN tunnels set up:

~ # esxcli network vswitch dvs vmware vxlan network vtep list --vds-name=DSwitch --vxlan-id=5001
IP               Segment ID     Is MTEP
--------------   -------------  -------
192.168.99.103   192.168.99.0   true
192.168.99.104   192.168.99.0   false

 

Manage and report on a Logical Router using NSX Controller, NSX Edge, and ESXi CLI commands

We’ve already covered retrieving logical router information from the NSX Controller in the first topic on this page, so we’re going to go straight to the NSX Edge itself.

There are a few commands that you might use in an operational sense. Most of them we’ve already covered in other topics, so I’m going to just list them:

Show attached hosts:

vShield-edge-11-0> show arp
-----------------------------------------------------------------------
vShield Edge ARP Cache:
IP Address     Interface    MAC Address         State
10.192.123.1   vNic_0       00:00:0c:9f:f0:59   REACHABLE
vShield-edge-11-0>

Show installed routing table:

vShield-edge-11-0> show ip route

Codes: O - OSPF derived, i - IS-IS derived, B - BGP derived,
C - connected, S - static, L1 - IS-IS level-1, L2 - IS-IS level-2,
IA - OSPF inter area, E1 - OSPF external type 1, E2 - OSPF external type 2,
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2

Total number of routes: 4

S     0.0.0.0/0          [1/1]     via 10.192.123.1
C     10.192.123.0/24    [0/0]     via 10.192.123.78
C     192.168.1.0/24     [0/0]     via 192.168.1.1
C     192.168.2.0/24     [0/0]     via 192.168.2.1

Show attached interfaces:

vShield-edge-11-0> show interface
Interface VDR is up, line protocol is up   
  index 2 metric 1 mtu 1500 <UP,BROADCAST,RUNNING,NOARP>   
  HWaddr: 96:b4:01:91:36:1b   
  inet6 fe80::94b4:1ff:fe91:361b/64   
  inet 192.168.1.1/24
...snip...

Show current network flows:

vShield-edge-11-0> show firewall flows
Chain PREROUTING (policy ACCEPT 26240 packets, 3473K bytes)
rid     pkts     bytes     target    prot    opt    in     out    source               destination

Chain INPUT (policy ACCEPT 25689 packets, 3406K bytes)
rid     pkts     bytes     target    prot    opt    in     out    source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
rid     pkts     bytes     target    prot    opt    in     out    source               destination

Chain OUTPUT (policy ACCEPT 24301 packets, 3356K bytes)
rid     pkts     bytes     target    prot    opt    in     out    source               destination

Chain POSTROUTING (policy ACCEPT 24301 packets, 3356K bytes)
rid     pkts     bytes     target    prot    opt    in     out    source               destination
vShield-edge-11-0>

ESXi Host Commands
As the distributed router is embedded in the ESXi kernel, the ESXi host needs to know about its configuration in order to handle local routing of traffic. This means the routing information that the LDR Control VM collects is pushed down to the ESXi host, so the host can make informed decisions on where to send traffic.

VMware has extended the CLI on ESXi so you can pull that information out; I’ll walk you through some of the most-used commands. One thing to note is that ‘LDR’ (logical distributed router) translates to ‘VDR’ (virtual distributed router) inside ESXi. Let’s start by getting an overview of the logical router instances located on the host:

~ # net-vdr --instance -l

VDR Instance Information :
---------------------------

Vdr Name:               default+edge-11
Vdr Id:                 1969527526
Number of Lifs:         5
Number of Routes:       4
State:                  Enabled
Controller IP:          10.192.123.81
Control Plane IP:       10.192.123.103
Control Plane Active:   Yes
Num unique nexthops:    1
Generation Number:      0
Edge Active:            Yes

Once you have the Vdr Name, you can look up the interfaces of this NSX Edge, called logical interfaces (LIFs):

~ # net-vdr --lif -l default+edge-11

VDR default+edge-11 LIF Information :

Name:                75649ae600000002
Mode:                Routing, Distributed, Uplink
Id:                  Vlan:            495
Ip(Mask):            10.192.123.77(255.255.255.0)
Connected Dvs:       DSwitch
Designated Instance: Yes
DI IP:               10.192.123.103
State:               Enabled
Flags:               0x8
DHCP Relay:          Not enabled

Name:                75649ae60000000b
Mode:                Routing, Distributed, Internal
Id:                  Vlan:            444
Ip(Mask):            192.168.2.1(255.255.255.0)
Connected Dvs:       DSwitch
Designated Instance: Yes
DI IP:               10.192.123.103
State:               Enabled
Flags:               0x88
DHCP Relay:          Not enabled

Name:                75649ae60000000a
Mode:                Routing, Distributed, Internal
Id:                  Vlan:            333
Ip(Mask):            192.168.1.1(255.255.255.0)
Connected Dvs:       DSwitch
Designated Instance: Yes
DI IP:               10.192.123.103
State:               Enabled
Flags:               0x88
DHCP Relay:          Not enabled

As mentioned, the ESXi host knows about the routes the LDR has, so it can make an informed decision on where to route traffic; if it can route locally, it will. To verify the routes installed in the LDR, use the following command:

~ # net-vdr --route -l default+edge-11

VDR default+edge-11 Route Table
Legend: [U: Up], [G: Gateway], [C: Connected], [I: Interface]
Legend: [H: Host], [F: Soft Flush] [!: Reject] [E: ECMP]

Destination      GenMask          Gateway          Flags    Ref Origin   UpTime     Interface
-----------      -------          -------          -----    --- ------   ------     ---------
0.0.0.0          0.0.0.0          10.192.123.1     UG       1   AUTO     120        75649ae600000002
10.192.123.0     255.255.255.0    0.0.0.0          UCI      1   MANUAL   126        75649ae600000002
192.168.1.0      255.255.255.0    0.0.0.0          UCI      1   MANUAL   126        75649ae60000000a
192.168.2.0      255.255.255.0    0.0.0.0          UCI      1   MANUAL   126        75649ae60000000b
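The forwarding decision itself is a longest-prefix match over this table. As a rough illustration (my own sketch, not the actual kernel code), the lookup works like this:

```python
import ipaddress

# Route table taken from the net-vdr output above. A gateway of None marks
# a connected network, reachable directly via the listed LIF.
ROUTES = [
    ("0.0.0.0/0",       "10.192.123.1", "75649ae600000002"),
    ("10.192.123.0/24", None,           "75649ae600000002"),
    ("192.168.1.0/24",  None,           "75649ae60000000a"),
    ("192.168.2.0/24",  None,           "75649ae60000000b"),
]

def lookup(dst):
    ip = ipaddress.ip_address(dst)
    best = None
    for prefix, gateway, lif in ROUTES:
        net = ipaddress.ip_network(prefix)
        # Keep the most specific (longest) matching prefix.
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, gateway, lif)
    return best
```

A destination like 192.168.1.50 matches both the default route and 192.168.1.0/24; the /24 wins, so traffic goes out the 75649ae60000000a LIF without leaving the host. Anything unknown falls back to the default route via 10.192.123.1.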

Now you have the most important information about the routing of the logical distributed router inside the ESXi kernel. There’s another function the LDR can perform: bridging VXLAN and VLAN networks. We covered how it works in previous chapters; now let’s see how to get information about the bridge configuration inside the ESXi host.

To get an overview of existing bridges, you can execute the following command:

~ # net-vdr --bridge -l default+edge-11

VDR 'default+edge-11' bridge 'zulu' config :

Bridge config:
Name:id             zulu:1
Portset name:
DVS name:           DSwitch
Ref count:          1
Number of networks: 2
Number of uplinks:  0

    Network 'vxlan-5001-type-bridging' config:
    Ref count:          1
    Network type:       1
    VLAN ID:            0
    VXLAN ID:           5001
    Ageing time:        300
    Fdb entry hold time:1
    FRP filter enable:  1

       Network port ID '0x4000029' config:
       Ref count:          1
       Port ID:            0x4000029
       VLAN ID:            4095
       IOChains installed: 0

    Network 'vlan-23-type-bridging' config:
    Ref count:          1
    Network type:       1
    VLAN ID:            23
    VXLAN ID:           0
    Ageing time:        300
    Fdb entry hold time:1
    FRP filter enable:  1

       Network port ID '0x4000029' config:
       Ref count:          1
       Port ID:            0x4000029
       VLAN ID:            4095
       IOChains installed: 0

From this output you can see that there is a bridge called ‘zulu’, that it has 2 networks attached and that those networks are VXLAN 5001 and VLAN 23. For troubleshooting purposes, you can also display the learned MAC addresses on both sides:

~ # net-vdr --mac-address-table -b default+edge-11

VDR 'default+edge-11' bridge 'zulu' mac address tables :


Network 'vxlan-5001-type-bridging' MAC address table:
total number of MAC addresses:    0
number of MAC addresses returned: 0
Destination Address  Address Type  VLAN ID  VXLAN ID  Destination Port  Age
-------------------  ------------  -------  --------  ----------------  ---


Network 'vlan-23-type-bridging' MAC address table:
total number of MAC addresses:    0
number of MAC addresses returned: 0
Destination Address  Address Type  VLAN ID  VXLAN ID  Destination Port  Age
-------------------  ------------  -------  --------  ----------------  ---

Normally there would be a list of MAC addresses learned from the two networks, but they have all timed out in this case. 😉
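The ageing behaviour explains those empty tables: a learned MAC address is kept only as long as it was seen within the ageing time (300 seconds in the config above). A toy model of such a table (my own, for intuition only):

```python
# Per-network bridge MAC table: addresses are learned from traffic and
# expire after the ageing time, which is why the tables above can be empty.
AGEING_TIME = 300  # seconds, matching the 'Ageing time' in the bridge config

class MacTable:
    def __init__(self, ageing=AGEING_TIME):
        self.ageing = ageing
        self.entries = {}  # mac -> last-seen timestamp (seconds)

    def learn(self, mac, now):
        self.entries[mac] = now

    def active(self, now):
        # Only return addresses seen within the ageing window.
        return [m for m, seen in self.entries.items() if now - seen <= self.ageing]

table = MacTable()
table.learn("00:50:56:83:3d:66", now=0)
table.learn("00:50:56:83:dc:35", now=250)
# At t=400 the first entry (last seen at t=0) has aged out.
```

Generate some traffic across the bridge and re-run the net-vdr command to see entries appear, then wait out the ageing time to watch them disappear again.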

 

Manage and report on a Distributed Firewall using NSX Manager and ESXi CLI commands

The Distributed Firewall lives inside the ESXi kernel, so each ESXi host knows which policies are configured on the virtual machines it hosts. You can inspect the policies applied to a VM through the ESXi command line.

First, we need to find the UUID of the virtual machine called App01:

~ # summarize-dvfilter | grep App01
world 1764245 vmm0:App01 vcUuid:'50 03 e7 19 22 48 f7 64-41 9a c8 4b 6f 75 31 69'

Then we look for the filter name for that virtual machine UUID:

~ # vsipioctl getfilters

Filter Name              : nic-1764245-eth1-vmware-sfw.2
VM UUID                  : 50 03 e7 19 22 48 f7 64-41 9a c8 4b 6f 75 31 69
VNIC Index               : 1
Service Profile          : --NOT SET--

Filter Name              : nic-1764245-eth0-vmware-sfw.2
VM UUID                  : 50 03 e7 19 22 48 f7 64-41 9a c8 4b 6f 75 31 69
VNIC Index               : 0
Service Profile          : --NOT SET--

As you might notice, this App01 virtual machine has two vNICs, which is why it has two filters attached to it.

After getting the filter name, you can look up the rules for that filter:

~ # vsipioctl getrules -f nic-1764245-eth0-vmware-sfw.2
ruleset domain-c7 {
  # Filter rules   
  rule 1011 at 1 inout protocol any from addrset ip-securitygroup-15 to any drop;   
  rule 1006 at 2 inout protocol any from addrset ip-securitygroup-15 to any drop;   
  rule 1010 at 3 inout protocol tcp from addrset ip-securitygroup-12 to addrset ip-securitygroup-13 port 5672 accept;   
  rule 1009 at 4 inout protocol tcp from addrset src1009 to addrset ip-securitygroup-14 port 3306 accept;   
  rule 1008 at 5 inout protocol tcp from any to addrset ip-securitygroup-12 port 443 accept with log;   
  rule 1008 at 6 inout protocol tcp from any to addrset ip-securitygroup-12 port 80 accept with log;   
  rule 1008 at 7 inout protocol tcp from any to addrset ip-securitygroup-12 port 1234 accept with log;   
  rule 1004 at 8 inout protocol ipv6-icmp icmptype 135 from any to any accept;   
  rule 1004 at 9 inout protocol ipv6-icmp icmptype 136 from any to any accept;   
  rule 1007 at 10 inout protocol any from any to any accept;   
  rule 1003 at 11 inout protocol udp from any to any port 67 accept;   
  rule 1003 at 12 inout protocol udp from any to any port 68 accept;   
  rule 1002 at 13 inout protocol any from any to any accept;
}

ruleset domain-c7_L2 {   
  # Filter rules   
  rule 1001 at 1 inout ethertype any from any to any accept;
}

~ #
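Note the “at N” positions in the ruleset: rules are evaluated top-down and the first matching rule wins, which is why the drop rules for ip-securitygroup-15 at positions 1 and 2 shadow the broad accept further down. A simplified sketch of that evaluation order (my own, ignoring protocols and ports):

```python
# First-match evaluation over a trimmed version of the ruleset above.
RULES = [
    # (position, source address set, action)
    (1,  "ip-securitygroup-15", "drop"),
    (2,  "ip-securitygroup-15", "drop"),
    (10, "any",                 "accept"),
]

def evaluate(source_set):
    # Try rules in order of their "at" position; first match decides.
    for _pos, src, action in sorted(RULES):
        if src in ("any", source_set):
            return action
    return "drop"  # assumed fallback for this sketch only
```

A VM in ip-securitygroup-15 is dropped at position 1 and never reaches the any-any accept; everything else falls through to it.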

You can also look up the address sets that these rules use to match traffic:

~ # vsipioctl getaddrsets -f nic-1764245-eth0-vmware-sfw.2
addrset ip-securitygroup-12 {
}
addrset ip-securitygroup-13 {
}
addrset ip-securitygroup-14 {
}
addrset ip-securitygroup-15 {
}
addrset src1009 {
}
~ #

 

Manage and report on an Edge Services VPN-Plus device using NSX Edge and client OS CLI commands

Once the SSL VPN-Plus feature on an NSX Edge has been configured, you can use the command line of the NSX Edge to report on the configuration and the current SSL VPN user sessions.

vShield-edge-6-0> show configuration sslvpn-plus
-----------------------------------------------------------------------
vShield Edge SSL VPN-Plus Config:
{   
  "sslvpn" : {   
    "enable" : true,   
    "webResources" : [],
    "users" : [
      { 
...snip...   
      }   
    ],   
    "serverSettings" : {   
      "certificateId" : null,   
      "vmSize" : 1,   
      "cipherList" : [   
        "RC4-MD5"   
      ],   
      "ips" : [   
        "10.192.123.153"   
      ],   
      "port" : 443,   
      "ccu" : 100     
    },  
...snip...   
  }
}

As you can see, the configuration is presented in JSON format with all settings included; handy for easy safe-keeping. On to some operational commands:
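Because the output is plain JSON, a saved copy can go straight into any JSON tooling. A minimal sketch (keys taken from the output above; the snippet below is a trimmed-down stand-in for the full config) extracting the server settings:

```python
import json

config_text = """
{
  "sslvpn": {
    "enable": true,
    "serverSettings": {
      "certificateId": null,
      "cipherList": ["RC4-MD5"],
      "ips": ["10.192.123.153"],
      "port": 443,
      "ccu": 100
    }
  }
}
"""

# Parse the saved configuration and pull out the listener details.
settings = json.loads(config_text)["sslvpn"]["serverSettings"]
listen = "{}:{}".format(settings["ips"][0], settings["port"])
```

That makes it trivial to track settings like the listener address, cipher list or concurrent-user limit in version control.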

First, check whether the SSL VPN-Plus service is running:

vShield-edge-6-0> show service sslvpn-plus
-----------------------------------------------------------------------
vShield Edge SSL VPN-Plus Status:
SSL VPN-PLUS is running.

It is possible for the service to be configured through the GUI yet not be running, due to a malfunction in the service on the NSX Edge; you’ll get a “connection refused” message if that happens. It also returns “connection refused” when SSL VPN-Plus has not been configured through the GUI at all.

If the service is configured and is accepting users, you can view what open user sessions exist:

vShield-edge-6-0> show service sslvpn-plus sessions
3                    martijn                   0 Hr. 16 Min. 24 Sec

And for the last example, possibly the most important one, we’re going to look at the current active SSL VPN tunnels on the NSX Edge:

vShield-edge-6-0> show service sslvpn-plus tunnels
Tunnel  User     Authenticated  Tunnel Type  Os-Type  W-bytes  R-bytes  Uptime(s)  Idle-time  Virtual-ip   : Client-ip(Port)        Ref-count
191     martijn  YES            PHAT         Win      0        0        1138       206        172.16.34.10 : 10.192.120.150(16090)  0000000001

As you can see, each tunnel is attached to a user, has uptime and bandwidth counters, and shows the virtual IP address assigned to the user from the IP pool you configured.

 

Manage and report on Load Balancers using NSX Edge CLI commands

Similar to the SSL VPN-Plus feature of the NSX ESG, the load balancing configuration has to be done from the GUI (or API), but information can be requested from the command line. Also just like SSL VPN-Plus, you can request the running configuration:

vShield-edge-2-0> show configuration loadbalancer
-----------------------------------------------------------------------
vShield Edge Loadbalancer Config:
{   
  "monitorService" : {   
    "logging" : {   
      "enable" : false,   
      "logLevel" : "info"   
    },   
    "enable" : true,   
    "healthMonitors" : [
...snip...

All configured settings are in the JSON output presented to you; handy for easy safe-keeping. Now for some operational command examples. Let’s start with probably the most important one, getting the status of a virtual server:

vShield-edge-2-0> show service loadbalancer virtual myVirtualServer
-----------------------------------------------------------------------
Loadbalancer VirtualServer Statistics:

VIRTUAL myVirtualServer
|  ADDRESS [10.192.123.154]:80
|  SESSION (cur, max, total) = (0, 0, 0)
|  RATE (cur, max, limit) = (0, 0, 0)
|  BYTES in = (0), out = (0)
   +->POOL Webfire-Pool
   |  LB METHOD ip-hash
   |  LB PROTOCOL L7
   |  Transparent disabled
   |  SESSION (cur, max, total) = (0, 0, 0)
   |  BYTES in = (0), out = (0)
      +->POOL MEMBER: Webfire-Pool/web1, STATUS: UP
      |  |  STATUS = UP, MONITOR STATUS = OK
      |  |  SESSION (cur, max, total) = (11, 11, 253)
      |  |  BYTES in = (2542), out = (2344)
      +->POOL MEMBER: Webfire-Pool/web2, STATUS: DOWN
      |  |  STATUS = UP, MONITOR STATUS = CRITICAL
      |  |  SESSION (cur, max, total) = (0, 0, 0)
      |  |  BYTES in = (0), out = (0)

The output is fairly technical, but it’s pretty readable. There is a tree for every virtual server, listing the IP address and other settings first. Then it goes into the server pool attached to the virtual server, displays the global settings of that pool and then drills into the real servers configured in that pool. All objects in the output have a status field, which shows you whether the service is online. As you can see from this example, my web1 server is working correctly and receiving incoming connections, while the web2 server is down and not receiving any connections.
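The ip-hash method shown in the output means the client’s source IP is hashed to select a pool member, so the same client keeps landing on the same server. A rough sketch of the idea (my own illustration, not the exact hash the Edge load balancer uses):

```python
import hashlib

# Pool members from the output above. In practice, members marked DOWN
# (like web2 above) would be skipped before hashing.
MEMBERS = ["web1", "web2"]

def pick_member(client_ip, members):
    # Hash the source IP and use the first digest byte to index the pool.
    digest = hashlib.md5(client_ip.encode()).digest()
    return members[digest[0] % len(members)]
```

The same source IP always maps to the same member, which gives session affinity without needing a sticky table.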

You can also omit the virtual server name in the command to get an overview of all operational virtual servers.

If you’re just looking for the status of a specific server pool, you can use this command:

vShield-edge-2-0> show service loadbalancer pool [pool_name]

The output is the same as the virtual server output, minus the virtual server details.

If you’re looking for connection information, there are two commands you can turn to: one to show the active sessions on the NSX Edge and one to show the sticky mapping table. The sticky table contains the mapping between source IP address and real server, used when you have configured persistence (“stickiness”) on a virtual server so that a visitor does not switch between real servers.

vShield-edge-2-0> show service loadbalancer session
vShield-edge-2-0> show service loadbalancer table

 

That’s it for me on the command line examples. As I mentioned before, this is surely not all and you should definitely go through the NSX Command Line Reference and go explore the CLI.

 


