Reboot Meraki APs

Several times over the last couple of months I have found myself needing to reboot all of the APs within a Meraki network, sometimes due to changes and sometimes because they stopped responding for some reason. There really isn't a clean way to do this aside from rebooting them one at a time in the console, so I thought, hey, I can make this better and do it via the API. I built a script that takes an Org ID, pulls back all of the networks in that Org, and lets you choose one in which to reboot all of the APs. It will also ask whether it should go as fast as possible or whether you would like a delay between reboots so they don't all go down at the same time. I've tested it a couple of times and everything works as it's supposed to. As always, I look forward to any comments or updates that I can put into the code to make it better.

https://github.com/undrwatr/MERAKI_AP_REBOOT

As usual, my code isn't fancy or special, just serviceable and able to get done what I need while saving me some time and headaches.
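The script in the repo is the authoritative version; as a rough sketch of the flow described above (the function names here are my illustration, a placeholder API key stands in for my cred module, and the v0 endpoints match the style of my other scripts), it looks something like this:

```python
import time
import requests

MERAKI_DASHBOARD = 'https://api.meraki.com'
HEADERS = {'X-Cisco-Meraki-API-Key': 'REPLACE_WITH_KEY',
           'Content-Type': 'application/json'}

def get_networks(org_id):
    """Pull back all of the networks in the Org."""
    url = MERAKI_DASHBOARD + '/api/v0/organizations/%s/networks' % org_id
    return requests.get(url, headers=HEADERS).json()

def filter_aps(devices):
    """Keep only the Meraki APs -- their model numbers start with MR."""
    return [d for d in devices if d.get('model', '').startswith('MR')]

def get_aps(network_id):
    """Return only the wireless devices in a network."""
    url = MERAKI_DASHBOARD + '/api/v0/networks/%s/devices' % network_id
    return filter_aps(requests.get(url, headers=HEADERS).json())

def reboot_aps(network_id, delay=0):
    """Reboot every AP in the network, optionally sleeping between each one
    so they don't all go down at the same time."""
    for ap in get_aps(network_id):
        url = MERAKI_DASHBOARD + '/api/v0/networks/%s/devices/%s/reboot' % (
            network_id, ap['serial'])
        requests.post(url, headers=HEADERS)
        print('Rebooted ' + ap['serial'])
        if delay:
            time.sleep(delay)
```

Wiring this up to input() prompts for the Org ID, the chosen network, and the delay, as the repo script does, is straightforward from here.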

Retrieve SSID info from Meraki Wireless Network

I support multiple Meraki wireless networks and wanted the ability to pull pertinent data from them, such as channels, power settings, SSIDs, and BSSIDs, so that I can upload it into wireless mapping and design software. Here is the link to my GitHub site, where I will keep the most up-to-date version of the program as I fix things and work to improve it.

#!/usr/bin/env python

# Import necessary modules
import cred
import requests

# Meraki site information
MERAKI_DASHBOARD = 'https://api.meraki.com'
HEADERS = {'X-Cisco-Meraki-API-Key': cred.key, 'Content-Type': 'application/json'}
#NETWORK = input(str("What network are we looking at? "))
NETWORK = cred.network
NETWORK_URL = MERAKI_DASHBOARD + '/api/v0/networks/%s/devices' % NETWORK
NETWORK_GET = requests.get(NETWORK_URL, headers=HEADERS)
NETWORK_RESPONSE = NETWORK_GET.json()

# Pull in the wireless status for a single device and print its enabled SSIDs
def WIRELESS_SETTINGS(device):
    WIRELESS_SETTINGS_URL = MERAKI_DASHBOARD + '/api/v0/networks/%s/devices/%s/wireless/status' % (NETWORK, device['serial'])
    WIRELESS_SETTINGS_GET = requests.get(WIRELESS_SETTINGS_URL, headers=HEADERS)
    WIRELESS_SETTINGS_RESPONSE = WIRELESS_SETTINGS_GET.json()
    for SSIDS in WIRELESS_SETTINGS_RESPONSE['basicServiceSets']:
        if SSIDS['enabled']:
            print("SSID " + SSIDS['ssidName'] + " BAND " + SSIDS['band'] +
                  " BSSID " + str(SSIDS['bssid']) + " Channel " + str(SSIDS['channel']) +
                  " Power " + str(SSIDS['power']))

# Loop through the devices in the network to find all of the information.
for DEVICE in NETWORK_RESPONSE:
    if DEVICE['model'] == "MR42":
        print("AP " + DEVICE['name'])
        WIRELESS_SETTINGS(DEVICE)

Passing the CWAP

Last weekend I passed the CWAP exam on my second attempt. The first time through I was thrown by some of the questions and didn't have as good a grasp on some of the random things that were asked. So I spent the two weeks between attempts watching all of the videos again and going back through all of my notes and the flash cards I had made. I also spent a good amount of time looking at packet traces, figuring out where all of the information lives and how Transmit Beamforming works with NDP frames. Overall I felt it was a good exam, even though it took me two tries to pass it. I definitely feel a lot better now about the information and that I was able to absorb it, not just for the exam. So now I get to use the cool CWAP logo on things. My next exam will be the CWSP as I work my way through the CWNP program.

Here are some of the resources that I used while studying for the CWAP:
CWNP – Video training
CWNP – Practice Tests
WIFItraining – CWAP Workbook
CWNP – Official CWAP Study Guide

Lessons Learned from our Nutanix Install

We received our 3 node Nutanix cluster last week. I did contract for professional services to help with the install, but given that professional services getting scheduled is still a week out I decided it was better if I started the process myself. I figured how difficult could it be to build out a Nutanix Cluster using AHV for the hypervisor. Really how difficult can it be to learn a new platform and everything that goes along with it. This is the list of things that I learned while doing the install myself and working through the problems that I encountered.

  • It helps to validate the port configuration on the back of the unit. The only support case I needed was to find out which ports were data, which were data/IPMI, and which were IPMI only; that wasn't information I could easily find on my own. I opened a ticket with support and they gave me exactly what I needed.

  • Make sure all of the ports are correctly configured on the network. Don't just follow the QuickStart guide: when it says you only need a data port on a dumb switch, that isn't necessarily correct, or at least it wasn't in my case. I lost a few hours on the networking side just figuring out that I needed the final trunked network connection to get everything talking correctly.
  • Put the IPMI interfaces on their own completely private network. In my case I hooked them up to our Opengear serial console with a 24-port switch on a separate admin network.
  • Keep the hypervisor and CVM on the same network; it will make life easier.
  • Run the health check a few times and correct everything it flags, just so things aren't annoying later. I had some alerts that were only informational and wanted them all cleaned up anyway.
  • Realize that it takes time for health alerts to cycle out of the console, so sometimes just waiting a day makes all the difference when trying to clear them up. If the health check comes back fine on the command line or in the console, the alerts may just need time to age out.
  • Create a new local admin account to resolve the annoying API alert that keeps popping up, something the QuickStart install never has you address.
  • I'll keep updating this list as I find new things, as a reminder to myself for the next install, which we will most likely be doing in our Cold or Distribution Center.

    One Source of the truth for IP Addressing

    I've been working on my Python programming skills, using scripts to configure all of my Meraki equipment through their API and cloud platform. It's been a lot of work building the scripts and working through some of the limitations inherent in Meraki's cloud and in the way we manage our environment. I have over 1500 stores, each using two /28 ranges, and I had been trying to find a formal IP management solution that would support that. Unfortunately, after looking at SolarWinds, BlueCat, and other IPAMs, I didn't find what I wanted. I ended up going with an MS SQL database that I can query via pyodbc to get the data based on the store number. This also lets me store other specialized information, so I can build out whatever I need and then call it from within my Python programs.

    Obviously I would never be confused with a web designer of any sort. What the solution is, though, is quick and easy to use: the ASP page pulls up fast, and entries can be added and removed within a few minutes. The amount of time I was going to have to spend on a formal solution was just more than I had. The downside, of course, is that if I leave, someone else has to learn what I did and take over supporting it. That will be a problem for a different day, unless I find a great solution that lets me treat each store separately rather than as part of one huge network.

    Moving from Pymssql to Pyodbc

    I was in the process of updating one of my Linux servers with pymssql when I kept getting an error message during the install:

    ERROR: Could not find a version that satisfies the requirement pymssql (from versions: none)
    ERROR: No matching distribution found for pymssql

    It turns out pymssql had been deprecated and was no longer being supported. I use pymssql to connect to my SQL database to pull IP addresses and other network-specific information when building my stores in the Meraki portal.

    For the most part the conversion was fairly easy; the biggest issue I had was getting the MS SQL drivers installed for pyodbc. Microsoft's site on installing the ODBC drivers on Mac and Linux was helpful there. Once I had the drivers installed, I had some issues making HTTPS calls from my Python programs and had to reinstall Python via pyenv. Once the version was reinstalled, the issues with the API calls and the requests module were resolved. Here is an example of the code I was running and the code that I am now running.

    PYMSSQL CODE:

    import cred
    import pymssql as mdb

    sql_host = cred.sql_host
    sql_username = cred.sql_username
    sql_password = cred.sql_password
    sql_database = cred.sql_database
    store = str(input("What store are we creating?: "))
    sql_connection = mdb.connect(sql_host, sql_username, sql_password, sql_database)
    cursor = sql_connection.cursor()
    cursor.execute("select [VID2GW] from tblDSlip where [Store #] = (%s)", (store,))
    VLAN2GW = str(cursor.fetchone()[0])

    PYODBC CODE:

    import cred
    import pyodbc

    store = str(input("What store are we creating?: "))

    # Pad two- and three-digit store numbers with leading zeros for the lookup
    if len(store) == 2:
        sql_store = '00' + store
    elif len(store) == 3:
        sql_store = '0' + store
    else:
        sql_store = store

    server = cred.sql_host
    database = cred.sql_database
    username = cred.sql_username
    password = cred.sql_password

    cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=' + server + ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password)
    cursor = cnxn.cursor()
    cursor.execute("select [VID2GW] from tblDSlip where [Store #] = ?", sql_store)
    VLAN2GW = str(cursor.fetchone()[0])
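As an aside, the store-number padding in the pyodbc version can also be written with Python's built-in str.zfill, which left-pads a string with zeros to a given width. A small sketch (note that, unlike the if/elif, this also pads a single-digit store number):

```python
def pad_store(store):
    # str.zfill left-pads with zeros up to the given width, matching
    # the '00' / '0' padding used for two- and three-digit stores.
    return store.zfill(4)

print(pad_store('42'))    # 0042
print(pad_store('123'))   # 0123
print(pad_store('1234'))  # 1234
```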


    Planes of Fame January Living History Day – Focke-Wulf Fw 190

The local air museum holds a Living History Day each month, where they bring out the planes along with people who might have flown or worked on them. It's a great benefit living so close to the air museum that I can just drive over, and in some cases shoot from my backyard. As I am trying to improve my airplane photography, I took this as an opportunity to work on shooting at a slow shutter speed and on my panning. Needless to say, my panning needs some work before I can get sharp photographs.