Docker container running Ubuntu on Windows

Containers are all the rage right now, and rightfully so – not only do they help abstract away some of the complexity and dependencies of your apps and solutions, they also make managing environments and deployments much simpler. And the fact that you can do it in a consistent, repeatable fashion is just icing on the cake.

As a simple example, with Docker on Windows (as in my case), I can run a dockerized app on a different OS than the host – and it can even be interactive.

The command below will pull down the Ubuntu image, spawn a container, and then run an interactive terminal, tying the terminal to standard input. Of course, this requires that you already have Docker installed (the Community Edition is just fine to play around with).

docker run --interactive --tty ubuntu bash
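
If typing the long-form options gets old, the short flags are equivalent (-i for --interactive, -t for --tty):

docker run -it ubuntu bash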

Now, with Docker on Windows, if you do get the following error: “Error response from daemon: operating system on which parent image was created is not Windows.” (as also shown below), the way to fix it is to switch on experimental features.

Docker error when trying to run Ubuntu on Windows

To fix this, right-click on the Docker icon in the system tray, choose Settings, and on the settings screen, in the Daemon tab, enable experimental features as shown below.

After enabling the experimental features, the Docker daemon will restart. Once it is back up, if you run the docker command again, it works as expected:

  • It pulls down the image (which the container runs from)
  • Runs Ubuntu in an interactive session (because of the options I chose)
  • And all within my PowerShell console on Windows.

This is just the beginning – there is of course a lot more to it. 🙂

Ubuntu on Surface Book

I am writing this on a Microsoft Surface Book running Ubuntu natively – there isn’t any Windows option; I blew away the Windows partition, and there isn’t any other OS on it.

Why, some of you might ask? Well, why not. 🙂 For me the motive is twofold: one, I am a geek and love to hack and see what works and what doesn’t – how else will one learn? And two, to explore and see which AI frameworks, tools, and runtimes work better natively on Linux.

Well, I must say, this experiment has been a pleasant surprise and much more successful than I originally thought. Most things are working quite well on the Surface with Ubuntu – including touch and pen (both behave like mouse clicks). As the screenshot below shows, Ubuntu is running quite nicely, including most of the features. There are a few things that don’t quite work – I have them listed later in the post.

Ubuntu desktop

So much so that Visual Studio Code is running natively, and whilst I haven’t had a chance to use it much (yet), the fact that it runs at all was something I wasn’t expecting without resorting to containers or VMs or the like.

Visual Studio Code running on Ubuntu

So, how does one go about doing this? It is quite simple these days, to be honest. Below are the steps I followed. I do think the real magic is the hard work that jakeday has done to get the kernel and firmware supported.

Disclaimer: The experience outlined here is on a Surface Book – Ubuntu can also run on other Surface devices, but the exact nature of what works or doesn’t would be a little different.

  1. Hardware – Have a USB keyboard and mouse handy just in case; and if you are on a Surface Pro or something else with only one USB port, a USB hub too. And you of course need a USB drive to boot Ubuntu off.
  2. Disable Secure Boot – without this, getting the bootloader sequence going would be challenging. If you aren’t sure how, check out the instructions here to disable Secure Boot.
  3. Delete / shrink the Windows partition – If you don’t care about Windows, and have a copy of the license somewhere to get back to it, you might want to just delete it. If you do want to shrink it (say this is your primary machine and you want to get back at some point), then go to Disk Management in Windows and resize the partition – keep at least 50 GB for Windows.
  4. Ubuntu USB drive – if you don’t have one already, create a bootable Ubuntu USB drive. You can get more instructions here. And if you are on Windows, I would recommend using Rufus.
  5. Install Ubuntu – Boot off the USB drive you created (having made sure Secure Boot is disabled first). I would pick most of the default options for Ubuntu for now.
  6. Patched kernel – Once you have Ubuntu running, I would recommend installing the patched kernel and headers that allow for Surface support. The steps are outlined below and need to be executed in a terminal.
    • Install Dependencies: sudo apt install git curl wget sed
    • Clone the repo: git clone https://github.com/jakeday/linux-surface.git ~/linux-surface
    • Change working directory: cd ~/linux-surface
    • Run setup: sudo sh setup.sh
    • Reboot into the patched kernel
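
Once you have rebooted, a quick way to confirm which kernel you are actually running is below – the patched kernel shows a different version string than the stock Ubuntu one (on the jakeday builds it carries its own suffix; the exact string depends on the release):

uname -r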

Change boot kernel: Finally, after you have rebooted, the odds of Ubuntu booting off the ‘right’ kernel are quite slim, and it is best to pick it manually. You can of course use GRUB directly, or – what I find better – install Grub Customizer and then choose the correct option as shown below. Once you have picked it and hit save, you also need to run the following in a terminal to make the change persist: sudo update-grub

Grub Customizer
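
Note that Grub Customizer isn’t in the default Ubuntu repositories; the commonly used PPA for it is below (a sketch – do verify the PPA is the one you want before adding it):

sudo add-apt-repository ppa:danielrichter2007/grub-customizer
sudo apt update
sudo apt install grub-customizer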

And that is all there is to it for getting the base install and customization running.

If you are super curious about what that setup script does, the code is below (also listed on GitHub). It is interesting to see the various hardware models supported.

LX_BASE=""
LX_VERSION=""

if [ -r /etc/os-release ]; then
    . /etc/os-release
	if [ $ID = arch ]; then
		LX_BASE=$ID
    elif [ $ID = ubuntu ]; then
		LX_BASE=$ID
		LX_VERSION=$VERSION_ID
	elif [ ! -z "$UBUNTU_CODENAME" ] ; then
		LX_BASE="ubuntu"
		LX_VERSION=$VERSION_ID
    else
		LX_BASE=$ID
		LX_VERSION=$VERSION
    fi
else
    echo "Could not identify your distro. Please open script and run commands manually."
	exit
fi

SUR_MODEL="$(dmidecode | grep "Product Name" -m 1 | xargs | sed -e 's/Product Name: //g')"
SUR_SKU="$(dmidecode | grep "SKU Number" -m 1 | xargs | sed -e 's/SKU Number: //g')"

echo "\nRunning $LX_BASE version $LX_VERSION on a $SUR_MODEL.\n"

read -rp "Press enter if this is correct, or CTRL-C to cancel." cont;echo

echo "\nContinuing setup...\n"

echo "Coping the config files under root to where they belong...\n"
cp -Rb root/* /

echo "Making /lib/systemd/system-sleep/sleep executable...\n"
chmod a+x /lib/systemd/system-sleep/sleep

read -rp "Do you want to replace suspend with hibernate? (type yes or no) " usehibernate;echo

if [ "$usehibernate" = "yes" ]; then
	if [ "$LX_BASE" = "ubuntu" ] && [ 1 -eq "$(echo "${LX_VERSION} >= 17.10" | bc)" ]; then
		echo "Using Hibernate instead of Suspend...\n"
		ln -sfb /lib/systemd/system/hibernate.target /etc/systemd/system/suspend.target && sudo ln -sfb /lib/systemd/system/systemd-hibernate.service /etc/systemd/system/systemd-suspend.service
	else
		echo "Using Hibernate instead of Suspend...\n"
		ln -sfb /usr/lib/systemd/system/hibernate.target /etc/systemd/system/suspend.target && sudo ln -sfb /usr/lib/systemd/system/systemd-hibernate.service /etc/systemd/system/systemd-suspend.service
	fi
else
	echo "Not touching Suspend\n"
fi

read -rp "Do you want use the patched libwacom packages? (type yes or no) " uselibwacom;echo

if [ "$uselibwacom" = "yes" ]; then
	echo "Installing patched libwacom packages..."
		dpkg -i packages/libwacom/*.deb
		apt-mark hold libwacom
else
	echo "Not touching libwacom"
fi

if [ "$SUR_MODEL" = "Surface Pro 3" ]; then
	echo "\nInstalling i915 firmware for Surface Pro 3...\n"
	mkdir -p /lib/firmware/i915
	unzip -o firmware/i915_firmware_bxt.zip -d /lib/firmware/i915/
fi

if [ "$SUR_MODEL" = "Surface Pro" ]; then
	echo "\nInstalling IPTS firmware for Surface Pro 2017...\n"
	mkdir -p /lib/firmware/intel/ipts
	unzip -o firmware/ipts_firmware_v102.zip -d /lib/firmware/intel/ipts/

	echo "\nInstalling i915 firmware for Surface Pro 2017...\n"
	mkdir -p /lib/firmware/i915
	unzip -o firmware/i915_firmware_kbl.zip -d /lib/firmware/i915/
fi

if [ "$SUR_MODEL" = "Surface Pro 4" ]; then
	echo "\nInstalling IPTS firmware for Surface Pro 4...\n"
	mkdir -p /lib/firmware/intel/ipts
	unzip -o firmware/ipts_firmware_v78.zip -d /lib/firmware/intel/ipts/

	echo "\nInstalling i915 firmware for Surface Pro 4...\n"
	mkdir -p /lib/firmware/i915
	unzip -o firmware/i915_firmware_skl.zip -d /lib/firmware/i915/
fi

if [ "$SUR_MODEL" = "Surface Pro 2017" ]; then
	echo "\nInstalling IPTS firmware for Surface Pro 2017...\n"
	mkdir -p /lib/firmware/intel/ipts
	unzip -o firmware/ipts_firmware_v102.zip -d /lib/firmware/intel/ipts/

	echo "\nInstalling i915 firmware for Surface Pro 2017...\n"
	mkdir -p /lib/firmware/i915
	unzip -o firmware/i915_firmware_kbl.zip -d /lib/firmware/i915/
fi

if [ "$SUR_MODEL" = "Surface Pro 6" ]; then
	echo "\nInstalling IPTS firmware for Surface Pro 6...\n"
	mkdir -p /lib/firmware/intel/ipts
	unzip -o firmware/ipts_firmware_v102.zip -d /lib/firmware/intel/ipts/

	echo "\nInstalling i915 firmware for Surface Pro 6...\n"
	mkdir -p /lib/firmware/i915
	unzip -o firmware/i915_firmware_kbl.zip -d /lib/firmware/i915/
fi

if [ "$SUR_MODEL" = "Surface Laptop" ]; then
	echo "\nInstalling IPTS firmware for Surface Laptop...\n"
	mkdir -p /lib/firmware/intel/ipts
	unzip -o firmware/ipts_firmware_v79.zip -d /lib/firmware/intel/ipts/

	echo "\nInstalling i915 firmware for Surface Laptop...\n"
	mkdir -p /lib/firmware/i915
	unzip -o firmware/i915_firmware_skl.zip -d /lib/firmware/i915/
fi

if [ "$SUR_MODEL" = "Surface Book" ]; then
	echo "\nInstalling IPTS firmware for Surface Book...\n"
	mkdir -p /lib/firmware/intel/ipts
	unzip -o firmware/ipts_firmware_v76.zip -d /lib/firmware/intel/ipts/

	echo "\nInstalling i915 firmware for Surface Book...\n"
	mkdir -p /lib/firmware/i915
	unzip -o firmware/i915_firmware_skl.zip -d /lib/firmware/i915/
fi

if [ "$SUR_MODEL" = "Surface Book 2" ]; then
	echo "\nInstalling IPTS firmware for Surface Book 2...\n"
	mkdir -p /lib/firmware/intel/ipts
	if [ "$SUR_SKU" = "Surface_Book_1793" ]; then
		unzip -o firmware/ipts_firmware_v101.zip -d /lib/firmware/intel/ipts/
	else
		unzip -o firmware/ipts_firmware_v137.zip -d /lib/firmware/intel/ipts/
	fi

	echo "\nInstalling i915 firmware for Surface Book 2...\n"
	mkdir -p /lib/firmware/i915
	unzip -o firmware/i915_firmware_kbl.zip -d /lib/firmware/i915/

	echo "\nInstalling nvidia firmware for Surface Book 2...\n"
	mkdir -p /lib/firmware/nvidia/gp108
	unzip -o firmware/nvidia_firmware_gp108.zip -d /lib/firmware/nvidia/gp108/
fi

if [ "$SUR_MODEL" = "Surface Go" ]; then
	echo "\nInstalling ath10k firmware for Surface Go...\n"
	mkdir -p /lib/firmware/ath10k
	unzip -o firmware/ath10k_firmware.zip -d /lib/firmware/ath10k/
fi

echo "Installing marvell firmware...\n"
mkdir -p /lib/firmware/mrvl/
unzip -o firmware/mrvl_firmware.zip -d /lib/firmware/mrvl/

read -rp "Do you want to set your clock to local time instead of UTC? This fixes issues when dual booting with Windows. (type yes or no) " uselocaltime;echo

if [ "$uselocaltime" = "yes" ]; then
	echo "Setting clock to local time...\n"

	timedatectl set-local-rtc 1
	hwclock --systohc --localtime
else
	echo "Not setting clock"
fi

read -rp "Do you want this script to download and install the latest kernel for you? (type yes or no) " autoinstallkernel;echo

if [ "$autoinstallkernel" = "yes" ]; then
	echo "Downloading latest kernel...\n"

	urls=$(curl --silent "https://api.github.com/repos/jakeday/linux-surface/releases/latest" | grep '"browser_download_url":' | sed -E 's/.*"([^"]+)".*/\1/')

	resp=$(wget -P tmp $urls)

	echo "Installing latest kernel...\n"

	dpkg -i tmp/*.deb
	rm -rf tmp
else
	echo "Not downloading latest kernel"
fi

echo "\nAll done! Please reboot."

Lastly, below are the things not working for me – none of these are deal breakers but something to be aware of.

  • Cameras are not supported – neither of the two.
  • Dedicated GPU (if you have one). This was a bit of a bummer, as I got the dedicated GPU for some of the #MachineLearning experimentation; but then this whole thing is a different type of experimentation, so I am OK.
  • I can control the volume using the speaker widget on the top right corner, but the volume buttons on top don’t work.
  • Sleep / hibernation – sleep has some issues, so for now I have it disabled and have hibernation set up.
  • Detaching the screen will immediately terminate everything and power off the machine (not a clean poweroff) – I am guessing it cannot transition between the two batteries of the base and the screen. However, if the screen is already detached, everything works without any issues.

Happy hacking!

Roots of #AI

The naming is unfortunate when talking about #AI. There isn’t anything about intelligence in it – not as we humans know it. If we could rewind back to the 50s, we could perhaps rename it to something like Computational Intelligence, which is more accurate. And although I have outlined the differences between some of the elements of AI in the past, I wanted to get back to what the original intent was and how this area started.

Can machines think? Some say the origins of #AI go back to Turing, starting with his paper “Computing machinery and intelligence” (PDF), published in 1950. Whilst Turing might have planted the seed, it was a program called Logic Theorist, created by Allen Newell, Cliff Shaw, and Herbert Simon, that was the first #ArtificialIntelligence program. Of course, it wasn’t called #AI then.

That started back in 1956, when the Logic Theorist was presented at a conference at Dartmouth College called the “Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI)” (PDF). The term “#AI” was coined at that conference.

Since then, AI has had a roller coaster of a ride over the decades – from colder than hell (I presume) winters, to hotter than lava with it being everywhere. As someone said, time will heal all wounds.

#AI Timeline

Today, many of us use #AI, #DeepLearning, and #MachineLearning interchangeably. Over the course of the last couple of years, I have learned to let that go, but fundamentally the distinction is important.

AI, we would say, is more computational intelligence – allowing computers to do tasks that would be difficult for humans to do, certainly at scale. These tasks are accomplished using different mechanisms and techniques, via “intelligent agents”.

Machine learning is a subset of AI, where the program or algorithm can learn from previous outputs and improve based on that data – hence the “learning” part. It is akin to learning from experience, but isn’t the same thing as how we humans comprehend and understand. Some of us think the program is rewriting itself, which technically isn’t an accurate description.

Deep learning is a set of machine learning techniques and algorithms that are inspired by how the neurons in our brain connect and work. These techniques are also called neural networks, and essentially are nothing but a type of machine learning.

For any of this AI “magic” to work, the one thing it needs to feed on is data. Without data, none of this would be possible. This data is classified into two categories – features and labels.

  • Features – these are aspects of whatever we are interested in. For example, if we are interested in vehicles, features could be the colour, make, and model of the vehicle.
  • Labels – these are the buckets or categories into which we put the things we are interested in. Using the same vehicle example, we could have labels such as SUV, Sedan, Sports Car, Truck, etc. that categorize the vehicles.

One key principle to remember when it comes to #AI – all the outcomes described are in terms of probabilities, not absolutes. All it suggests is the likelihood of something happening; most things cannot be predicted with total certainty. This is a fundamental aspect to remember when making decisions.

There isn’t a universal definition of AI, which sometimes doesn’t help. Everyone has their own perception. I have learned to meet people on their terms and make sure we are talking the same lingo and meaning – it doesn’t help to get academic about it. 🙂

For example, the definitions of AI from three leading analysts (Gartner, IDC, and Forrester), outlined below, are a good indicator of how this can get confusing.

  • Gartner – At its core, AI is about solving business problems in novel ways. It stretches across any organization from innovation, R&D and IT to data science.
  • IDC defines cognitive/Artificial Intelligence (AI) systems as a set of technologies that use deep natural language processing and understanding to answer questions and provide recommendations and direction. IDC’s coverage of cognitive/AI systems examines:
    • Digital assistants
    • Automated advisors
    • Artificial intelligence, deep learning and machine learning
    • Automated recommendation systems
  • Forrester defines AI as a liberatory technology at its core, and businesses that integrate it will free workers to become more innovative, creative, and adaptive than ever before. But these technologies are still in early stages.

And the field is just exploding now – not just with new research around #DeepLearning or #MachineLearning, but also net new aspects from a business perspective; things like:

  • Digital Ethics
  • Conversational AI
  • Democratization of AI
  • Data Engineering (OK, not new, but certainly key)
  • Model Management
  • RPA (or #IntelligentAutomation)
  • AI Strategy

It is a new and exciting world that spans multiple spectrums. Don’t try and drink from the fire-hose; take it in slowly, appreciate the nuances and where each brings value, and discuss in terms of outcomes.

Computer – a male or female?

So, both these arguments make sense. I can’t decide which one is accurate.

Patent – Systems and methods for organizing and presenting skill progressions

This has been a long time coming – our patent, filed about 4 years ago, was finally awarded today by the USPTO. Some details below.

United States Patent 10,102,774
Bahree , et al. October 16, 2018
Systems and methods for organizing and presenting skill progression

In any organization, the skills collectively possessed by individuals of the organization can determine the capabilities of the organization as a whole. Previously, there was no centralized method or system for managing skills which are complex and wide-ranging. There was also no effective way for individuals to review skills they possess and to discover other skills which they can cross-train and leverage—either to enhance their existing roles and responsibilities, or possibly change skills and get involved with another area and thereby grow their career. The limited visualizations of skill sets offered to the individuals were static and non-interactive, which is not ideal.

When organizations grow and begin hiring new technical employees, this tremendous influx of new resources and talent makes the overall skill set of the organization increasingly difficult to comprehend. The challenge gets increasingly difficult over time. Further, to allow such companies to both retain and attract talent such companies want to ensure that they can provide a clear path for employees to manage their careers and talent growth effectively. Such companies are also challenged to be able to efficiently allocate technical resources, and to visualize technical areas in which their current employees are strong, and areas in which their current employees need further training (or new employees need to be recruited) to help the company compete in the marketplace.

This patent represents a subset of our work on cohesive systems, methods, and devices for presenting and managing interrelated sets of skills for a person. We used a map interface to represent a set of interrelated skills to a user, which allows the user an opportunity to strategize on how best the related and advanced skills may be acquired to advance on a career path.

The convergence of tech trends – in the past Mobility, Big Data, and Cloud (and today #DataScience, #ModernEngineering, #AI, #ML, and #Cloud) – helps the creation of modern skills management systems. The solution at the heart of the patent helps address this, and we deem it to have wide applicability across industry domains, sectors, and vertical segments.

Between filing the patent and it being awarded today, we have adopted elements of this at Avanade and rolled it out globally to our workforce across 20 countries, allowing them to manage complex skills, advance their careers, and establish a 3D career path.

Update on Tesla .ssq files

Some time back, I noticed the car downloaded a large file (5.1 GB), which was a .ssq file. I hadn’t heard of a .ssq file and was curious about what it was.

I researched a little, and as it turns out, a .ssq file is a compressed file system, often used in embedded Linux systems where storage size is an area of concern. This file system is called SquashFS, and it is usually used in read-only mode.

SquashFS is interesting, as it lets one mount the file system directly, and was originally distributed as a kernel source patch – which makes it easy to daisy-chain and use with other regular Linux tools.

The SquashFS tools are useful to extract and create a SquashFS file system. As shown below, I can extract the downloaded file using unsquashfs.

unsquashfs extracting a SquashFS file-system
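
If you want to poke at one of these yourself, a minimal sketch is below, assuming the squashfs-tools package; the .ssq filename here is hypothetical (yours will differ):

sudo apt install squashfs-tools
unsquashfs -d maps NA-update.ssq                        # extracts the contents into ./maps
sudo mount -t squashfs -o loop,ro NA-update.ssq /mnt    # or mount it read-only instead of extracting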

I think it is known that Tesla uses Valhalla for their maps, and this file is the updated map data. Valhalla is an open source routing engine which uses OpenStreetMap data. Valhalla also incorporates the traditional travelling salesman problem, which is NP-hard.

When extracted and mounted, we see the following directory structure; each of these folders (and the files therein) are in fact the tiles that make up the maps (next time in the car, when you zoom in or out, or search for a non-cached location, watch carefully how it loads and you can just about make out the tiles – it is quick and easy to miss). And it is these tiles that are used for routing as part of the navigation.

Tile-based routing is supposed to be beneficial – it uses less memory (the graph can be decomposed much more easily, with a smaller set of it loaded in memory), is cacheable, easier to manage (updateable), etc. We can see a glimpse of how the routing and calculation happen on a tile basis below.

tiles based routing

When extracted, we see there are three levels of hierarchy (0, 1, and 2). In the file system these show up as directories, but there is a method to the madness.

  • Level 0 – these contain edges pertaining to roads that are considered highway / freeway / motorway roads. These are stored as 4 degree tiles.
  • Level 1 – contains roads that are at an arterial level, and are saved in 1 degree tiles.
  • Level 2 – these are local roads and are saved as 0.25 degree tiles.

For example, the world at Level 0 would look like what we are seeing in the image below. And Pennsylvania can be seen below that; Level 0 colored in light blue, Level 1 in light green, and finally Level 2 in light red (which might not be obvious with the translucency).

World Level 0 tiles
Pennsylvania Level 0, 1, and 2 tiles

So, to use this, one can use a few helper functions to get the exact tile to load, and vice versa. For example, using the GPS coordinate of 41.413203, -73.623787 (which is just outside of Brewster, NY), getting the Level 2 tile (via the get_tile_id function) gives us the path /2/000/756/425.gph, from which we know which tile to load.
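
For the curious, the arithmetic behind that path is straightforward (it follows from the helper code below): Level 2 tiles are 0.25 degrees square, so the grid is 360 / 0.25 = 1440 columns wide, and the tile id is the row-major index:

row = int((41.413203 + 90) / 0.25) = 525
col = int((-73.623787 + 180) / 0.25) = 425
tile_id = 525 * 1440 + 425 = 756425  →  /2/000/756/425.gph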

Below are helper functions (in Python) that help obtain levels, tile ids, tile lists, lat/long coordinates, etc. from an intersecting box.

valhalla_tiles = [{'level': 2, 'size': 0.25}, {'level': 1, 'size': 1.0}, {'level': 0, 'size': 4.0}]
LEVEL_BITS = 3
TILE_INDEX_BITS = 22
ID_INDEX_BITS = 21
LEVEL_MASK = (2**LEVEL_BITS) - 1
TILE_INDEX_MASK = (2**TILE_INDEX_BITS) - 1
ID_INDEX_MASK = (2**ID_INDEX_BITS) - 1
INVALID_ID = (ID_INDEX_MASK << (TILE_INDEX_BITS + LEVEL_BITS)) | (TILE_INDEX_MASK << LEVEL_BITS) | LEVEL_MASK

def get_tile_level(id):
  return id & LEVEL_MASK

def get_tile_index(id):
  return (id >> LEVEL_BITS) & TILE_INDEX_MASK

def get_index(id):
  return (id >> (LEVEL_BITS + TILE_INDEX_BITS)) & ID_INDEX_MASK

def tiles_for_bounding_box(left, bottom, right, top):
  #if this is crossing the anti meridian split it up and combine
  if left > right:
    east = tiles_for_bounding_box(left, bottom, 180.0, top)
    west = tiles_for_bounding_box(-180.0, bottom, right, top)
    return east + west
  #move these so we can compute percentages
  left += 180
  right += 180
  bottom += 90
  top += 90
  tiles = []
  #for each size of tile
  for tile_set in valhalla_tiles:
    #for each column
    for x in range(int(left/tile_set['size']), int(right/tile_set['size']) + 1):
      #for each row
      for y in range(int(bottom/tile_set['size']), int(top/tile_set['size']) + 1):
        #give back the level and the tile index
        tiles.append((tile_set['level'], int(y * (360.0/tile_set['size']) + x)))
  return tiles

def get_tile_id(tile_level, lat, lon):
  # list() around filter() so this also works on Python 3 (filter returns an iterator there)
  level = list(filter(lambda x: x['level'] == tile_level, valhalla_tiles))[0]
  width = int(360 / level['size'])
  return int((lat + 90) / level['size']) * width + int((lon + 180) / level['size'])

def get_ll(id):
  tile_level = get_tile_level(id)
  tile_index = get_tile_index(id)
  # same Python 3 fix as in get_tile_id above
  level = list(filter(lambda x: x['level'] == tile_level, valhalla_tiles))[0]
  width = int(360 / level['size'])
  height = int(180 / level['size'])
  return int(tile_index / width) * level['size'] - 90, (tile_index % width) * level['size'] - 180

Tesla has actually open-sourced their implementation of Valhalla, which is written in C++. It still seems like an active project, though parts of the code haven’t been updated in a while.

Whilst I haven’t tried to set this up myself, it seems quite simple. Below are the instructions to get this going on Ubuntu or Debian (I think Mac is also supported, but needs a slightly different dependency set).

#below are the dependencies needed
sudo add-apt-repository -y ppa:valhalla-core/valhalla
sudo apt-get update
sudo apt-get install -y autoconf automake make libtool pkg-config g++ gcc jq lcov protobuf-compiler vim-common libboost-all-dev libcurl4-openssl-dev libprime-server0.6.3-dev libprotobuf-dev prime-server0.6.3-bin
#if you plan to compile with data building support, see below for more info
sudo apt-get install -y libgeos-dev libgeos++-dev liblua5.2-dev libspatialite-dev libsqlite3-dev lua5.2
if [[ $(grep -cF xenial /etc/lsb-release) > 0 ]]; then sudo apt-get install -y libsqlite3-mod-spatialite; fi
#if you plan to compile with python bindings, see below for more info
sudo apt-get install -y python-all-dev

#install with the following
git submodule update --init --recursive
./autogen.sh
./configure
make test -j$(nproc)
sudo make install

There you have it – we know now what the .ssq files are and how they are used. Just need more time to get it going and play with it – perhaps another project for another time. 🙂

Tesla and Spotify

Something seems to be up, with the car tickling an endpoint – checking for connectivity, perhaps? It’s only 663 bytes up and 222 bytes down. This is still on v8.1 (36.2).

Spotify traffic from Tesla
Spotify traffic from Tesla

Tesla v9 API endpoints

In case you haven’t been following the news, Tesla is in the process of releasing the new firmware beta. I think many folks online are super interested in new autopilot upgrades.

I reverse engineered the associated app, and there are certainly a few new endpoints exposed, as outlined below. I need time now to figure out more details on these and what they entail, and also to see what changed in the existing code and JSON (data structures).

It is interesting to noodle on this and see the associated calls. The list below outlines all the endpoints as of today:

{
  "AUTHENTICATE": {
    "TYPE": "POST",
    "URI": "oauth/token",
    "AUTH": false
  },
  "REVOKE_AUTH_TOKEN": {
    "TYPE": "POST",
    "URI": "oauth/revoke",
    "AUTH": true
  },
  "PRODUCT_LIST": {
    "TYPE": "GET",
    "URI": "api/1/products",
    "AUTH": true
  },
  "VEHICLE_LIST": {
    "TYPE": "GET",
    "URI": "api/1/vehicles",
    "AUTH": true
  },
  "VEHICLE_SUMMARY": {
    "TYPE": "GET",
    "URI": "api/1/vehicles/{vehicle_id}",
    "AUTH": true
  },
  "VEHICLE_DATA": {
    "TYPE": "GET",
    "URI": "api/1/vehicles/{vehicle_id}/data",
    "AUTH": true
  },
  "WAKE_UP": {
    "TYPE": "POST",     
    "URI": "api/1/vehicles/{vehicle_id}/wake_up",
    "AUTH": true
  },
  "UNLOCK": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/door_unlock",
    "AUTH": true
  },
  "LOCK": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/door_lock",
    "AUTH": true
  },
  "HONK_HORN": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/honk_horn",
    "AUTH": true
  },
  "FLASH_LIGHTS": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/flash_lights",
    "AUTH": true
  },
  "CLIMATE_ON": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/auto_conditioning_start",
    "AUTH": true
  },
  "CLIMATE_OFF": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/auto_conditioning_stop",
    "AUTH": true
  },
  "CHANGE_CLIMATE_TEMPERATURE_SETTING": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/set_temps",
    "AUTH": true
  },
  "CHANGE_CHARGE_LIMIT": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/set_charge_limit",
    "AUTH": true
  },
  "CHANGE_SUNROOF_STATE": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/sun_roof_control",
    "AUTH": true
  },
  "ACTUATE_TRUNK": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/actuate_trunk",
    "AUTH": true
  },
  "REMOTE_START": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/remote_start_drive",
    "AUTH": true
  },
  "CHARGE_PORT_DOOR_OPEN": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/charge_port_door_open",
    "AUTH": true
  },
  "CHARGE_PORT_DOOR_CLOSE": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/charge_port_door_close",
    "AUTH": true
  },
  "START_CHARGE": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/charge_start",
    "AUTH": true
  },
  "STOP_CHARGE": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/charge_stop",
    "AUTH": true
  },
  "MEDIA_TOGGLE_PLAYBACK": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/media_toggle_playback",
    "AUTH": true
  },
  "MEDIA_NEXT_TRACK": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/media_next_track",
    "AUTH": true
  },
  "MEDIA_PREVIOUS_TRACK": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/media_prev_track",
    "AUTH": true
  },
  "MEDIA_NEXT_FAVORITE": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/media_next_fav",
    "AUTH": true
  },
  "MEDIA_PREVIOUS_FAVORITE": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/media_prev_fav",
    "AUTH": true
  },
  "MEDIA_VOLUME_UP": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/media_volume_up",
    "AUTH": true
  },
  "MEDIA_VOLUME_DOWN": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/media_volume_down",
    "AUTH": true
  },
  "SEND_LOG": {
    "TYPE": "POST",
    "URI": "api/1/logs",
    "AUTH": true
  },
  "RETRIEVE_NOTIFICATION_PREFERENCES": {
    "TYPE": "GET",
    "URI": "api/1/notification_preferences",
    "AUTH": true
  },
  "SEND_NOTIFICATION_PREFERENCES": {
    "TYPE": "POST",
    "URI": "api/1/notification_preferences",
    "AUTH": true
  },
  "RETRIEVE_NOTIFICATION_SUBSCRIPTION_PREFERENCES": {
    "TYPE": "GET",
    "URI": "api/1/vehicle_subscriptions",
    "AUTH": true
  },
  "SEND_NOTIFICATION_SUBSCRIPTION_PREFERENCES": {
    "TYPE": "POST",
    "URI": "api/1/vehicle_subscriptions",
    "AUTH": true
  },
  "DEACTIVATE_DEVICE_TOKEN": {
    "TYPE": "POST",
    "URI": "api/1/device/{device_token}/deactivate",
    "AUTH": true
  },
  "CALENDAR_SYNC": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/upcoming_calendar_entries",
    "AUTH": true
  },
  "SET_VALET_MODE": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/set_valet_mode",
    "AUTH": true
  },
  "RESET_VALET_PIN": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/reset_valet_pin",
    "AUTH": true
  },
  "SPEED_LIMIT_ACTIVATE": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/speed_limit_activate",
    "AUTH": true
  },
  "SPEED_LIMIT_DEACTIVATE": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/speed_limit_deactivate",
    "AUTH": true
  },
  "SPEED_LIMIT_SET_LIMIT": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/speed_limit_set_limit",
    "AUTH": true
  },
  "SPEED_LIMIT_CLEAR_PIN": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/speed_limit_clear_pin",
    "AUTH": true
  },
  "SCHEDULE_SOFTWARE_UPDATE": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/schedule_software_update",
    "AUTH": true
  },
  "CANCEL_SOFTWARE_UPDATE": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/cancel_software_update",
    "AUTH": true
  },
  "POWERWALL_ORDER_SESSION_DATA": {
    "TYPE": "GET",
    "URI": "api/1/users/powerwall_order_entry_data",
    "AUTH": true
  },
  "POWERWALL_ORDER_PAGE": {
    "TYPE": "GET",
    "URI": "powerwall_order_page",
    "AUTH": true,
    "CONTENT": "HTML"
  },
  "ONBOARDING_EXPERIENCE": {
    "TYPE": "GET",
    "URI": "api/1/users/onboarding_data",
    "AUTH": true
  },
  "ONBOARDING_EXPERIENCE_PAGE": {
    "TYPE": "GET",
    "URI": "onboarding_page",
    "AUTH": true,
    "CONTENT": "HTML"
  },
  "REFERRAL_DATA": {
    "TYPE": "GET",
    "URI": "api/1/users/referral_data",
    "AUTH": true
  },
  "REFERRAL_PAGE": {
    "TYPE": "GET",
    "URI": "referral_page",
    "AUTH": true,
    "CONTENT": "HTML"
  },
  "MESSAGE_CENTER_MESSAGE_LIST": {
    "TYPE": "GET",
    "URI": "api/1/messages",
    "AUTH": true
  },
  "MESSAGE_CENTER_MESSAGE": {
    "TYPE": "GET",
    "URI": "api/1/messages/{message_id}",
    "AUTH": true
  },
  "MESSAGE_CENTER_COUNTS": {
    "TYPE": "GET",
    "URI": "api/1/messages/count",
    "AUTH": true
  },
  "MESSAGE_CENTER_MESSAGE_ACTION_UPDATE": {
    "TYPE": "POST",
    "URI": "api/1/messages/{message_id}/actions",
    "AUTH": true
  },
  "MESSAGE_CENTER_CTA_PAGE": {
    "TYPE": "GET",
    "URI": "messages_cta_page",
    "AUTH": true,
    "CONTENT": "HTML"
  },
  "AUTH_COMMAND_TOKEN": {
    "TYPE": "POST",
    "URI": "api/1/users/command_token",
    "AUTH": true
  },
  "SEND_DEVICE_KEY": {
    "TYPE": "POST",
    "URI": "api/1/users/keys",
    "AUTH": true
  },
  "DIAGNOSTICS_ENTITLEMENTS": {
    "TYPE": "GET",
    "URI": "api/1/diagnostics",
    "AUTH": true
  },
  "SEND_DIAGNOSTICS": {
    "TYPE": "POST",
    "URI": "api/1/diagnostics",
    "AUTH": true
  },
  "BATTERY_SUMMARY": {
    "TYPE": "GET",
    "URI": "api/1/powerwalls/{battery_id}/status",
    "AUTH": true
  },
  "BATTERY_DATA": {
    "TYPE": "GET",
    "URI": "api/1/powerwalls/{battery_id}",
    "AUTH": true
  },
  "BATTERY_POWER_TIMESERIES_DATA": {
    "TYPE": "GET",
    "URI": "api/1/powerwalls/{battery_id}/powerhistory",
    "AUTH": true
  },
  "BATTERY_ENERGY_TIMESERIES_DATA": {
    "TYPE": "GET",
    "URI": "api/1/powerwalls/{battery_id}/energyhistory",
    "AUTH": true
  },
  "BATTERY_BACKUP_RESERVE": {
    "TYPE": "POST",
    "URI": "api/1/powerwalls/{battery_id}/backup",
    "AUTH": true
  },
  "BATTERY_SITE_NAME": {
    "TYPE": "POST",
    "URI": "api/1/powerwalls/{battery_id}/site_name",
    "AUTH": true
  },
  "BATTERY_OPERATION_MODE": {
    "TYPE": "POST",
    "URI": "api/1/powerwalls/{battery_id}/operation",
    "AUTH": true
  },
  "SITE_SUMMARY": {
    "TYPE": "GET",
    "URI": "api/1/energy_sites/{site_id}/status",
    "AUTH": true
  },
  "SITE_DATA": {
    "TYPE": "GET",
    "URI": "api/1/energy_sites/{site_id}/live_status",
    "AUTH": true
  },
  "SITE_CONFIG": {
    "TYPE": "GET",
    "URI": "api/1/energy_sites/{site_id}/site_info",
    "AUTH": true
  },
  "HISTORY_DATA": {
    "TYPE": "GET",
    "URI": "api/1/energy_sites/{site_id}/history",
    "AUTH": true
  },
  "BACKUP_RESERVE": {
    "TYPE": "POST",
    "URI": "api/1/energy_sites/{site_id}/backup",
    "AUTH": true
  },
  "SITE_NAME": {
    "TYPE": "POST",
    "URI": "api/1/energy_sites/{site_id}/site_name",
    "AUTH": true
  },
  "OPERATION_MODE": {
    "TYPE": "POST",
    "URI": "api/1/energy_sites/{site_id}/operation",
    "AUTH": true
  },
  "TIME_OF_USE_SETTINGS": {
    "TYPE": "POST",
    "URI": "api/1/energy_sites/{site_id}/time_of_use_settings",
    "AUTH": true
  },
  "STORM_MODE_SETTINGS": {
    "TYPE": "POST",
    "URI": "api/1/energy_sites/{site_id}/storm_mode",
    "AUTH": true
  },
  "SEND_NOTIFICATION_CONFIRMATION": {
    "TYPE": "POST",
    "URI": "api/1/notification_confirmations",
    "AUTH": true
  },
  "NAVIGATION_REQUEST": {
    "TYPE": "POST",
    "URI": "api/1/vehicles/{vehicle_id}/command/navigation_request",
    "AUTH": true
  }
}
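
To give a sense of how these endpoints hang together, below is a rough sketch of calling AUTHENTICATE and then VEHICLE_LIST with curl. The base URL and the OAuth client id/secret are assumptions on my part (placeholders below); the access_token from the first call goes into the Authorization header of every subsequent call:

# get a bearer token (AUTHENTICATE) - client_id/client_secret are placeholders
curl -s -X POST "https://owner-api.teslamotors.com/oauth/token" \
  -H "Content-Type: application/json" \
  -d '{"grant_type": "password", "client_id": "<client_id>", "client_secret": "<client_secret>", "email": "<email>", "password": "<password>"}'

# list your vehicles (VEHICLE_LIST), using the access_token from the response above
curl -s "https://owner-api.teslamotors.com/api/1/vehicles" \
  -H "Authorization: Bearer <access_token>"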

Atom

Never trust an atom, they make up everything. 🤓

#GeekyJokes

#ML concepts – Regularization, a primer

Regularization is a fundamental concept in Machine Learning (#ML), and is generally applied as part of training a model. It is a key technique that helps with overfitting.

Overfitting is when an algorithm or model ‘fits’ the training data too well – it seems too good to be true. Essentially, overfitting is when a model being trained learns the noise in the data instead of ignoring it. If we allow overfitting, then the network is only using (or is more heavily influenced by) a subset of the input (the larger peaks), and doesn’t factor in all of the input.

The worry is that outside of the training data, it might not work as well for ‘real world’ data. For example, the model represented by the green line in the image below (credit: Wikipedia) follows the sample data too closely and seems too good. The model represented by the black line, on the other hand, generalizes better.

Overfitting example
Overfitting

Regularization helps with overfitting by (artificially) penalizing the weights in the neural network. These large weights show up as peaks, and regularization reduces those peaks. This ensures that the higher weights (peaks) don’t overshadow the rest of the data and hence cause it to overfit. This diffusion of the weight vectors is sometimes also called weight decay.

Although there are a few regularization techniques for preventing overfitting (outlined below), these days in Deep Learning the L1 and L2 regularization techniques are favored over the others.

  • Cross validation: This is a method for finding the best hyperparameters for a model, e.g. in gradient descent, figuring out the stopping criteria. There are various ways to do this, such as the holdout method, k-fold cross validation, leave-one-out cross validation, etc.
  • Step-wise regression: This method essentially is a serial, step-by-step regression, where one removes the weakest variable. Step-wise regression essentially runs multiple regressions a number of times, each time removing the weakest correlated variable. At the end you are left with the variables that explain the distribution best. The only requirements are that the data is normally distributed, and that there is no correlation between the independent variables.

  • L1 regularization: In this method, we modify the cost function by adding the sum of the absolute values of the weights as the penalty. In L1 regularization the weights shrink by a constant amount towards zero. L1 regularization is also called Lasso regression. (A textbook formulation of the L1 and L2 cost functions is sketched after the images below.)

  • L2 regularization: In L2 regularization, on the other hand, we re-scale the weights – each weight shrinks by an amount that is proportional to its value (as outlined in the image below). This shrinking makes the weights smaller and is also sometimes called weight decay. To get the shrinking proportional, we penalize the sum of the squares of the weights, instead of their absolute values. At face value it might seem that the weights eventually get to zero, but that is not true; typically other terms cause the weights to increase. L2 regularization is also called Ridge regression.

  • Max-norm: This enforces an upper bound on the magnitude of the weight vector. One area this helps is that the network cannot ‘explode’ when the learning rate gets very high, as the weights are bounded. This is also called projected gradient descent.

  • Dropout: This is very simple and efficient, and is used in conjunction with one of the previous techniques. Essentially, it assigns each neuron a probability of staying active, or being ‘dropped out’ by setting it to zero. Dropout doesn’t modify the cost function; it modifies the network itself, as shown in the image below.

  • Increase training data: Whilst expanding the training set is theoretically possible, in reality it won’t work in most cases, especially in more complex networks. And whilst one might think to artificially expand the dataset, typically it is not cost effective to get a representative dataset.
L1 Regularization
L2 Regularization
Dropout
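
For reference, a textbook way of writing the two cost functions (with C0 as the original, unregularized cost, n the number of training samples, and λ the regularization parameter):

C = C0 + (λ/n) * Σ|w|        (L1)
C = C0 + (λ/2n) * Σ w²       (L2)

With L2, the gradient descent update becomes w → (1 − ηλ/n)·w − η·∂C0/∂w – that (1 − ηλ/n) factor shrinking w proportionally on every step is exactly the ‘weight decay’ mentioned above; with L1, the shrinkage is instead the constant amount (ηλ/n)·sgn(w).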

Between L1 and L2 regularization, many say that L2 is preferred, but I think it depends on the problem statement. Say, in a network, if a weight has a large magnitude, L2 regularization shrinks that weight more than L1, and will do better. Conversely, if the weight is small, then L1 shrinks it more than L2 – and does better, as L1 tends to concentrate the weights in fewer but more important connections in the network.

In closing, the key aspect to appreciate: the small weights in a regularized network essentially mean that as our input changes randomly (i.e. noise), the change doesn’t have a huge impact on the network and its output. This makes it difficult for the network to learn the noise and respond to it. Conversely, in an unregularized network with higher weights (peaks), small random changes to the input can have a much larger impact on the behavior of the network and the information it carries.

Is this why my machine might be slow?

Wait. I have how many tabs open? I can’t count beyond the fingers I have, so not sure if this is accurate. Maybe time to reboot. 🙂

image

PS – Yes, I can count using more than 10 (toes, remember?)

Setting up your own Model 3 “keyfob” – using an IoT Button

Some time ago, I talked about my Tesla Model 3 “keyfob”, which essentially uses an Amazon IoT Button to call some of the Tesla APIs and “talk” to the car. This, for me, is cool as it allows my daughter to unlock and lock the car at home. And of course it is a bit geeky, and allows one to play with more things. 🙂

Since publishing this, I have been surprised by how many of you pinged me asking for details on how you could do this for yourselves. Given the level of interest, I thought I would document it and outline the steps here. I do have to warn you that this will be a little long – it entails getting an IoT Button configured, and then the code deployed. Before you get started, especially if you aren’t techy, I would recommend going through the post completely, so you get a sense of what is needed.

At a high level, below are the steps that you need to go through to get this working. This might seem cumbersome and a lot, but it is not that difficult. Also, if you prefer, you can follow the official AWS documentation online here.

  1. Create an AWS login (if you have an existing Amazon.com login, you can use that if you prefer)
  2. Order an IoT Button
  3. Register the IoT Button in the AWS registry (this is done via the AWS console)
  4. Create (and activate) a device certificate
  5. Create an IoT security policy
  6. Attach the IoT security policy (from the previous step) to the device certificate created earlier
  7. Attach the IoT security policy (now with the associated certificate) to the IoT button
  8. Configure the IoT button
  9. Deploy some code – this is done via a serverless function (also called a Lambda function) – this is the code that gets executed
  10. Test and deploy
  11. Enjoy the fob! 🙂

Step 1 – Get the IoT Button

Of course you need to get an IoT Button; I got the AWS IoT Button (2nd Generation), which is what I would recommend.

Step 2 – Login to AWS IoT Console

Open the AWS home page and log in with your amazon.com credentials. Of course, if you don’t have an Amazon.com account, then you want to click on Sign Up on the top right corner to get started.

AWS Login

After I login, I see something similar to the screenshot below. Your exact view might differ a little.

AWS Console

I recommend changing the region to one closer to you. To do this, click on the region on the top right corner and choose the region that is physically closest to you. In the longer run this will help with latency between you clicking the button and the car responding. For example, in my case, Oregon makes the most sense.

AWS Region Selection

Once you have an AWS account set up, log in to the AWS IoT console, or on the AWS page from the previous step, scroll down to IoT Core as shown in the screenshot below.

AWS Console

Step 3 – Register IoT Button

Next step would be to register your IoT button – which of course means you physically have the button with you. The best way to register is to follow the instructions here. I don’t see much sense in trying to replicate that here.

Note: If you are not very technical, or comfortable, it might be best to use the “AWS IoT Button Dev” app, which is available both on the App Store (for iOS) and Google Play (for Android).

Once you have registered a button (it doesn’t matter what you call it) – it will show up similar to the screenshot below. I only have one device listed.

List of IoT things

Step 4 – Create a Device Certificate

Next, we need to create and activate a certificate for the device. Without this, the button won’t work. The certificate (which is an X.509 certificate) protects the communication between the button and AWS.

For most people, the one-click certificate creation that AWS has is probably the way to go. To get to this, on the AWS IoT console, click on Secure and then choose Certificates on the left, if not already selected, as shown below. I already have a certificate, which you can see in the screenshot below.

Certificates

If you need to create a certificate, click on the Create button on the top right corner, and choose one of the options shown in the image below. In most cases you would want to use the One-click certificate option.

Certificate creation options

NOTE: Once you create a Certificate, you get three files (these are the keys) that you need to download and keep safe. The certificate itself can be downloaded anytime, but the private and the public keys CANNOT be retrieved again after you close this page. It is IMPORTANT that you download these and save them in a safe place.

Certificate Keys

Once you have these downloaded, click on Activate at the bottom. You will see a different certificate number than what you are seeing here – and don’t worry, I have long deleted what you are seeing on this screen. 🙂

You can also see these in the developer guide on AWS documentation.

Step 5 – Create an IoT Security Policy

The next step is to go back to the AWS IoT console page and click on Policies under Secure. This is used to create an IoT policy that you will need to attach to the certificate. Once you have a policy created, it will look something like the screenshot below.

IoT Policies

To create a policy, click on Create (or you might be prompted automatically if you don’t have one). On the create screen, in the Name field you can enter anything that you prefer. I would suggest naming it something that you can remember and differentiate, in case you will have more than one button. In my case, I named it the same as my device.

  • In the policy statements, for Action, enter “iot:Connect” – without the quotes; this is case sensitive, so make sure you match it exactly.
  • For the Resource ARN enter “*” (again without the quotes) as shown below.
  • And finally for the effect, make sure “Allow” is checked.
  • And click on Create at the bottom.
IoT Policy Creation

After this is created, you will see the policies listed as shown below. You can see the new one we just created as “WhateverNameYouWillRecognize”. You can also see these and more details in the developer documentation – Create an AWS IoT Policy.

IoT Policies

Step 6 – Attach an IoT Policy

The next step is to attach the policy just created to the certificate created earlier. To do that, click on Secure and then Certificates on the left, and then click on the three dots (the ellipsis) on the top right of the certificate you created earlier. From the menu that you get, choose “Attach Policy” as shown below.

Attach Policy to Certificate

From the resulting menu, select the policy that you had created earlier and select Attach. Using a sensible name that you would recognize would be helpful. You can also see these details on the developer documentation.

Attach Policy to Certificate

Step 7 – Attach Certificate to IoT Device

The next step is to attach the certificate to the IoT device (or ‘thing’). A device must have a certificate, a private key, and a root CA certificate to authenticate with AWS. Amazon also recommends attaching a device certificate to the device – this probably isn’t helpful right now, but might be in the future if you start playing with this more.

To do this, select the certificate under Secure on the left, and same as the previous step, click on the three dots on the top right corner and select “Attach thing”.

Attach Certificate

And from the next screen select the IoT button that you registered earlier, and select “Attach”.

Attach Certificate

Step 8 – Configure IoT Button

To validate that everything is set up correctly, the certificate needs to be associated with both a policy and a thing (the IoT button in our case). So, in the Certificates menu on the left, select your certificate by clicking on it (not the three dots this time – the name itself). You will see a new screen that shows the details of the certificate, as shown below.

Certificate Details

And on the new menu on the left, if you click on Policies you should see the policy you created, and the Things should have the IoT button you created earlier.

Once all of this is done the next step is to configure the device. You can see more detailed steps on this on the developer guide here.

  • KEY TIP: The documentation doesn’t make it too obvious, but as part of configuring, the device (IoT Button) becomes an access point that you need to connect to, in order to upload the certificate and key you created earlier. You cannot do this from a phone, and it is best done from a desktop/laptop that has wifi. Whilst these days all laptops have a wifi card, that isn’t necessarily true for desktops. So use a machine with wifi that you can temporarily connect to the access point that the IoT device creates.
  • Note: this is only needed to get the device configured to authenticate with AWS and to get on your wifi network; once that is done, you don’t need to do this again.
  • Once you have configured the device as outlined (https://docs.aws.amazon.com/iot/latest/developerguide/configure-iot.html) then continue to the next step.

Step 9 – Deploy some code

At last we are getting to the interesting part – a lot of what we were doing until now was getting the button configured and ready.

Now that you have an IoT button configured and registered, the next step is to deploy some code. For this you need to set up a Lambda function using the AWS Lambda Console.

When you log in, click on Create function. On the Create function screen, choose the Blueprints option as shown below. You can see some of these in the developer documentation here.

Create Function screen

Step 10 – Blueprint Search

In the Blueprints search box (which says Filter by tags), type in “button” (without quotes) and press enter. You should see an option called “iot-button-email” as shown below; select that and click Configure on the bottom right corner.

IoT Button filter

Step 11 – Basic Information

On the next screen, which says “Basic information”, enter the details as shown below. The names should be meaningful enough for you to remember. Roles can be reused across other areas; for now you can use a simple name, something like “unlockCar”, or “unlockCarSomeName” if you have more than one vehicle. The policy template should already be populated, and you shouldn’t need to do anything else.

Function basic information

For the 2nd half – AWS IoT Trigger, select the IoT type as “IoT Button” and enter your device serial number as outlined in the screenshot below.

IoT Trigger

It won’t hurt to download this certificate and key in addition to the ones created separately, and save them in different folders. As for the Lambda function code, the template code doesn’t matter, as we will be deleting it all. At this point it will be read-only anyway, and you won’t be able to modify anything – as shown in the screenshot below.

Lambda function

And finally, scrolling down more, you will see the environment variables. Here is where you need to specify your Tesla credentials, so the function can create the token and call the Tesla API. For that you need the following two variables: TESLA_EMAIL and TESLA_PASS. These are case sensitive, so you need to enter them as is. And then finally click on Create function.

Environment Variables

Step 12 – Code upload

Once you create the function, you will see something like the screen below. In my case the function is called “unlockSquirty”, which is what you are seeing. The Configuration page is divided into two parts. The top part is the designer, which visually shows you the triggers that execute the function on the left, and what it outputs to on the right hand side. Below the designer is the editor, where one can edit the code inline or upload a zip file with the code.

In the Function code section, in the first dropdown on the left (Code entry type), select “Upload a .zip file”.

And on the next screen upload the function package that you can download from here.

  • Make sure the Runtime is Node.js 8.10
  • Keep the Handler as the default.
  • Double-check your environment variables contain TESLA_EMAIL and TESLA_PASS.

Scroll down, and in the Basic settings, change the timeout to 1 minute. We run this asynchronously, and adding a little buffer is better. You can leave all the other settings at their defaults. If your network might be iffy, you can make this 2 minutes.

Environment Settings

Step 13 – Code Publish

Once you have entered all of this, click on Save on the top right corner, and then publish a new version. Finally, once it is published, you will be able to see the code show up as shown in the screenshot below.

Again, a single click will unlock the car, a double-click will lock it, and a long press (holding it for 2-3 seconds) will open the charge port door.

And here is the code:

 var tjs = require('teslajs');

 var username = process.env.TESLA_EMAIL;
 var password = process.env.TESLA_PASS;

 exports.handler = (event, context, callback) => 
 {
  tjs.loginAsync(username, password).done(function(result) 
  {
   var token = JSON.stringify(result.authToken);
   if (token)
    console.log("Login Succesful!");

   var options = 
   {
    authToken: result.authToken
   };

   tjs.vehicleAsync(options).done(function(vehicle) 
   {
    console.log("Vehicle " + vehicle.vin + " is: " + vehicle.state);
    var options = 
    {
     authToken: result.authToken,
     vehicleID: vehicle.id_s
    };

    if(event.clickType == "SINGLE")
    {
     console.log("Single click, attempting to UNLOCK");
     tjs.doorUnlockAsync(options).done(function(unlockResult) 
     {
      console.log("Doors are now UNLOCKED");
     });
    }
    else if(event.clickType == "DOUBLE")
    {
     console.log("Double click, attempting to LOCK");
     tjs.doorLockAsync(options).done(function(lockResults) {
      console.log("Doors are now LOCKED");
     });              
    }
    else if(event.clickType == "LONG")
    {
     console.log("Long click, attempting to CHARGE PORT");
     tjs.openChargePortAsync(options).done(function(openResult) {
      console.log("Charge port is now OPEN");
     });              
    }    
   });
  });
 };
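
To test this without running out to the button, you can configure a test event for the function in the Lambda console that mimics a button press – the code above only looks at the clickType property, which the IoT button sets to SINGLE, DOUBLE, or LONG:

{
  "clickType": "SINGLE"
}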

Tesla .ssq file?

Tonight, I saw a large download by the car, and noticed that it was a .ssq file. The file name is consistent with the firmware naming convention, but I am not sure what it is. The file itself is 5.11 GB, and in my case its name starts with “NA”. I am guessing this might be the maps it is updating.

Below are a couple of screenshots showing this. I am trying to make sense of the binary file, but not making much headway.

Curious, anyone has any ideas?

Update: I found out what .ssq files are; read up more here.

Neural Network – Cheat Sheet

Neural networks today help with a great set of tasks that until very recently weren’t possible at all – from computer vision, to medical diagnosis, to speech translation – and form a key cornerstone of a lot of the ‘magic’ that Machine Learning and AI offer today.

I did blog about neural network types (and MarI/O) some time back. I surely cannot take credit for creating these three cheat sheets, but they are awesome, and I hope you get to use and enjoy them too.

Neural Network Graphs

Clearing out Windows 10 command prompt history

My command prompt history is quite long, and over time a lot of it is essentially garbage. I was looking for a way to clean it out. Most of the solutions I found online were not correct – I don’t know if things changed over time, but on the version of Windows I am on (Windows 10 Pro 1803), they did not work.

So, here are two ways you can do this. One is using the Registry Editor (RegEdit), and the other is running a simple command that you can either copy and paste from below, or download as a script and run.

If you are going to be using RegEdit (and living dangerously), then press WinKey + R, type “regedit” (without quotes), and press enter to get the Registry Editor going, as shown below.

Run command to start Registry Editor

In the new window, navigate to the following key and delete it: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU. You can right-click on the key name and choose Delete.

It is important to double-check, because if you miss, or delete something else, there is no recovery. (Why do you think I was saying you like to live dangerously?) See the screenshot below.

NOTE: It is always recommended to back up the registry before doing this, so at least you can restore it to its previous state. To back up, select File -> Export.

A better, and less dangerous, way is to run the following command in an elevated command prompt (i.e. an Admin command prompt), which does the same thing but more safely. You can just copy the command from below and paste it. Or, alternatively, you can download this simple script and run it locally (also from an elevated command prompt).

reg delete "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU" /f
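
And if you want a command-line equivalent of the registry backup mentioned above, reg can export the key before you delete it (the backup filename here is just an example):

reg export "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU" RunMRU-backup.reg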

Tesla debug/diagnostic screens

I don’t know how to get to the debug / dev mode on a Tesla, but I did come across this old post from someone who was on a test drive in a car which did have this mode enabled.

Now, this is quite old, so a lot has changed, but I am impressed at how much of the components and foundational architecture were already set up. I am particularly impressed that each cell in the battery pack can report its state. The BMS that you see is the Battery Management System – that firmware is separate from the car’s firmware.

Tesla diagnostic screen

You can see more photos and geek out online here.

And of course, if you really want to geek out, then check out su-tesla, where Hemera has really gone to town. I don’t know how to do what she has done, and I have a lot of respect for it – she has a lot of guts. I am also not sure what the wife would think if I tried it – she might kick me out. Maybe. 🙂

I am curious though, if those ‘custom’ Ethernet connectors are M12 connectors (PDF) which are quite standard in some industries. Even Amazon sells cables for them.

And finally, from a (relatively) more recent update, the AutoPilot has a tremendous amount of data. As reported here, and as you can see in the video below, the volume of data is massive, and quite interesting. For example, what decides that there are 4 virtual lanes? The car below is a US car (the country code 840 is an ISO 3166 code).

Thought of the day

Beware of programmers that carry screwdrivers

– Unknown