## Setting up your own Model 3 “keyfob” – using an IoT Button

Some time ago, I talked about my Tesla Model 3 “keyfob”, which essentially uses an Amazon IoT button to call some of the Tesla APIs and “talk” to the car. This, for me, is cool as it allows my daughter to unlock and lock the car at home. And of course it is a bit geeky, allowing one to play with more things. 🙂

Since publishing this, I was surprised how many of you pinged me asking for details on how you could do this for yourselves. Given the level of interest, I thought I would document this and outline the steps here. I do have to warn you that this will be a little long – it entails getting an IoT Button configured, and then the code deployed. Before you get started, and especially if you aren’t techy, I would recommend going through the post completely, so you get a sense of what is needed.

At a high level, below are the steps that you need to go through to get this working. This might seem cumbersome and a lot, but it is not that difficult. Also, if you prefer, you can follow the official AWS documentation online here.

1. Create an AWS login (if you have an existing Amazon.com login, you can use the same one if you prefer)
2. Order an IoT Button
3. Register the IoT Button in the AWS registry (this is done via the AWS console)
4. Create (and activate) a device certificate
5. Create an IoT security policy
6. Attach the IoT security policy (from the previous step) to the device certificate created earlier
7. Attach the IoT security policy (now with the associated certificate) to the IoT button
8. Configure the IoT button
9. Deploy some code – this is done via a serverless function (also called a Lambda function) – this is the code that gets executed
10. Test and deploy
11. Enjoy the fob! 🙂

#### Step 1 – Get the IoT Button

Of course you need to get an IoT Button; I got the AWS IoT Button (2nd Generation), which is what I would recommend.

#### Step 2 – Login to AWS IoT Console

Open the AWS home page and login with your Amazon.com credentials. Of course, if you don’t have an Amazon.com account, then you will want to click on Sign Up in the top right corner to get this started.

After I login, I see something similar to the screenshot below. Your exact view might differ a little.

I recommend changing the region to one closer to you. To do this, click on the region in the top right corner and choose the one that is physically closest to you. In the longer run this will help with latency between you clicking the button and the car responding. For example, in my case, Oregon makes the most sense.

Once you have an AWS account set up, login to the AWS IoT console, or on the AWS page from the previous step, scroll down to IoT Core as shown in the screenshot below.

#### Step 3 – Register IoT Button

The next step is to register your IoT button – which of course means you physically have the button with you. The best way to register is to follow the instructions here; I don’t see much sense in trying to replicate that in this post.

Note: If you are not very technical, or comfortable, it might be best to use the “AWS IoT Button Dev” app, which is available both on the App Store (for iOS) and Google Play (for Android).

Once you have registered a button (it doesn’t matter what you call it) – it will show up similar to the screenshot below. I only have one device listed.

#### Step 4 – Create a Device Certificate

Next, we need to create and activate a certificate for the device. Without this, the button won’t work. The certificate (an X.509 certificate) protects the communication between the button and AWS.

For most people, the one-click certificate creation that AWS offers is probably the way to go. To get to this, on the AWS IoT console, click on Secure and then choose Certificates on the left, if not already selected, as shown below. I already have a certificate, which you can see in the screenshot below.

If you need to create a certificate, click on the Create button in the top right corner, and choose one of the options shown in the image below. In most cases you will want to use the One-click certificate creation option.

NOTE: Once you create a certificate, you get three files (the certificate and its keys) that you need to download and keep safe. The certificate itself can be downloaded anytime, but the private and public keys CANNOT be retrieved again after you close this page. It is IMPORTANT that you download these and save them in a safe place.

Once you have these downloaded, click on Activate at the bottom. You will see a different certificate number than what is shown here – and don’t worry, I have long deleted the one you are seeing on this screen. 🙂

You can also see these in the developer guide on AWS documentation.

#### Step 5 – Create an IoT Security Policy

The next step is to go back to the AWS IoT console page and click on Policies under Secure. This is used to create an IoT policy that you will attach to the certificate. Once you have a policy created, it will look something like the screenshot below.

To create a policy, click on Create (or you might be prompted automatically if you don’t have one). On the create screen, you can enter anything you prefer for the Name. I would suggest naming it something you can remember and differentiate if you will have more than one button. In my case I named it the same as my device.

• In the policy statements, for Action enter “iot:Connect” – without the quotes; this is case sensitive, so make sure you match it exactly.
• For the Resource ARN enter “*” (again without the quotes) as shown below.
• And finally, for the Effect, make sure “Allow” is checked.
• Then click on Create at the bottom.
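For reference, the policy document the console generates from those inputs is a JSON document along these lines (a sketch of the standard AWS IoT policy format; your console may format it slightly differently):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:Connect",
      "Resource": "*"
    }
  ]
}
```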

After this is created, you will see the policies listed as shown below. You can see the new one we just created with “WhateverNameYouWillRecognize“. You can also see these and more details in the developer documentation – Create an AWS IoT Policy.

#### Step 6 – Attach an IoT Policy

The next step is to attach the policy just created to the certificate created earlier. To do that, click on Secure and Certificates on the left, and then click on the three dots (the ellipsis) on the top right of the certificate you created earlier. From the menu that appears, choose “Attach policy” as shown below.

From the resulting menu, select the policy that you created earlier and click Attach. Using a sensible name that you would recognize is helpful here. You can also see these details in the developer documentation.

#### Step 7 – Attach Certificate to IoT Device

The next step is to attach the certificate to the IoT device (or thing). A device must have a certificate, a private key and a root CA certificate to authenticate with AWS. Amazon also recommends attaching a device certificate to the device – this probably isn’t helpful right now, but might be in the future if you start playing with this more.

To do this, select the certificate under Secure on the left, and, same as the previous step, click on the three dots in the top right corner and select “Attach thing”.

And from the next screen select the IoT button that you registered earlier, and click “Attach”.

#### Step 8 – Configure IoT Button

Now let’s validate that everything is set up correctly – the certificate needs to be associated with both a policy and a thing (the IoT button in our case). So, in the Certificates menu on the left, select your certificate by clicking on it (not the three dots this time – the name itself). You will see a new screen that shows the details of the certificate, as shown below.

On the new menu on the left, if you click on Policies you should see the policy you created, and Things should have the IoT button you registered earlier.

Once all of this is done the next step is to configure the device. You can see more detailed steps on this on the developer guide here.

• KEY TIP: The documentation doesn’t make it obvious, but as part of configuring, the device (IoT Button) becomes a Wi-Fi access point that you need to connect to in order to upload the certificate and private key you created earlier. You cannot do this from a phone, and it is best done from a desktop/laptop that has Wi-Fi. Whilst these days all laptops have a Wi-Fi card, that isn’t necessarily true for desktops. So use a machine with Wi-Fi that you can temporarily connect to the access point the IoT device creates.
• Note this is only needed to get the device configured to authenticate with AWS and get on your Wi-Fi network; once that is done you don’t need to do this again.
• Once you have configured the device as outlined (https://docs.aws.amazon.com/iot/latest/developerguide/configure-iot.html), continue to the next step.

#### Step 9 – Deploy some code

At last we are getting to the interesting part – a lot of what we were doing until now was getting the button configured and ready.

Now that you have an IoT button configured and registered, the next step is to deploy some code. For this you need to set up a Lambda function using the AWS Lambda console.

When you login, click on Create Function. On the Create function screen, choose the Blueprints option as shown below. You can see some of these in the developer documentation here.

#### Step 10 – Blueprint Search

In the Blueprints search box (which says “Filter by tags”), type in “button” (without quotes) and press enter. You should see an option called “iot-button-email” as shown below; select that and click Configure in the bottom right corner.

#### Step 11 – Basic Information

On the next screen, “Basic information”, enter the details as shown below. The names should be meaningful enough for you to remember. Roles can be reused across other areas; for now you can use a simple name, something like “unlockCar”, or “unlockCarSomeName” if you have more than one vehicle. The policy template should already be populated, and you shouldn’t need to do anything else.

For the second half – AWS IoT Trigger – select the IoT type as “IoT Button” and enter your device serial number as outlined in the screenshot below.

It won’t hurt to download these certificates and keys in addition to the ones created separately, and save them in different folders. As for the Lambda function code, the template code doesn’t matter, as we will be deleting it all. At this point it is read-only and you won’t be able to modify anything – as shown in the screenshot below.

And finally, scrolling down more, you will see the environment variables. Here is where you need to specify your Tesla credentials for the function to be able to create the token and call the Tesla API. For that you need the following two variables: TESLA_EMAIL and TESLA_PASS. These are case sensitive, so you need to enter them exactly as shown. Then finally click on Create function.
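As a sketch of how the function picks these up: in a Node.js Lambda, environment variables surface on `process.env`, so a small guard like the following (the helper name is mine, not part of the deployed code) can verify both are set:

```javascript
// Returns true only when both Tesla credential variables are present;
// the Lambda code reads the same names via process.env.
function haveCredentials(env) {
    return Boolean(env.TESLA_EMAIL && env.TESLA_PASS);
}

console.log(haveCredentials(process.env) ? "Credentials set" : "Missing TESLA_EMAIL/TESLA_PASS");
```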

#### Step 12 – Code upload

Once you create a function, you will see something like the screen below. In my case the function is called “unlockSquirty”, which is what you are seeing. The Configuration page is divided into two parts. The top part is the designer, which visually shows you which triggers execute the function, and what it outputs to on the right-hand side. Below the designer is the editor, where one can edit the code inline or upload a zip file with the code.

In the Function code section, in the first drop-down on the left (Code entry type), select “Upload a .zip file”.

• Make sure the Runtime is Node.js 8.10.
• Keep the Handler as the default.
• Double check that your environment variables contain TESLA_EMAIL and TESLA_PASS.

Scroll down to Basic settings and change the timeout to 1 minute. We run this asynchronously, and adding a little buffer is better. You can leave all the other settings at their defaults. If your network might be iffy, you can make this 2 minutes.

#### Step 13 – Code Publish

Once you have entered all of this, click on Save in the top right corner and then publish a new version. Finally, once it is published, you will be able to see the code show up as shown in the screenshot below.
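To test from the Lambda console before pressing the physical button, you can use a test event shaped like what the IoT button sends (the serial number and voltage below are made-up placeholders, not real values):

```javascript
// Sample test event mimicking an AWS IoT button press;
// clickType can be "SINGLE", "DOUBLE", or "LONG".
const testEvent = {
    serialNumber: "G030XXXXXXXXXXXX",   // placeholder, not a real serial
    batteryVoltage: "2000mV",
    clickType: "SINGLE"
};

console.log(JSON.stringify(testEvent));
```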

Again, a single click will unlock the car, a double click will lock it, and a long press (holding it for 2–3 seconds) will open the charge port door.

And here is the code:

var tjs = require('teslajs');

exports.handler = (event, context, callback) =>
{
    // Log in using the credentials from the environment variables
    tjs.loginAsync(process.env.TESLA_EMAIL, process.env.TESLA_PASS).done(function(result)
    {
        var token = JSON.stringify(result.authToken);
        if (token)
            console.log("Login successful, token acquired");

        var options =
        {
            authToken: result.authToken
        };

        tjs.vehicleAsync(options).done(function(vehicle)
        {
            console.log("Vehicle " + vehicle.vin + " is: " + vehicle.state);
            var options =
            {
                authToken: result.authToken,
                vehicleID: vehicle.id_s
            };

            if (event.clickType == "SINGLE")
            {
                console.log("Single click, attempting to UNLOCK");
                tjs.doorUnlockAsync(options).done(function(unlockResult)
                {
                    console.log("Doors are now UNLOCKED");
                });
            }
            else if (event.clickType == "DOUBLE")
            {
                console.log("Double click, attempting to LOCK");
                tjs.doorLockAsync(options).done(function(lockResult)
                {
                    console.log("Doors are now LOCKED");
                });
            }
            else if (event.clickType == "LONG")
            {
                console.log("Long click, attempting to open CHARGE PORT");
                tjs.openChargePortAsync(options).done(function(openResult)
                {
                    console.log("Charge port is now OPEN");
                });
            }
        });
    });
};


## Tesla .ssq file?

Tonight, I saw a large download by the car – a .ssq file. The file name is consistent with the firmware naming convention, but I am not sure what it is. The file itself is 5.11 GB, and in my case its name starts with “NA”. I am guessing this might be the maps updating.

Below are a couple of screenshots showing this. I am trying to make sense of the binary file, but not making much headway.

Curious – does anyone have any ideas?

## Neural Network – Cheat Sheet

Neural networks today help with a great set of tasks that until very recently weren’t possible at all – from computer vision, to medical diagnosis, to speech translation – and form a key cornerstone of a lot of the ‘magic’ that Machine Learning and AI offer today.

I did blog about neural network types (and MarI/O) some time back; I surely cannot take credit for creating these three cheat sheets, but they are awesome, and I hope you get to use and enjoy them too.

## Clearing out Windows 10 command prompt history

My command prompt history is quite long, and a lot of it over time is essentially garbage. I was looking for a way to clean it out. Most of the solutions I found online were not correct – I don’t know if things changed over time, but on the version of Windows I am on (Windows 10 Pro 1803), they did not work.

So, here are two ways that you can do this. One is using the Registry Editor (RegEdit), and the other is running a simple script that you can either copy and paste from below, or download and run.

If you are going to use RegEdit, and are living dangerously, then press WinKey + R, type “regedit” (without quotes) and press enter to get the Registry Editor going as shown below.

In the new window, navigate to the following key: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU and delete it. You can right-click on the key name and choose Delete.

It is important to double check, because if you miss it, or delete something else, there is no recovery. (Why do you think I was saying you like to live dangerously?) See the screenshot below.

NOTE: It is always recommended to back up the registry before doing this, so at least you can restore it back to its previous state. To back up, select File -> Export.

A better, and less dangerous, way is to run the following command in an elevated command prompt (i.e. an Admin command prompt), which will do the same thing but more safely. You can just copy the command from below and paste it. Alternatively, you can download this simple script and run it locally (also from an elevated command prompt).

reg delete "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU" /f

## Tesla debug/diagnostic screens

I don’t know how to get to debug/dev mode on a Tesla, but I did come across this old post from someone whose test-drive car did have this mode enabled.

Now this is quite old, so a lot has changed, but I am impressed by how much of the components and foundational architecture was already set up. I am particularly impressed that each cell in the battery pack can report its state. The BMS that you see is the Battery Management System – that firmware is separate from the car’s firmware.

You can see more photos and geek out online here.

And of course, if you really want to geek out, then check out su-tesla, where Hemera has really gone to town. I don’t know how to do this, and I have a lot of respect for Hemera for doing it – she has a lot of guts. I am also not sure what the wife would think if I tried it – she might kick me out. Maybe. 🙂

I am curious, though, whether those ‘custom’ Ethernet connectors are M12 connectors (PDF), which are quite standard in some industries. Even Amazon sells cables for them.

And finally, from a more (relatively) recent update, the AutoPilot has a tremendous amount of data. As reported here, and as you can see in the video below, the volume of data is massive, and quite interesting. For example, what decides that there are 4 virtual lanes? The car below is a US car (the country code 840 is an ISO 3166 code).

## Tesla voice command list

I was trying to understand more about the capabilities of the car and what options I have. The voice recognition in the car is quite impressive; it seems as good at understanding as Amazon’s Echo was, at least in the early days of “Alexa” (but that is a different story for another time).

I was trying to understand what things I can control, or the options one has, via voice. I am still not used to it, and keep forgetting that it is an option, especially when driving. As of the v8 firmware series, the following are the choices that work for voice. Credit to Ingineer for discovering the full list when hacking the car.

The options in English are listed below; this is missing the “ho ho ho” Easter egg and also the “cancel navigation” command.

• navigate: "drive to", "drive 2", "dr to", "dr 2", "drive", "dr", "navigate to", "navigate 2", "navigate", "where is", "take me to", "take me 2", "take me"
• call: "call", "dial", "phone"
• note: "note", "report", "bug note", "bug report"
• play: "play", "plays", "listen to", "listens to", "listen 2", "listens 2"

And if you are keen to know, these are stored internally as a JSON file, and the full list is here:

{
  "voice_command_list" : [
    { "command_type" : "navigate", "description" : "drive to", "command_regexp" : "^drive to\\b(.*)$" },
    { "command_type" : "navigate", "description" : "drive 2", "command_regexp" : "^drive 2\\b(.*)$" },
    { "command_type" : "navigate", "description" : "dr to", "command_regexp" : "^dr to\\b(.*)$" },
    { "command_type" : "navigate", "description" : "dr 2", "command_regexp" : "^dr 2\\b(.*)$" },
    { "command_type" : "navigate", "description" : "drive", "command_regexp" : "^drive\\b(.*)$" },
    { "command_type" : "navigate", "description" : "dr", "command_regexp" : "^dr\\b(.*)$" },
    { "command_type" : "navigate", "description" : "navigate to", "command_regexp" : "^navigate to\\b(.*)$" },
    { "command_type" : "navigate", "description" : "navigate 2", "command_regexp" : "^navigate 2\\b(.*)$" },
    { "command_type" : "navigate", "description" : "navigate", "command_regexp" : "^navigate\\b(.*)$" },
    { "command_type" : "navigate", "description" : "where is", "command_regexp" : "^where is\\b(.*)$" },
    { "command_type" : "navigate", "description" : "take me to", "command_regexp" : "^take me to\\b(.*)$" },
    { "command_type" : "navigate", "description" : "take me 2", "command_regexp" : "^take me 2\\b(.*)$" },
    { "command_type" : "navigate", "description" : "take me", "command_regexp" : "^take me\\b(.*)$" },
    { "command_type" : "navigate", "description" : "naviguer à", "command_regexp" : "^naviguer à\\b(.*)$" },
    { "command_type" : "navigate", "description" : "naviguer au", "command_regexp" : "^naviguer au\\b(.*)$" },
    { "command_type" : "navigate", "description" : "aller à", "command_regexp" : "^aller à\\b(.*)$" },
    { "command_type" : "navigate", "description" : "aller au", "command_regexp" : "^aller au\\b(.*)$" },
    { "command_type" : "navigate", "description" : "nach navigieren", "command_regexp" : "^nach\\b(.*)\\bnavigieren$" },
    { "command_type" : "navigate", "description" : "zur navigieren", "command_regexp" : "^zur\\b(.*)\\bnavigieren$" },
    { "command_type" : "navigate", "description" : "zu navigieren", "command_regexp" : "^zu\\b(.*)\\bnavigieren$" },
    { "command_type" : "navigate", "description" : "nach fahren", "command_regexp" : "^nach\\b(.*)\\bfahren$" },
    { "command_type" : "navigate", "description" : "zur fahren", "command_regexp" : "^zur\\b(.*)\\bfahren$" },
    { "command_type" : "navigate", "description" : "zu fahren", "command_regexp" : "^zu\\b(.*)\\bfahren$" },
    { "command_type" : "navigate", "description" : "wo ist", "command_regexp" : "^wo ist\\b(.*)$" },
    { "command_type" : "navigate", "description" : "navigiere nach", "command_regexp" : "^navigiere nach\\b(.*)\\b$" },
    { "command_type" : "navigate", "description" : "navigiere zu", "command_regexp" : "^navigiere zu\\b(.*)\\b$" },
    { "command_type" : "navigate", "description" : "导航到", "command_regexp" : "^导航到(.*)$" },
    { "command_type" : "navigate", "description" : "在哪", "command_regexp" : "^(.*)在哪$" },
    { "command_type" : "navigate", "description" : "开车到", "command_regexp" : "^开车到(.*)$" },
    { "command_type" : "navigate", "description" : "导航去", "command_regexp" : "^导航去(.*)$" },
    { "command_type" : "navigate", "description" : "導航去", "command_regexp" : "^導航去(.*)$" },
    { "command_type" : "navigate", "description" : "導航到", "command_regexp" : "^導航到(.*)$" },
    { "command_type" : "navigate", "description" : "帶我去", "command_regexp" : "^帶我去(.*)$" },
    { "command_type" : "navigate", "description" : "帶我到", "command_regexp" : "^帶我到(.*)$" },
    { "command_type" : "navigate", "description" : "去", "command_regexp" : "^去(.*)$" },
    { "command_type" : "navigate", "description" : "到", "command_regexp" : "^到(.*)$" },
    { "command_type" : "call", "description" : "call", "command_regexp" : "^call\\b(.*)$" },
    { "command_type" : "call", "description" : "dial", "command_regexp" : "^dial\\b(.*)$" },
    { "command_type" : "call", "description" : "phone", "command_regexp" : "^phone\\b(.*)$" },
    { "command_type" : "call", "description" : "appeler", "command_regexp" : "^appeler\\b(.*)$" },
    { "command_type" : "call", "description" : "composer", "command_regexp" : "^composer\\b(.*)$" },
    { "command_type" : "call", "description" : "wählen", "command_regexp" : "^(.*)\\bwählen$" },
    { "command_type" : "call", "description" : "anrufen", "command_regexp" : "^(.*)\\banrufen$" },
    { "command_type" : "call", "description" : "wähle", "command_regexp" : "^wählen\\b(.*)$" },
    { "command_type" : "call", "description" : "ruf an", "command_regexp" : "^ruf\\b(.*)\\ban$" },
    { "command_type" : "call", "description" : "rufe an", "command_regexp" : "^rufe\\b(.*)\\ban$" },
    { "command_type" : "call", "description" : "打电话给", "command_regexp" : "^打电话给(.*)$" },
    { "command_type" : "call", "description" : "打电话给", "command_regexp" : "^给(.*)打电话$" },
    { "command_type" : "call", "description" : "拨打", "command_regexp" : "^拨打(.*)$" },
    { "command_type" : "call", "description" : "打给", "command_regexp" : "^打给(.*)$" },
    { "command_type" : "call", "description" : "打電話俾", "command_regexp" : "^打電話俾(.*)$" },
    { "command_type" : "call", "description" : "打俾", "command_regexp" : "^打俾(.*)$" },
    { "command_type" : "call", "description" : "打電話去", "command_regexp" : "^打電話去(.*)$" },
    { "command_type" : "call", "description" : "打去", "command_regexp" : "^打去(.*)$" },
    { "command_type" : "call", "description" : "打電話比", "command_regexp" : "^打電話比(.*)$" },
    { "command_type" : "call", "description" : "打比", "command_regexp" : "^打比(.*)$" },
    { "command_type" : "note", "description" : "note", "command_regexp" : "^note\\b(.*)$" },
    { "command_type" : "note", "description" : "report", "command_regexp" : "^report\\b(.*)$" },
    { "command_type" : "note", "description" : "bug note", "command_regexp" : "^bug note\\b(.*)$" },
    { "command_type" : "note", "description" : "bug report", "command_regexp" : "^bug report\\b(.*)$" },
    { "command_type" : "play", "description" : "play", "command_regexp" : "^play\\b(.*)$" },
    { "command_type" : "play", "description" : "plays", "command_regexp" : "^plays\\b(.*)$" },
    { "command_type" : "play", "description" : "listen to", "command_regexp" : "^listen to\\b(.*)$" },
    { "command_type" : "play", "description" : "listens to", "command_regexp" : "^listens to\\b(.*)$" },
    { "command_type" : "play", "description" : "listen 2", "command_regexp" : "^listen 2\\b(.*)$" },
    { "command_type" : "play", "description" : "listens 2", "command_regexp" : "^listens 2\\b(.*)$" },
    { "command_type" : "play", "description" : "écouter", "command_regexp" : "^écouter\\b(.*)$" },
    { "command_type" : "play", "description" : "jouer", "command_regexp" : "^jouer\\b(.*)$" },
    { "command_type" : "play", "description" : "spielen", "command_regexp" : "^(.*)\\bspielen$" },
    { "command_type" : "play", "description" : "hören", "command_regexp" : "^(.*)\\bhören$" },
    { "command_type" : "play", "description" : "abspielen", "command_regexp" : "^(.*)\\babspielen$" },
    { "command_type" : "play", "description" : "abhören", "command_regexp" : "^(.*)\\babhören$" },
    { "command_type" : "play", "description" : "spiele", "command_regexp" : "^spiele\\b(.*)$" },
    { "command_type" : "play", "description" : "spiel", "command_regexp" : "^spiel\\b(.*)$" },
    { "command_type" : "play", "description" : "播放", "command_regexp" : "^播放(.*)$" },
    { "command_type" : "play", "description" : "收听", "command_regexp" : "^收听(.*)$" },
    { "command_type" : "play", "description" : "我想聽", "command_regexp" : "^我想聽(.*)$" }
  ]
}
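To see how the `command_regexp` entries work, here is a small sketch (my own helper, not the car’s code) that applies one of these patterns to an utterance and pulls out the argument:

```javascript
// Apply a voice command entry's regular expression to an utterance;
// the single capture group holds the argument (e.g. the destination).
function matchCommand(cmd, utterance) {
    const m = new RegExp(cmd.command_regexp).exec(utterance);
    return m ? { type: cmd.command_type, arg: m[1].trim() } : null;
}

// One entry from the list above
const driveTo = {
    command_type: "navigate",
    description: "drive to",
    command_regexp: "^drive to\\b(.*)$"
};

console.log(matchCommand(driveTo, "drive to the nearest supercharger"));
```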

Whilst the #NLP engine working on this is quite good, and impressive, I am hopeful that more options will be added. Elon did share that this is something they are working on, and it might be part of the updated v9 release coming out in the next few weeks.

## How many lines of code does it take?

One often hears of Lines of Code (LoC) as a metric. For you to get a sense of what it means, below is an infographic that outlines some popular products and services and the LoC each takes. It is always interesting to get perspective – either to appreciate some home-grown system you are managing, or to worry about a stinking pile you are going to inherit or build. 🙂

## Generating Tesla authentication token – cURL script

I did write a simple Windows (desktop) app called TeslaTokenGenerator, for those who want to create authentication tokens for their Tesla and use them with 3rd-party apps/data loggers.

TeslaTokenGenerator can also create a cURL script for you to use, if you prefer not to install anything. It is easy to find this online, but some of you have pinged me for more details on this. So, I have the script below that you can use. Once you copy it, you will need to update it with your Tesla account login details (email and password) and run it in a console (command line); it calls the same APIs to create the token, which you can then save.

curl -X POST -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW" -F "grant_type=password" -F "client_id=81527cff06843c8634fdc09e8ac0abefb46ac849f38fe1e431c2ef2106796384" -F "client_secret=c7257eb71a564034f9419ee651c7d0e5f7aa6bfbd18bafb5c5c033b093bb2fa3" -F "email=YOUR-TESLA-LOGIN-EMAIL@SOMEWHERE.COM" -F "password=YOUR-TESLA-ACCOUNT-PASSWORD" "https://owner-api.teslamotors.com/oauth/token"
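The endpoint responds with a JSON body; as a sketch (the sample values below are made up, and I am assuming the standard OAuth field names), this is how one could pull the token out in Node:

```javascript
// Parse a (made-up) sample of the OAuth response and extract the token.
const sampleResponse = '{"access_token":"abc123","token_type":"bearer","expires_in":3888000}';
const parsed = JSON.parse(sampleResponse);

// The value most 3rd-party tools ask for is the access_token;
// it is sent on subsequent API calls as an Authorization header.
const bearer = parsed.token_type + " " + parsed.access_token;
console.log(bearer);
```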

You can see the screenshots of this below too – one in Windows, and another in Linux (well Bash on Windows, but it is real Linux).

## My Tesla Model 3 “Keyfob”

Inspired by a few folks on a few forums online, I took the liberty of extending their idea using an IoT Button that acts as a simple “keyfob” for the Model 3.

The main goal was to allow my daughter to lock and unlock the car at home. She is too young to have a phone, and without a more traditional fob, this gets a little annoying.

I extended the original idea to understand the different presses (single, double, and long press), and accordingly call the appropriate API: unlock the car on a single press (think of it as a single click), lock it on a double press, and open the charge port on a long press (when one presses and holds the button for 2–3 seconds).

For those who aren’t aware, the Amazon IoT button calls a Lambda function on AWS, and by plugging into that, one can extend this. The button needs to be connected and online for this to work; in my case, it is on the home Wi-Fi network.

## Windows Tesla Auth Token Generator

If you have a Tesla, and are using (or wanting to use) 3rd-party tools or data loggers, the one thing they of course need is to authenticate your details with Tesla. A simple, but insecure, way is to use your Tesla credentials – and surprisingly, many people just happily share and use these.

I wasn’t comfortable doing this – after all, they would have access to your account, where you can control a lot of things. Also, there are a few online tools that can generate the auth token, but again I wasn’t comfortable, as I did not know what they saved and what they did not. 🙂

So, I wrote a simple Windows app that allows you to generate an auth token that you can save. The application itself is simple: you enter your Tesla credentials, click on Generate Token, and can save the generated token.

To test whether the generated token is working, click on the Test Token button. If everything is working as expected, you will see a list of the vehicles associated with your account.

If you prefer to use the cURL script, clicking on Generate cURL will generate the script and copy it to your clipboard. It works across operating systems, as you can see below (Windows and Linux), and should also work on Mac.

I do intend to open source this, so folks can have a look at the code and the Tesla REST APIs. Until then, you can download the setup from here.

## Neural network basics–Activation functions

Neural networks have a very interesting aspect – they can be viewed as a simple mathematical model that defines a function. For a given function $f(x)$, which can take any input value of $x$, there will be some kind of neural network satisfying that function. This hypothesis was proven almost 30 years ago (“Approximation by Superpositions of a Sigmoidal Function” and “Multilayer feedforward networks are universal approximators”) and forms the basis of much of the #AI and #ML use cases possible today.
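Stated a little more formally (this is my paraphrase of the result in those papers): for any continuous $f$ on a compact domain and any tolerance $\varepsilon > 0$, there is a single-hidden-layer network

$$F(x) = \sum_{i=1}^{N} v_i \, \sigma\!\left(w_i^\top x + b_i\right)$$

such that $|F(x) - f(x)| < \varepsilon$ for all $x$ in the domain, where $\sigma$ is a sigmoidal activation and $v_i, w_i, b_i$ are the network’s weights and biases.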

It is this aspect of neural networks that allows us to map any process and generate a corresponding function. Unlike a function in computer science, this function isn’t deterministic; instead it is a confidence score of an approximation (i.e. a probability). The more layers in a neural network, the better this approximation will be.

In a neural network there is typically one input layer, one output layer, and one or more layers in the middle. To the external system, only the input layer (the values of $x$) and the final output (the output of the function $f(x)$) are visible; the layers in the middle are not, and are essentially hidden.

Each layer contains nodes, which are modeled after how the neurons in the brain work. The output of each node gets propagated along to the next layer. This output is the defining character of the node, and activates the node to pass on its value to the next node; this is very similar to how a neuron in the brain fires, passing on the signal to the next neuron.

For this generalization of the function $f(x)$ outlined above to hold, the function needs to be a continuous function. A continuous function is one where small changes to the input value $x$ create small changes to the output of $f(x)$. If the changes in output are not small and the value jumps a lot, then the function is not continuous and it is difficult for it to achieve the approximation required to be used in a neural network.

For a neural network to ‘learn’, the network essentially has to try different weights and biases that produce a corresponding change to the output, ideally one closer to the result we desire. Ideally, small changes to these weights and biases correspond to small changes in the output of the function. But one isn’t sure, until we train and test the result, that small changes don’t cause bigger shifts that drastically move away from the desired result. It isn’t uncommon to see that one aspect of the result has improved while others have not, skewing the overall results.

In simple terms, an activation function is attached to the output of a node, and maps the resulting value into a bounded range, such as between 0 and 1. It is also what connects the output of one layer of a neural network to the next.

An activation function can be linear or non-linear. A linear one isn’t terribly effective, as its range is infinite. A non-linear function with a finite range is more useful, as it can be mapped as a curve; changes along this curve can then be used to calculate the difference between two points on it.

There are many types of activation functions, each with their strengths. In this post, we discuss the following six:

• Sigmoid
• Tanh
• ReLU
• Leaky ReLU
• ELU
• Maxout

1. Sigmoid function

A sigmoid function can map any input value into a probability – i.e., a value between 0 and 1. A sigmoid function is typically denoted using a sigma ($\sigma$). Some also call ($\sigma$) a logistic function. For any given input value $x$, the definition of the sigmoid function is as follows:

$\sigma(x) \equiv \frac{1}{1+e^{-x}}$

If our inputs are $x_1, x_2,\ldots$, their corresponding weights are $w_1, w_2,\ldots$, and the bias is $b$, then the previous sigmoid definition is updated as follows:

$\frac{1}{1+\exp(-\sum_j w_j x_j-b)}$
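The weighted sigmoid above is straightforward to sketch in code. A minimal Python version (plain math, no ML framework):

```python
import math

def sigmoid(z):
    """Logistic function: squashes any real z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_neuron(xs, ws, b):
    """Weighted sum of the inputs plus the bias, passed through the sigmoid."""
    z = sum(w * x for w, x in zip(ws, xs)) + b
    return sigmoid(z)

print(sigmoid_neuron([1.0, 2.0], [0.5, -0.25], 0.0))  # z = 0, so 0.5
```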

When plotted, the sigmoid function looks like the curve below. When we use this in a neural network, we essentially end up with a smoothed-out function, unlike a binary function (also called a step function) that is either 0 or 1.

For the sigmoid function, as $x \rightarrow \infty$, $\sigma(x)$ tends towards 1. And as $x \rightarrow -\infty$, $\sigma(x)$ tends towards 0.

And this smoothness of $\sigma$ is what creates the small changes in the output that we desire – where small changes to the weights ($\Delta w_j$) and small changes to the bias ($\Delta b$) produce a small change in the output ($\Delta \mbox{output}$).

Fundamentally, changing these weights and biases is what gives us either a step function or small changes. We can show this as follows:

$\Delta \mbox{output} \approx \sum_j \frac{\partial \, \mbox{output}}{\partial w_j} \Delta w_j + \frac{\partial \, \mbox{output}}{\partial b} \Delta b$

One thing to be aware of is that the sigmoid function suffers from the vanishing gradient problem – convergence across the various layers is very slow after a certain point; the neurons in earlier layers learn much more slowly than the neurons in later layers. Because of this, a sigmoid is generally avoided.
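The vanishing gradient is easy to see numerically: the sigmoid’s derivative is $\sigma(z)(1-\sigma(z))$, which peaks at 0.25, so backprop multiplies in one such factor per layer and the gradient reaching the early layers shrinks geometrically. A small illustrative sketch:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_prime(z):
    """Derivative of the sigmoid: sigma(z) * (1 - sigma(z)); its maximum is 0.25."""
    s = sigmoid(z)
    return s * (1.0 - s)

# Even in the best case (z = 0, where the derivative peaks at 0.25),
# ten stacked sigmoid layers shrink the gradient by a factor of 0.25 ** 10.
gradient_factor = 1.0
for _ in range(10):
    gradient_factor *= sigmoid_prime(0.0)

print(gradient_factor)  # 0.25 ** 10, roughly 9.5e-07
```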

2. Tanh (hyperbolic tangent function)

Tanh is a variant of the sigmoid function, and still quite similar – it is a rescaled version that ranges from –1 to 1, instead of 0 to 1. As a result, its optimization is easier and it is preferred over the sigmoid function. The formula for tanh is:

$\tanh(x) \equiv \frac{e^x-e^{-x}}{e^x+e^{-x}}$

Using this, we can show that:

$\sigma(x) = \frac{1 + \tanh(x/2)}{2}$.
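This identity is easy to sanity-check numerically:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# sigma(x) and (1 + tanh(x/2)) / 2 should agree at every point.
for x in (-3.0, -0.5, 0.0, 0.5, 3.0):
    assert abs(sigmoid(x) - (1 + math.tanh(x / 2)) / 2) < 1e-12

print("identity holds")
```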

Tanh also suffers from the vanishing gradient problem. Both tanh and sigmoid are commonly used in FNNs (feedforward neural networks) – i.e. networks where the information always moves forward through the layers and there aren’t any cycles.

3. Rectified Linear Unit (ReLU)

A rectified linear unit (ReLU) is the most popular activation function in use these days.

$\sigma(x) = \begin{cases} x & x > 0\\ 0 & x \leq 0 \end{cases}$

ReLUs are quite popular for a couple of reasons – one, from a computational perspective, they are more efficient and simpler to execute – there aren’t any exponential operations to perform. And two, they don’t suffer from the vanishing gradient problem.
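A ReLU is about as simple as an activation function gets – a one-liner in Python, with no exponentials involved:

```python
def relu(x):
    """Identity for positive inputs, zero otherwise."""
    return x if x > 0 else 0.0

print(relu(3.5), relu(-2.0))  # 3.5 0.0
```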

The one limitation ReLUs have is that their output isn’t in the probability space (i.e. it can be > 1), so they can’t be used in the output layer.

As a result, when we use ReLUs, we have to use a softmax function in the output layer. The outputs of a softmax function sum up to 1, so we can interpret them as a probability distribution.

$\sum_j a^L_j = \frac{\sum_j e^{z^L_j}}{\sum_k e^{z^L_k}} = 1.$
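A softmax can be sketched in a few lines of Python; the max-subtraction below is a standard numerical-stability trick that cancels out exactly, not part of the formula above:

```python
import math

def softmax(zs):
    """Turn raw scores into a probability distribution that sums to 1."""
    m = max(zs)  # subtracting the max avoids overflow; it cancels in the ratio
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
print(sum(probs))  # 1.0 (up to floating point)
```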

Another issue that can affect ReLUs is something called the dead neuron problem (also called a dying ReLU). This can happen when, in the training dataset, some features have a negative value. When the ReLU is applied, those negative values become zero (as per the definition). If this happens at a large enough scale, the gradient will always be zero – and that node is never adjusted again (its bias and weights never get changed) – essentially making it dead! The solution? Use a variation of the ReLU called a Leaky ReLU.

4. Leaky ReLU

A Leaky ReLU allows a small slope $\alpha$ on the negative side; i.e. a negative value isn’t changed to zero, but rather scaled by something small like 0.01. You can probably see the ‘leak’ in the image below. This ‘leak’ helps increase the range, and we never get into the dying ReLU issue.
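In code, the Leaky ReLU is a tiny change from the ReLU – the negative branch returns $\alpha x$ instead of 0:

```python
def leaky_relu(x, alpha=0.01):
    """Negative inputs keep a small slope alpha instead of being zeroed out."""
    return x if x > 0 else alpha * x

print(leaky_relu(5.0), leaky_relu(-5.0))  # 5.0 -0.05
```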

5. Exponential Linear Unit (ELU)

Sometimes a ReLU isn’t fast enough – over time, a ReLU’s mean output isn’t zero, and this positive mean can add a bias for the next layer in the neural network; all this bias adds up and can slow the learning.

The Exponential Linear Unit (ELU) can address this by using an exponential function, which ensures that the mean activation is closer to zero. What this means is that for a positive value, an ELU acts just like a ReLU, and for a negative value it is bounded below by $-\alpha$ (i.e. $-1$ for $\alpha = 1$) – which pushes the mean activation closer to zero.

$\sigma(x) = \begin{cases} x & x \geqslant 0\\ \alpha (e^x - 1) & x < 0\end{cases}$

When learning, the derivative of this function is what is fed back (backprop) – so for this to be efficient, both the function and its derivative need to have a low computation cost.

And finally, there is another variant that generalizes both the ReLU and the Leaky ReLU, called the Maxout function.
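A Maxout unit takes the maximum over several learned linear functions of the input; with the right choice of pieces it reduces to a ReLU or a Leaky ReLU. A toy single-input sketch (the pieces here are illustrative, not learned):

```python
def maxout(x, pieces):
    """Max over several linear functions (w, b) of a single input x.

    pieces = [(1, 0), (0, 0)]    reduces to a ReLU;
    pieces = [(1, 0), (0.01, 0)] reduces to a Leaky ReLU.
    """
    return max(w * x + b for w, b in pieces)

print(maxout(-2.0, [(1, 0), (0, 0)]))  # 0 - behaves like a ReLU
```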

So, how do I pick one?

Choosing the ‘right’ activation function will of course depend on the data and the problem at hand. My suggestion is to default to a ReLU as a starting step, remembering that ReLUs are applied to hidden layers only. Use a simple dataset and see how that performs. If you see dead neurons, then use a Leaky ReLU or Maxout instead. It rarely makes sense to use sigmoid or tanh in the hidden layers of deep learning models these days, but they remain useful in output layers for classifiers.

In summary, activation functions are a key aspect that fundamentally influences a neural network’s behavior and output. Having an appreciation and understanding of some of these functions is key to any successful ML implementation.

## Netron – deep learning and machine learning model visualizer

I was looking at something else and happened to stumble across something called Netron, which is a model visualizer for #ML and #DeepLearning models. It is certainly much nicer than anything else I have seen. The main thing that stood out for me was that it supports ONNX, and a whole bunch of other formats: Keras, CoreML, TensorFlow (including Lite and JS), Caffe, Caffe2, and MXNet. How awesome is that?

This is essentially a cross-platform PWA (progressive web app) built using Electron (JavaScript, HTML5, CSS) – which means it can run on most platforms and run-times, from just a browser to Linux, Windows, etc. To debug it, it is best to use Visual Studio Code along with the Chrome debugger extension.

Below are a couple of examples of visualizing a ResNet-50 model – you can see both the start and the end of the visualization in the two images below to get a feel for things.

Start of ResNet-50 Model

End of ResNet-50 model

And some of the complex models seem very interesting. Here is an example of a TensorFlow Inception (v3) model.

And of course, this can get very complex (below is the same model, just zoomed out more).

I do think it is a brilliant tool to help understand the flow of things, and what one can do to optimize or fix a model. It is also very helpful for folks who are just starting to learn and appreciate the nuances.

## Machine learning use-cases

Someone recently asked me what are some of the use cases / examples of machine learning. Whilst this might seem an obvious aspect to some of us, it isn’t the case for many businesses and enterprises – despite the fact that they use elements of #ML (and #AI) in their daily life as consumers.

The discussion gets more interesting based on the specific domain and the possible use cases (understanding, of course, that some might not be sure of the use case – hence the question in the first place). But this did get me thinking, and I wanted to share one of the images we use internally as part of our training that outlines some of the use cases.

These are not 1:1, and many of them can be combined to address various use cases – for example, an #IoT device sending in sensor data that triggers a boundary condition (via a #RulesEngine), which, in addition to executing one or more business rules, can trigger an alert to a human-in-the-loop (#AugmentingWorkforce) via a #DigitalAssistant (say #Cortana) to make her/him aware, or to confirm some corrective action, and the like. The possibilities are endless – but each of these elements triggered by AI/ML is still a narrow case and needs to be thought of in the holistic picture.

## Synthetic Sound

I trained a model to create a synthetic voice that sounds like me. This is after training it with only about 30 sentences – which isn’t a lot.

To create a synthetic voice, you enter some text, which is then “transcribed” using #AI, and your synthetic voice is generated. In my case, at first I had written “AI”, which was generated as “aeey” (you can have a listen here). So for the next one, I changed “AI” to “Artificial Intelligence”.

One does need to be mindful of #DigitalEthics as this technology improves further. This is with only a very small sampling of data. Imagine what could happen with public figures – where their recordings are quite easily available in the public domain. I am thinking the ‘digital twang’ is one of the signatures and ways to stamp this as a generated sound.