Seems like my training data for the car – perhaps a hint of #bias.

#GeekyJokes #ML #AIJokes

# Category: .ml

## #ML training data

## Neural network basics – Activation functions

## Netron – deep learning and machine learning model visualizer

## Machine learning use-cases

## My self-driving car

## Certificate error with git and Donkey Car

## AI photos – style transfer

Amit Bahree’s (useless?) insight!


Neural networks have a very interesting aspect – they can be viewed as a simple mathematical model that defines a function. For any given continuous function, there is some neural network that can approximate it. This hypothesis was proven almost 20 years ago ("Approximation by Superpositions of a Sigmoidal Function" and "Multilayer feedforward networks are universal approximators") and forms the basis of many of the #AI and #ML use cases possible today.

It is this aspect of neural networks that allows us to map any process and generate a corresponding function. Unlike a function in computer science, this function isn't deterministic; instead, it produces a confidence score for an approximation (i.e., a probability). The more layers in a neural network, the better this approximation can be.

In a neural network, typically there is one input layer, one output layer, and one or more layers in the middle. To the external system, only the input layer (the input values) and the final output (the output of the function) are visible; the layers in the middle are not, and are essentially hidden.

Each layer contains nodes, which are modeled after how the neurons in the brain work. The output of each node gets propagated along to the next layer. This output is the defining character of the node, and activates the node to pass on its value to the next layer; this is very similar to how a neuron in the brain fires, passing on its signal to the next neuron.

For the generalization outlined above to hold, the function needs to be a continuous function. A continuous function is one where small changes to the input value create small changes to the output. If the output instead jumps a lot for small changes to the input, the function is not continuous, and it is difficult to achieve the approximation required for it to be used in a neural network.

For a neural network to "learn", the network essentially has to try different weights and biases that cause a corresponding change in the output, hopefully closer to the result we desire. Ideally, small changes to these weights and biases correspond to small changes in the output of the function. But one isn't sure, until we train and test the result, that small changes don't cause bigger shifts that drastically move away from the desired result. It isn't uncommon to see that one aspect of the result has improved while others have not, skewing the overall result.

In simple terms, an activation function is a function attached to the output of a node in a neural network, mapping the resulting value into a bounded range such as 0 to 1. It is also used to connect two neural networks together.

An activation function can be linear or non-linear. A linear one isn't terribly effective, as its range is infinite. A non-linear function with a finite range is more useful, as it can be mapped as a curve; changes along this curve can then be used to calculate the difference between two points on it.

There are many types of activation functions, each with their own strengths. In this post, we discuss the following six:

- Sigmoid
- Tanh
- ReLU
- Leaky ReLU
- ELU
- Maxout

**1. Sigmoid function**

A sigmoid function can map any input value into a probability – i.e., a value between 0 and 1. A sigmoid function is typically denoted by a sigma (σ); some also call it a logistic function. For any given input value z, the definition of the sigmoid function is as follows:

σ(z) = 1 / (1 + e^(−z))

If our inputs are x₁, x₂, …, their corresponding weights are w₁, w₂, …, and there is a bias **b**, then the previous sigmoid definition is updated as follows:

σ(z) = 1 / (1 + e^(−z)), where z = Σᵢ wᵢxᵢ + b

When plotted, the sigmoid function looks like the curve below. When we use it in a neural network, we essentially end up with a smoothed-out function, unlike a binary function (also called a step function) that is either 0 or 1.

For the function σ(z): as z → ∞, σ(z) tends towards 1; and as z → −∞, σ(z) tends towards 0.

And this smoothness of σ is what creates the small changes in the output that we desire – where small changes to the weights (Δw) and small changes to the bias (Δb) produce a small change in the output.

Fundamentally, changing these weights and biases is what can give us either a step function or small changes. We can show this as follows:
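
A small NumPy sketch (function names are mine, purely illustrative) of the contrast between a hard step function and the smoothed sigmoid:

```python
import numpy as np

def step(z):
    # binary / step function: output jumps straight from 0 to 1
    return np.where(z >= 0, 1.0, 0.0)

def sigmoid(z):
    # smoothed alternative: same limits, but a gradual transition
    return 1.0 / (1.0 + np.exp(-z))

# a tiny change in input around zero...
print(step(np.array([-0.1, 0.1])))                  # [0. 1.] -- a full jump
print(np.round(sigmoid(np.array([-0.1, 0.1])), 3))  # [0.475 0.525] -- a small change
```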

One thing to be aware of is that the sigmoid function suffers from the vanishing gradient problem – convergence across the various layers becomes very slow after a certain point – the neurons in earlier layers learn much more slowly than the neurons in later layers. Because of this, the sigmoid is generally avoided.
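
The vanishing-gradient effect is easy to see numerically: the sigmoid's derivative is σ(z)(1 − σ(z)), which peaks at 0.25, so each sigmoid layer scales the backpropagated signal by at most a quarter. A quick sketch (function names are mine):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    # derivative of the sigmoid: sigma(z) * (1 - sigma(z))
    s = sigmoid(z)
    return s * (1.0 - s)

print(sigmoid_grad(0.0))    # 0.25 -- the largest the gradient can ever be
print(sigmoid_grad(10.0))   # ~0.000045 -- saturated: learning all but stops

# stacking layers multiplies these factors, shrinking the signal quickly
print(0.25 ** 5)            # 0.0009765625
```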

**2. Tanh (hyperbolic tangent function)**

Tanh is a variant of the sigmoid function, but still quite similar – it is a rescaled version that ranges from −1 to 1, instead of 0 to 1. As a result, its optimization is easier and it is preferred over the sigmoid function. The formula for tanh is:

tanh(z) = (e^z − e^(−z)) / (e^z + e^(−z))

Using this, we can show that:

tanh(z) = 2σ(2z) − 1.

Tanh also suffers from the vanishing gradient problem. Both tanh and sigmoid are commonly used in FNNs (feedforward neural networks) – i.e., networks where the information always moves forward, with no feedback connections (such networks are still typically trained with backprop).
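
The "rescaled sigmoid" relationship above can be checked numerically – a quick sketch (names are mine):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-5.0, 5.0, 101)
# tanh is the sigmoid rescaled to (-1, 1) and recentred on zero
assert np.allclose(np.tanh(z), 2.0 * sigmoid(2.0 * z) - 1.0)

print(np.tanh(0.0))   # 0.0 -- zero-centred, unlike sigmoid(0) = 0.5
```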

**3. Rectified Linear Unit (ReLU)**

A rectified linear unit (ReLU) is the most popular activation function in use these days.

ReLUs are quite popular for a couple of reasons – one, from a computational perspective, they are more efficient and simpler to execute, as there aren't any exponential operations to perform. And two, they don't suffer from the vanishing gradient problem.

The one limitation ReLUs have is that their output isn't in the probability space (i.e., it can be > 1), so they **can't** be used in the output layer.

As a result, when we use ReLUs, we have to use a softmax function in the output layer. The output of a softmax function sums up to 1, so we can interpret the output as a probability distribution.

Another issue that can affect ReLUs is something called the dead neuron problem (also called a dying ReLU). This can happen when, during training, a node's inputs produce a negative pre-activation value. When the ReLU is applied, those negative values become zero (as per the definition). If this happens at a large enough scale, the gradient through that node will always be zero – and the node is never adjusted again (its bias and weights never get changed) – essentially making it dead! The solution? Use a variation of the ReLU called a Leaky ReLU.
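
A minimal sketch of ReLU hidden activations paired with a softmax output layer (function names are mine; the shift-by-max inside the softmax is a standard numerical-stability trick, not something specific to this post):

```python
import numpy as np

def relu(z):
    # clamp negative values to zero, pass positive values through
    return np.maximum(0.0, z)

def softmax(z):
    # subtract the max for numerical stability; the result sums to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

hidden = relu(np.array([2.0, -1.0, 0.5]))
print(hidden)                                     # [2.  0.  0.5] -- note the zeroed node
print(float(np.round(softmax(hidden).sum(), 6)))  # 1.0 -- a valid probability distribution

# the dying-ReLU problem: a negative pre-activation yields zero output
# and a zero gradient, so that node's weights stop being updated
print(relu(-4.0))                                 # 0.0
```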

**4. Leaky ReLU**

A Leaky ReLU allows a small slope on the negative side; i.e., the value isn't changed to zero, but rather multiplied by something small like 0.01. You can probably see the "leak" in the image below. This "leak" helps increase the range, and we never get into the dying ReLU issue.
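
A Leaky ReLU needs only one extra parameter over the plain ReLU – the negative-side slope (0.01 below is a common default; the names are mine):

```python
import numpy as np

def leaky_relu(z, alpha=0.01):
    # positive inputs pass through; negative inputs keep a small slope
    # instead of being clamped to zero, so the gradient never fully dies
    return np.where(z > 0.0, z, alpha * z)

print(float(leaky_relu(5.0)))    # 5.0  -- unchanged on the positive side
print(float(leaky_relu(-5.0)))   # -0.05 -- the "leak"
```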

**5. Exponential Linear Unit (ELU)**

Sometimes a ReLU isn't fast enough – over time, a ReLU's mean output isn't zero, and this positive mean can add a bias for the next layer in the neural network; all this bias adds up and can slow the learning.

The Exponential Linear Unit (ELU) can address this by using an exponential function on the negative side, which ensures that the mean activation is closer to zero. What this means is that for a positive value, an ELU acts just like a ReLU, and for a negative value it is bounded below by −α (commonly −1) – which pulls the mean activation closer to zero.

When learning, the derivative (slope) of the function is what is fed back during backprop – so for this to be efficient, both the function and its derivative need to have a low computation cost.
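
An ELU sketch with its derivative, to make the two properties above concrete: the output is bounded below by −α, and the derivative is cheap because it reuses the forward value (α = 1 and the names are mine):

```python
import numpy as np

def elu(z, alpha=1.0):
    # identity for positive inputs; exponential saturation towards
    # -alpha for negative inputs, pulling the mean activation to zero
    return np.where(z > 0.0, z, alpha * (np.exp(z) - 1.0))

def elu_grad(z, alpha=1.0):
    # derivative reuses the forward pass: 1 if z > 0, else elu(z) + alpha
    return np.where(z > 0.0, 1.0, elu(z, alpha) + alpha)

print(float(elu(2.0)))         # 2.0
print(float(elu(-10.0)))       # ~ -0.99995 -- bounded below by -alpha = -1
print(float(elu_grad(-10.0)))  # ~ 0.0000454 -- that is, elu(-10) + 1
```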

And finally, there is another variant that combines the ReLU and the Leaky ReLU, called a Maxout function.

**So, how do I pick one?**

Choosing the "right" activation function will of course depend on the data and the problem at hand. My suggestion is to default to a ReLU as a starting step, remembering that ReLUs are applied to hidden layers only. Use a simple dataset and see how that performs. If you see dead neurons, then use a Leaky ReLU or Maxout instead. It doesn't make much sense to use sigmoid or tanh in deep learning models these days, but they are still useful for the output layers of classifiers.

In summary, activation functions are a key aspect that fundamentally influences a neural network's behavior and output. Having an appreciation and understanding of some of these functions is key to any successful ML implementation.

I was looking at something else and happened to stumble across something called Netron, which is a model visualizer for #ML and #DeepLearning models. It is certainly much nicer than anything else I have seen. The main thing that stood out for me was that it supports ONNX, and a whole bunch of other formats: Keras, CoreML, TensorFlow (including Lite and JS), Caffe, Caffe2, and MXNet. How awesome is that?

This is essentially a cross-platform PWA (progressive web app) built using Electron (JavaScript, HTML5, CSS) – which means it can run on most platforms and run-times, from just a browser to Linux, Windows, etc. To debug it, it is best to use Visual Studio Code along with the Chrome debugger extension.

Below are a couple of examples of visualizing a ResNet-50 model – you can see both the start and the end of the visualization in the two images below, to get a feel for things.

And some of the complex models look very interesting. Here is an example of a TensorFlow Inception (v3) model.

And of course, this can get very complex (below is the same model, just zoomed out more).

I do think it is a brilliant tool to help understand the flow of things, and what one can do to optimize or fix a model. It is also very helpful for folks who are just starting to learn and appreciate the nuances.

The code is released under an MIT license and you can download it here.

Someone recently asked me: what are some of the use cases / examples of machine learning? Whilst this might seem obvious to some of us, it isn't the case for many businesses and enterprises – despite the fact that they use elements of #ML (and #AI) in their daily lives as consumers.

The discussion gets more interesting based on the specific domain and the possible use cases (understanding, of course, that some might not be sure of the use case – hence the question in the first place). But this did get me thinking, and I wanted to share one of the images we use internally as part of our training that outlines some of the use cases.

These are not 1:1, and many of them can be combined to address various use cases – for example, an **#IoT** device sending in sensor data that triggers a boundary condition (via a **#RulesEngine**), which, in addition to executing one or more business rules, can trigger an alert to a human-in-the-loop (#AugmentingWorkforce) via a **#DigitalAssistant** (say, #Cortana) to make her/him aware, or to confirm some corrective action, and the like. The possibilities are endless – but each of these elements is triggered by AI/ML, and they are still narrow cases that need to be thought of as part of the holistic picture.

Over the last few weeks, I built a self-driving car – which essentially is a remote-control RC car that uses a Raspberry Pi running Python and TensorFlow, implementing an end-to-end convolutional neural network (CNN).

Of course, other than being a bit geeky, I do think this is very cool as a way to understand and get into some of the basic constructs and mechanics around a number of things – web page design, hardware (maker things), and Artificial Intelligence principles.

There are two different models here – they do use the same ESC and controller that can be programmed. My 3D printer did mess up a little (my supports were a little off), which is why you see that the top is not clean.

The sensor and camera are quite basic, and there is provision to add more and do better over time. The Pi isn't powerful enough to train the model – you need another machine for that (preferably an i7 with a GPU). Once trained, you can run the model on the Pi for inference.

This is the second car, which uses slightly different hardware, but the ESC to control the motor and actuators is the same.

The code is simple enough; below is an example of the camera (attached to the Pi) saving the images it is seeing. The tub is the location where the images are saved; these can then be transferred to another machine for training or inference.

```python
import donkey as dk

# initialize the vehicle
V = dk.Vehicle()

# add a camera part
cam = dk.parts.PiCamera()
V.add(cam, outputs=['image'], threaded=True)

# add tub part to record images
tub = dk.parts.Tub(path='~/d2/gettings_started',
                   inputs=['image'],
                   types=['image_array'])
V.add(tub, inputs=['image'])

# start the vehicle's drive loop
V.start(max_loop_count=100)
```

Below you can see the car driving itself around the track, after it had first been trained. The reason it is not driving perfectly is that during training (when I was manually driving it around), I crashed a few times, and as a result the training data was messed up. I needed more time to clean that up and retrain it.

This is based on donkey car – an open-source DIY platform for small-scale self-driving cars. I think it is also perfect for those who have teenagers and slightly older kids to get in and experiment with. You can read up more details on how to go about building this, and the parts needed, here.

If you were trying to pull the latest source code for donkeycar on your Raspberry Pi and got the following error, then your clock is probably off (and I guess some nonce is failing). This can happen if your Pi has been powered off for a while (as in my case) and its clock is off (clock drift is a real thing) :).

```
fatal: unable to access 'https://github.com/wroscoe/donkey/': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
```

To fix this, the following commands work. It seems the Raspberry Pi 3 has NTP disabled by default, and the first command enables it. I also checked the resulting status with the second command, and forced the clock with the third one.

```shell
sudo timedatectl set-ntp True
timedatectl status
sudo timedatectl set-local-rtc true
```

And that should do it; you might need to reboot the Pi just to get it back on, and then you should be able to pull the code off git and deploy your autonomous car.

Can #AI make me look (more) presentable? The jury is still out, I think.

This is called style transfer, where the style/technique from one kind of painting (it could be a photo too) is applied to an image to create a new image. I took this using the built-in camera on my machine, sitting at my desk, and then applied the different kinds of "styles" to it. Each of these styles is a separate #deeplearning model that has learned how to apply the relevant style to a source image.

Specifically, this uses a neural network (#DeepLearning) model called VGG19, which is a 19-layer model running on TensorFlow. Of course, you can export this to an ONNX model, which can then be used in most other run-times and libraries.

This is inspired by Cornell University's paper – Perceptual Losses for Real-Time Style Transfer and Super-Resolution. Below is a snapshot of the VGG code.

```python
import numpy as np
import scipy.io
import tensorflow as tf

def net(data_path, input_image):
    layers = (
        'conv1_1', 'relu1_1', 'conv1_2', 'relu1_2', 'pool1',

        'conv2_1', 'relu2_1', 'conv2_2', 'relu2_2', 'pool2',

        'conv3_1', 'relu3_1', 'conv3_2', 'relu3_2', 'conv3_3',
        'relu3_3', 'conv3_4', 'relu3_4', 'pool3',

        'conv4_1', 'relu4_1', 'conv4_2', 'relu4_2', 'conv4_3',
        'relu4_3', 'conv4_4', 'relu4_4', 'pool4',

        'conv5_1', 'relu5_1', 'conv5_2', 'relu5_2', 'conv5_3',
        'relu5_3', 'conv5_4', 'relu5_4'
    )

    data = scipy.io.loadmat(data_path)
    mean = data['normalization'][0][0][0]
    mean_pixel = np.mean(mean, axis=(0, 1))
    weights = data['layers'][0]

    net = {}
    current = input_image
    for i, name in enumerate(layers):
        kind = name[:4]
        if kind == 'conv':
            kernels, bias = weights[i][0][0][0][0]
            # matconvnet: weights are [width, height, in_channels, out_channels]
            # tensorflow: weights are [height, width, in_channels, out_channels]
            kernels = np.transpose(kernels, (1, 0, 2, 3))
            bias = bias.reshape(-1)
            current = _conv_layer(current, kernels, bias)
        elif kind == 'relu':
            current = tf.nn.relu(current)
        elif kind == 'pool':
            current = _pool_layer(current)
        net[name] = current

    assert len(net) == len(layers)
    return net

def _conv_layer(input, weights, bias):
    conv = tf.nn.conv2d(input, tf.constant(weights), strides=(1, 1, 1, 1),
                        padding='SAME')
    return tf.nn.bias_add(conv, bias)

def _pool_layer(input):
    return tf.nn.max_pool(input, ksize=(1, 2, 2, 1), strides=(1, 2, 2, 1),
                          padding='SAME')
```

If you have an interest in playing with this, you can download the code. Personally, I like the Mosaic style the best.