Yumi's Blog

Grad-CAM with keras-vis


Gradient Class Activation Map (Grad-CAM) for a particular category indicates the discriminative image regions used by the CNN to identify that category.

The goal of this blog is to:

  • understand the concept of Grad-CAM
  • understand that Grad-CAM is a generalization of CAM
  • learn how to compute Grad-CAM with keras-vis
  • implement Grad-CAM from scratch with Keras's backend functions.

References

References in this blog

To set up the same conda environment as mine, follow:

Visualization of deep learning classification model using keras-vis

Setup

In [1]:
import keras
import tensorflow as tf
import vis ## keras-vis
import matplotlib.pyplot as plt
import numpy as np
print("keras      {}".format(keras.__version__))
print("tensorflow {}".format(tf.__version__))
Using TensorFlow backend.
keras      2.2.2
tensorflow 1.10.0

Read in pre-trained model

For this exercise, I will use VGG16.

In [2]:
from keras.applications.vgg16 import VGG16, preprocess_input
model = VGG16(weights='imagenet')
model.summary()
for ilayer, layer in enumerate(model.layers):
    print("{:3.0f} {:10}".format(ilayer, layer.name))
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 224, 224, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0         
_________________________________________________________________
flatten (Flatten)            (None, 25088)             0         
_________________________________________________________________
fc1 (Dense)                  (None, 4096)              102764544 
_________________________________________________________________
fc2 (Dense)                  (None, 4096)              16781312  
_________________________________________________________________
predictions (Dense)          (None, 1000)              4097000   
=================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
_________________________________________________________________
  0 input_1   
  1 block1_conv1
  2 block1_conv2
  3 block1_pool
  4 block2_conv1
  5 block2_conv2
  6 block2_pool
  7 block3_conv1
  8 block3_conv2
  9 block3_conv3
 10 block3_pool
 11 block4_conv1
 12 block4_conv2
 13 block4_conv3
 14 block4_pool
 15 block5_conv1
 16 block5_conv2
 17 block5_conv3
 18 block5_pool
 19 flatten   
 20 fc1       
 21 fc2       
 22 predictions

Download a json file containing ImageNet class names.

wget "https://raw.githubusercontent.com/raghakot/keras-vis/master/resources/imagenet_class_index.json"

Read in the class index json file

In [3]:
import json
CLASS_INDEX = json.load(open("imagenet_class_index.json"))
classlabel  = []
for i_dict in range(len(CLASS_INDEX)):
    classlabel.append(CLASS_INDEX[str(i_dict)][1])
print("N of class={}".format(len(classlabel)))
N of class=1000

Let's read in an image that contains both a dog and a cat. This image should be quite confusing for a model trained on ImageNet, whose images typically contain a single object.

The goal of this exercise is to understand how the VGG16 model makes its classification decision.

In [4]:
from keras.preprocessing.image import load_img, img_to_array
#_img = load_img("duck.jpg",target_size=(224,224))
_img = load_img("dog_and_cat.jpg",target_size=(224,224))
plt.imshow(_img)
plt.show()

Let's predict the object class of this image, and show the top 5 predicted classes.

Unfortunately, the second predicted class does not make any sense; why a towel? The other top predictions, however, are all dog breeds, which makes more sense:

Top 1 predicted class:     Pr(Class=redbone            [index=168])=0.360
Top 2 predicted class:     Pr(Class=bath_towel         [index=434])=0.076
Top 3 predicted class:     Pr(Class=bloodhound         [index=163])=0.076
Top 4 predicted class:     Pr(Class=basenji            [index=253])=0.042
Top 5 predicted class:     Pr(Class=golden_retriever   [index=207])=0.041
In [5]:
img               = img_to_array(_img)
img               = preprocess_input(img)
y_pred            = model.predict(img[np.newaxis,...])
class_idxs_sorted = np.argsort(y_pred.flatten())[::-1]
topNclass         = 5
for i, idx in enumerate(class_idxs_sorted[:topNclass]):
    print("Top {} predicted class:     Pr(Class={:18} [index={}])={:5.3f}".format(
          i + 1,classlabel[idx],idx,y_pred[0,idx]))
Top 1 predicted class:     Pr(Class=redbone            [index=168])=0.360
Top 2 predicted class:     Pr(Class=bath_towel         [index=434])=0.076
Top 3 predicted class:     Pr(Class=bloodhound         [index=163])=0.076
Top 4 predicted class:     Pr(Class=basenji            [index=253])=0.042
Top 5 predicted class:     Pr(Class=golden_retriever   [index=207])=0.041

Grad-CAM

[Figure: the final layers of VGG16]

Last Convolutional Layer of the CNN

Let $A^k \in \mathbb{R}^{u \times v}$ be the $k$th feature map ($k=1,\cdots,K$) of the last convolutional layer; its height is $u$ and its width is $v$. For VGG16, $u=14$, $v=14$, and $K=512$. Grad-CAM uses these $A^k$ to visualize the decision made by the CNN.
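
As a quick check, here is a minimal sketch (not part of the original notebook; it reuses the model and the preprocessed img from the cells above) that extracts these feature maps and prints their shape:

from keras.models import Model
## model that outputs the feature maps A^k of the last convolutional layer
fmap_model = Model(inputs=model.input,
                   outputs=model.get_layer("block5_conv3").output)
A = fmap_model.predict(img[np.newaxis, ...])
print(A.shape)  ## (1, 14, 14, 512), i.e. u=14, v=14, K=512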

Some observations

  • Convolutional feature maps retain spatial information, which is lost in the fully-connected layers.
  • Each kernel represents some visual pattern; for example, one kernel may capture a dog while another captures a bird.
  • Each pixel of a feature map indicates whether the corresponding kernel's visual pattern exists in its receptive field.
  • The last convolutional layer can therefore be thought of as providing the features of a classification model:

    $$ y^c = f(A^1,...,A^{512})$$

Idea

  • Visualizing the final feature maps $A^k$ will show the discriminative regions of the image.
  • The simplest summary of all the $A^k, k=1,...,K$ is a linear combination with some weights.
  • Some feature maps matter more than others for a given class, so the weights should depend on the class of interest $c$:

    $$L^c_{Grad-CAM} \approx \sum_{k=1}^K \alpha_k^c A^k \in \mathbb{R}^{u \times v}$$

So the question is: what should the weights $\alpha_k^c$ be?

Calculating $\alpha_{k}^c$

The gradient of the $c$th class score with respect to the $k$th feature map $A^{k}$,

$$\frac{d y^c}{d A^k_{i,j}}$$

measures the linear effect of the $(i,j)$th pixel of the $k$th feature map on the $c$th class score. Averaging this gradient over $i$ and $j$ therefore summarizes the effect of feature map $k$ on the $c$th class score.

Grad-CAM proposes to use this averaged gradient as the weight of feature map $k$: $$ \alpha_{k}^c = \frac{1}{uv} \sum_{i=1}^u \sum_{j=1}^v \frac{d y^c}{d A^k_{i,j}} $$

Finally, $L^c_{Grad-CAM}$ is defined as: $$ L^c_{Grad-CAM} = ReLU\left( \sum_{k=1}^K \alpha_k^c A^k \right)\in \mathbb{R}^{u \times v} $$

The ReLU is applied to the linear combination of the maps because we are only interested in the features that have a positive influence on the class of interest, i.e., pixels whose intensity should be increased in order to increase $y^c$.

Lastly, we upsample the class activation map to the size of the input image to identify the image regions most relevant to the category of interest.
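
Putting these steps together, here is a toy numpy sketch of the formula; random arrays stand in for the real feature maps and gradients, and the full Keras version appears later in the "Grad-CAM by hand" section:

import numpy as np
from scipy.ndimage.interpolation import zoom
u, v, K  = 14, 14, 512
A        = np.random.rand(u, v, K)    ## stand-in for the feature maps A^k
dy_dA    = np.random.randn(u, v, K)   ## stand-in for the gradients d y^c / d A^k_{i,j}
alpha_c  = dy_dA.mean(axis=(0, 1))                    ## alpha_k^c: one weight per feature map
Lc       = np.maximum((A * alpha_c).sum(axis=-1), 0)  ## ReLU of the weighted sum, shape (u, v)
heatmap  = zoom(Lc, (224. / u, 224. / v))             ## upsample to the 224 x 224 input size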

Note: As with the saliency map, the softmax activation of the final layer is replaced with a linear activation. See the discussion in the previous blog, Saliency Map with keras-vis.

Understanding Grad-CAM in a special case: Network with Global Average Pooling


GoogLeNet and MobileNet belong to this group of networks. Such a network consists largely of convolutional layers; just before the final output layer, global average pooling is applied to the convolutional feature maps, and the pooled values are used as features for a fully-connected layer that produces the desired output (categorical or otherwise).

Given the simple connectivity structure of this network, Grad-CAM has an easy interpretation.

The weights $\alpha_k^c$ turn out to be proportional to the weights of the final fully-connected layer, as shown below.
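
Here is a short version of that argument. With global average pooling followed by a single fully-connected layer with weights $w_k^c$, the class score is

$$ y^c = \sum_{k=1}^K w_k^c \cdot \frac{1}{uv} \sum_{i=1}^u \sum_{j=1}^v A^k_{i,j} + b^c $$

so the gradient with respect to each pixel of the $k$th feature map is the constant $w_k^c / (uv)$, and therefore

$$ \alpha_k^c = \frac{1}{uv} \sum_{i=1}^u \sum_{j=1}^v \frac{d y^c}{d A^k_{i,j}} = \frac{w_k^c}{uv}. $$

That is, $\alpha_k^c$ is proportional to the fully-connected weight $w_k^c$, so in this special case Grad-CAM reduces to CAM up to a constant factor that does not affect the visualization.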

Now that we understand the concept of Grad-CAM, let's use it through the keras-vis API. First, we replace the final softmax activation with a linear activation.

In [6]:
from vis.utils import utils
# Utility to search for layer index by name. 
# Alternatively we can specify this as -1 since it corresponds to the last layer.
layer_idx = utils.find_layer_idx(model, 'predictions')
# Swap softmax with linear
model.layers[layer_idx].activation = keras.activations.linear
model = utils.apply_modifications(model)
/Users/yumikondo/anaconda3/envs/explainableAI/lib/python3.5/site-packages/keras/engine/saving.py:269: UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.
  warnings.warn('No training configuration found in save file: '

Calculate Grad-CAM

In [7]:
from vis.visualization import visualize_cam
penultimate_layer_idx = utils.find_layer_idx(model, "block5_conv3") 
class_idx  = class_idxs_sorted[0]
seed_input = img
grad_top1  = visualize_cam(model, layer_idx, class_idx, seed_input, 
                           penultimate_layer_idx = penultimate_layer_idx,
                           backprop_modifier     = None,
                           grad_modifier         = None)

Visualization

In [8]:
def plot_map(grads):
    fig, axes = plt.subplots(1,2,figsize=(14,5))
    axes[0].imshow(_img)
    axes[1].imshow(_img)
    i = axes[1].imshow(grads,cmap="jet",alpha=0.8)
    fig.colorbar(i)
    plt.suptitle("Pr(class={}) = {:5.2f}".format(
                      classlabel[class_idx],
                      y_pred[0,class_idx]))
plot_map(grad_top1)

Observations

  • The model actually focuses mostly on the dog rather than the cat, probably because many of the ImageNet classes are dog breeds.
  • The Grad-CAM intensity for the towel class is spread over the entire image.
In [9]:
for class_idx in class_idxs_sorted[:topNclass]:
    grads  = visualize_cam(model,layer_idx,class_idx, seed_input,
                           penultimate_layer_idx = penultimate_layer_idx,
                           backprop_modifier     = None,
                           grad_modifier         = None)
    plot_map(grads)

Grad-CAM by hand

Now that we know how Grad-CAM works, let's implement it by hand.

In [10]:
import keras.backend as K
from scipy.ndimage.interpolation import zoom
## select class of interest
class_idx           = class_idxs_sorted[0]
## feature maps from the final convolutional layer
final_fmap_index    = utils.find_layer_idx(model, 'block5_conv3')
penultimate_output  = model.layers[final_fmap_index].output

## define the derivative d y^c / d A^k, k = 1,...,512
layer_input          = model.input
## This model must already use linear activation for the final layer
loss                 = model.layers[layer_idx].output[...,class_idx]
grad_wrt_fmap        = K.gradients(loss,penultimate_output)[0]

## create a function that evaluates the feature maps and their gradient for a given input
# This function accepts numpy arrays
grad_wrt_fmap_fn     = K.function([layer_input,K.learning_phase()],
                                  [penultimate_output,grad_wrt_fmap])

## evaluate the derivative_fn
fmap_eval, grad_wrt_fmap_eval = grad_wrt_fmap_fn([img[np.newaxis,...],0])

# For numerical stability. Very small grad values along with small penultimate_output_value can cause
# w * penultimate_output_value to zero out, even for reasonable fp precision of float32.
grad_wrt_fmap_eval /= (np.max(grad_wrt_fmap_eval) + K.epsilon())

print(grad_wrt_fmap_eval.shape)
alpha_k_c           = grad_wrt_fmap_eval.mean(axis=(0,1,2)).reshape((1,1,1,-1))
Lc_Grad_CAM         = np.maximum(np.sum(fmap_eval*alpha_k_c,axis=-1),0).squeeze()

## upsample the class activation map to the size of the input image
scale_factor        = np.array(img.shape[:-1])/np.array(Lc_Grad_CAM.shape)
_grad_CAM           = zoom(Lc_Grad_CAM,scale_factor)
## normalize to range between 0 and 1
arr_min, arr_max    = np.min(_grad_CAM), np.max(_grad_CAM)
grad_CAM            = (_grad_CAM - arr_min) / (arr_max - arr_min + K.epsilon())
(1, 14, 14, 512)

Visualize the 14 x 14 $L^c_{Grad-CAM}$

In [11]:
plt.imshow(Lc_Grad_CAM)
plt.show()

Visualize the weights $\alpha_k^c$

Visualizing the weights shows which feature map is most important for this class.

In [12]:
plt.figure(figsize=(20,5))
plt.plot(alpha_k_c.flatten())
plt.xlabel("Feature Map at Final Convolusional Layer")
plt.ylabel("alpha_k^c")
plt.title("The {}th feature map has the largest weight alpha^k_c".format(
    np.argmax(alpha_k_c.flatten())))
plt.show()

Using keras-vis's visualize_activation(), we can generate an image that maximizes the activation of this most influential feature map. Does it make any sense?

In [13]:
from vis.visualization import visualize_activation
activation_max = visualize_activation(model, 
                                      layer_idx      = final_fmap_index, 
                                      max_iter       = 100,
                                      verbose        = True,
                                      filter_indices = 155)
print(activation_max.shape)
plt.imshow(activation_max)
plt.show()
Iteration: 1, named_losses: [('ActivationMax Loss', -0.35818815),
 ('L-6.0 Norm Loss', 0.063259095),
 ('TV(2.0) Loss', 6439.9604)], overall loss: 6439.66552734375
Iteration: 2, named_losses: [('ActivationMax Loss', -0.44037676),
 ('L-6.0 Norm Loss', 0.0626879),
 ('TV(2.0) Loss', 3410.5918)], overall loss: 3410.214111328125
Iteration: 3, named_losses: [('ActivationMax Loss', -0.9422296),
 ('L-6.0 Norm Loss', 0.062351536),
 ('TV(2.0) Loss', 1869.8193)], overall loss: 1868.939453125
Iteration: 4, named_losses: [('ActivationMax Loss', -2.8624523),
 ('L-6.0 Norm Loss', 0.062134158),
 ('TV(2.0) Loss', 993.7063)], overall loss: 990.906005859375
Iteration: 5, named_losses: [('ActivationMax Loss', -4.975151),
 ('L-6.0 Norm Loss', 0.061991587),
 ('TV(2.0) Loss', 487.30948)], overall loss: 482.3963317871094
Iteration: 6, named_losses: [('ActivationMax Loss', -6.316314),
 ('L-6.0 Norm Loss', 0.06189976),
 ('TV(2.0) Loss', 206.2032)], overall loss: 199.94879150390625
Iteration: 7, named_losses: [('ActivationMax Loss', -6.197843),
 ('L-6.0 Norm Loss', 0.06184596),
 ('TV(2.0) Loss', 73.16422)], overall loss: 67.02822875976562
Iteration: 8, named_losses: [('ActivationMax Loss', -5.2980533),
 ('L-6.0 Norm Loss', 0.061829194),
 ('TV(2.0) Loss', 72.879616)], overall loss: 67.64339447021484
Iteration: 9, named_losses: [('ActivationMax Loss', -8.05041),
 ('L-6.0 Norm Loss', 0.061826438),
 ('TV(2.0) Loss', 103.84805)], overall loss: 95.85946655273438
Iteration: 10, named_losses: [('ActivationMax Loss', -10.212074),
 ('L-6.0 Norm Loss', 0.061817415),
 ('TV(2.0) Loss', 60.834267)], overall loss: 50.68400955200195
Iteration: 11, named_losses: [('ActivationMax Loss', -11.684986),
 ('L-6.0 Norm Loss', 0.061816312),
 ('TV(2.0) Loss', 74.29264)], overall loss: 62.669471740722656
Iteration: 12, named_losses: [('ActivationMax Loss', -12.627286),
 ('L-6.0 Norm Loss', 0.06181156),
 ('TV(2.0) Loss', 44.600338)], overall loss: 32.03486251831055
Iteration: 13, named_losses: [('ActivationMax Loss', -13.346134),
 ('L-6.0 Norm Loss', 0.06181203),
 ('TV(2.0) Loss', 56.98193)], overall loss: 43.697608947753906
Iteration: 14, named_losses: [('ActivationMax Loss', -15.427169),
 ('L-6.0 Norm Loss', 0.06181017),
 ('TV(2.0) Loss', 36.61295)], overall loss: 21.247591018676758
Iteration: 15, named_losses: [('ActivationMax Loss', -16.188047),
 ('L-6.0 Norm Loss', 0.06181123),
 ('TV(2.0) Loss', 48.453632)], overall loss: 32.327396392822266
Iteration: 16, named_losses: [('ActivationMax Loss', -18.483973),
 ('L-6.0 Norm Loss', 0.06180995),
 ('TV(2.0) Loss', 32.579414)], overall loss: 14.157251358032227
Iteration: 17, named_losses: [('ActivationMax Loss', -18.83963),
 ('L-6.0 Norm Loss', 0.061812036),
 ('TV(2.0) Loss', 44.28186)], overall loss: 25.50404167175293
Iteration: 18, named_losses: [('ActivationMax Loss', -20.962614),
 ('L-6.0 Norm Loss', 0.061811447),
 ('TV(2.0) Loss', 29.589949)], overall loss: 8.689146041870117
Iteration: 19, named_losses: [('ActivationMax Loss', -21.288435),
 ('L-6.0 Norm Loss', 0.061813705),
 ('TV(2.0) Loss', 40.55387)], overall loss: 19.32724952697754
Iteration: 20, named_losses: [('ActivationMax Loss', -24.21236),
 ('L-6.0 Norm Loss', 0.06181366),
 ('TV(2.0) Loss', 30.18169)], overall loss: 6.0311431884765625
Iteration: 21, named_losses: [('ActivationMax Loss', -24.391216),
 ('L-6.0 Norm Loss', 0.061815582),
 ('TV(2.0) Loss', 37.909035)], overall loss: 13.579633712768555
Iteration: 22, named_losses: [('ActivationMax Loss', -26.580984),
 ('L-6.0 Norm Loss', 0.061816238),
 ('TV(2.0) Loss', 30.319553)], overall loss: 3.800386428833008
Iteration: 23, named_losses: [('ActivationMax Loss', -25.376999),
 ('L-6.0 Norm Loss', 0.061817333),
 ('TV(2.0) Loss', 34.10927)], overall loss: 8.794086456298828
Iteration: 24, named_losses: [('ActivationMax Loss', -27.841225),
 ('L-6.0 Norm Loss', 0.061818667),
 ('TV(2.0) Loss', 30.031336)], overall loss: 2.2519302368164062
Iteration: 25, named_losses: [('ActivationMax Loss', -28.266108),
 ('L-6.0 Norm Loss', 0.061819986),
 ('TV(2.0) Loss', 31.588799)], overall loss: 3.384510040283203
Iteration: 26, named_losses: [('ActivationMax Loss', -27.55112),
 ('L-6.0 Norm Loss', 0.061821837),
 ('TV(2.0) Loss', 31.085579)], overall loss: 3.5962791442871094
Iteration: 27, named_losses: [('ActivationMax Loss', -30.662722),
 ('L-6.0 Norm Loss', 0.06182314),
 ('TV(2.0) Loss', 29.62585)], overall loss: -0.9750480651855469
Iteration: 28, named_losses: [('ActivationMax Loss', -30.626833),
 ('L-6.0 Norm Loss', 0.061825033),
 ('TV(2.0) Loss', 33.29346)], overall loss: 2.728452682495117
Iteration: 29, named_losses: [('ActivationMax Loss', -33.007614),
 ('L-6.0 Norm Loss', 0.061825912),
 ('TV(2.0) Loss', 28.378563)], overall loss: -4.567226409912109
Iteration: 30, named_losses: [('ActivationMax Loss', -32.19169),
 ('L-6.0 Norm Loss', 0.06182814),
 ('TV(2.0) Loss', 33.792805)], overall loss: 1.6629447937011719
Iteration: 31, named_losses: [('ActivationMax Loss', -35.641136),
 ('L-6.0 Norm Loss', 0.061828982),
 ('TV(2.0) Loss', 27.85439)], overall loss: -7.724918365478516
Iteration: 32, named_losses: [('ActivationMax Loss', -35.025715),
 ('L-6.0 Norm Loss', 0.06183143),
 ('TV(2.0) Loss', 33.384323)], overall loss: -1.579559326171875
Iteration: 33, named_losses: [('ActivationMax Loss', -36.84504),
 ('L-6.0 Norm Loss', 0.06183236),
 ('TV(2.0) Loss', 29.606686)], overall loss: -7.176521301269531
Iteration: 34, named_losses: [('ActivationMax Loss', -36.097088),
 ('L-6.0 Norm Loss', 0.061834313),
 ('TV(2.0) Loss', 32.907692)], overall loss: -3.1275634765625
Iteration: 35, named_losses: [('ActivationMax Loss', -38.90753),
 ('L-6.0 Norm Loss', 0.06183611),
 ('TV(2.0) Loss', 27.70351)], overall loss: -11.14218521118164
Iteration: 36, named_losses: [('ActivationMax Loss', -37.883377),
 ('L-6.0 Norm Loss', 0.061838705),
 ('TV(2.0) Loss', 33.296364)], overall loss: -4.525173187255859
Iteration: 37, named_losses: [('ActivationMax Loss', -40.789547),
 ('L-6.0 Norm Loss', 0.061839797),
 ('TV(2.0) Loss', 29.328718)], overall loss: -11.398988723754883
Iteration: 38, named_losses: [('ActivationMax Loss', -41.063004),
 ('L-6.0 Norm Loss', 0.06184205),
 ('TV(2.0) Loss', 34.58162)], overall loss: -6.4195404052734375
Iteration: 39, named_losses: [('ActivationMax Loss', -41.856506),
 ('L-6.0 Norm Loss', 0.061843567),
 ('TV(2.0) Loss', 31.153748)], overall loss: -10.640914916992188
Iteration: 40, named_losses: [('ActivationMax Loss', -42.543537),
 ('L-6.0 Norm Loss', 0.061845493),
 ('TV(2.0) Loss', 32.888245)], overall loss: -9.593448638916016
Iteration: 41, named_losses: [('ActivationMax Loss', -44.160706),
 ('L-6.0 Norm Loss', 0.061846614),
 ('TV(2.0) Loss', 32.101536)], overall loss: -11.997322082519531
Iteration: 42, named_losses: [('ActivationMax Loss', -44.098034),
 ('L-6.0 Norm Loss', 0.061848957),
 ('TV(2.0) Loss', 32.85674)], overall loss: -11.179447174072266
Iteration: 43, named_losses: [('ActivationMax Loss', -45.21799),
 ('L-6.0 Norm Loss', 0.06185035),
 ('TV(2.0) Loss', 32.25161)], overall loss: -12.904529571533203
Iteration: 44, named_losses: [('ActivationMax Loss', -45.37761),
 ('L-6.0 Norm Loss', 0.061852522),
 ('TV(2.0) Loss', 32.02928)], overall loss: -13.286476135253906
Iteration: 45, named_losses: [('ActivationMax Loss', -46.837494),
 ('L-6.0 Norm Loss', 0.06185354),
 ('TV(2.0) Loss', 33.078358)], overall loss: -13.697280883789062
Iteration: 46, named_losses: [('ActivationMax Loss', -46.50571),
 ('L-6.0 Norm Loss', 0.06185583),
 ('TV(2.0) Loss', 33.58957)], overall loss: -12.854286193847656
Iteration: 47, named_losses: [('ActivationMax Loss', -47.61179),
 ('L-6.0 Norm Loss', 0.06185687),
 ('TV(2.0) Loss', 32.168194)], overall loss: -15.38174057006836
Iteration: 48, named_losses: [('ActivationMax Loss', -47.305046),
 ('L-6.0 Norm Loss', 0.061859705),
 ('TV(2.0) Loss', 32.40638)], overall loss: -14.836807250976562
Iteration: 49, named_losses: [('ActivationMax Loss', -49.01131),
 ('L-6.0 Norm Loss', 0.061861254),
 ('TV(2.0) Loss', 32.785862)], overall loss: -16.163585662841797
Iteration: 50, named_losses: [('ActivationMax Loss', -47.58678),
 ('L-6.0 Norm Loss', 0.061862793),
 ('TV(2.0) Loss', 33.474606)], overall loss: -14.050312042236328
Iteration: 51, named_losses: [('ActivationMax Loss', -51.33822),
 ('L-6.0 Norm Loss', 0.061865244),
 ('TV(2.0) Loss', 32.75823)], overall loss: -18.518123626708984
Iteration: 52, named_losses: [('ActivationMax Loss', -49.49416),
 ('L-6.0 Norm Loss', 0.061866812),
 ('TV(2.0) Loss', 35.458675)], overall loss: -13.973617553710938
Iteration: 53, named_losses: [('ActivationMax Loss', -51.18779),
 ('L-6.0 Norm Loss', 0.061869424),
 ('TV(2.0) Loss', 31.446123)], overall loss: -19.67979621887207
Iteration: 54, named_losses: [('ActivationMax Loss', -50.919502),
 ('L-6.0 Norm Loss', 0.06187116),
 ('TV(2.0) Loss', 35.018314)], overall loss: -15.839317321777344
Iteration: 55, named_losses: [('ActivationMax Loss', -51.722084),
 ('L-6.0 Norm Loss', 0.061873145),
 ('TV(2.0) Loss', 32.902523)], overall loss: -18.757686614990234
Iteration: 56, named_losses: [('ActivationMax Loss', -53.026726),
 ('L-6.0 Norm Loss', 0.06187505),
 ('TV(2.0) Loss', 33.565887)], overall loss: -19.398963928222656
Iteration: 57, named_losses: [('ActivationMax Loss', -51.47165),
 ('L-6.0 Norm Loss', 0.06187718),
 ('TV(2.0) Loss', 34.065628)], overall loss: -17.34414291381836
Iteration: 58, named_losses: [('ActivationMax Loss', -54.773613),
 ('L-6.0 Norm Loss', 0.061879627),
 ('TV(2.0) Loss', 33.848183)], overall loss: -20.86355209350586
Iteration: 59, named_losses: [('ActivationMax Loss', -52.233387),
 ('L-6.0 Norm Loss', 0.06188159),
 ('TV(2.0) Loss', 33.9559)], overall loss: -18.215606689453125
Iteration: 60, named_losses: [('ActivationMax Loss', -54.713715),
 ('L-6.0 Norm Loss', 0.061883904),
 ('TV(2.0) Loss', 33.561916)], overall loss: -21.089916229248047
Iteration: 61, named_losses: [('ActivationMax Loss', -54.04194),
 ('L-6.0 Norm Loss', 0.061886266),
 ('TV(2.0) Loss', 34.24786)], overall loss: -19.732192993164062
Iteration: 62, named_losses: [('ActivationMax Loss', -56.14385),
 ('L-6.0 Norm Loss', 0.06188884),
 ('TV(2.0) Loss', 34.826656)], overall loss: -21.25530242919922
Iteration: 63, named_losses: [('ActivationMax Loss', -55.109047),
 ('L-6.0 Norm Loss', 0.06189027),
 ('TV(2.0) Loss', 35.316498)], overall loss: -19.73065948486328
Iteration: 64, named_losses: [('ActivationMax Loss', -57.445473),
 ('L-6.0 Norm Loss', 0.06189284),
 ('TV(2.0) Loss', 33.987045)], overall loss: -23.396533966064453
Iteration: 65, named_losses: [('ActivationMax Loss', -56.995464),
 ('L-6.0 Norm Loss', 0.061894476),
 ('TV(2.0) Loss', 35.909576)], overall loss: -21.02399444580078
Iteration: 66, named_losses: [('ActivationMax Loss', -57.921825),
 ('L-6.0 Norm Loss', 0.06189742),
 ('TV(2.0) Loss', 34.957386)], overall loss: -22.902542114257812
Iteration: 67, named_losses: [('ActivationMax Loss', -57.928043),
 ('L-6.0 Norm Loss', 0.06189916),
 ('TV(2.0) Loss', 36.037025)], overall loss: -21.829120635986328
Iteration: 68, named_losses: [('ActivationMax Loss', -59.13429),
 ('L-6.0 Norm Loss', 0.061902106),
 ('TV(2.0) Loss', 34.52235)], overall loss: -24.550037384033203
Iteration: 69, named_losses: [('ActivationMax Loss', -57.803562),
 ('L-6.0 Norm Loss', 0.06190379),
 ('TV(2.0) Loss', 36.478218)], overall loss: -21.263439178466797
Iteration: 70, named_losses: [('ActivationMax Loss', -58.864944),
 ('L-6.0 Norm Loss', 0.061906267),
 ('TV(2.0) Loss', 34.874523)], overall loss: -23.928516387939453
Iteration: 71, named_losses: [('ActivationMax Loss', -59.66284),
 ('L-6.0 Norm Loss', 0.061907936),
 ('TV(2.0) Loss', 36.819305)], overall loss: -22.781627655029297
Iteration: 72, named_losses: [('ActivationMax Loss', -60.796066),
 ('L-6.0 Norm Loss', 0.061910108),
 ('TV(2.0) Loss', 36.40676)], overall loss: -24.327396392822266
Iteration: 73, named_losses: [('ActivationMax Loss', -59.30462),
 ('L-6.0 Norm Loss', 0.061912198),
 ('TV(2.0) Loss', 37.02406)], overall loss: -22.218647003173828
Iteration: 74, named_losses: [('ActivationMax Loss', -60.653553),
 ('L-6.0 Norm Loss', 0.061914705),
 ('TV(2.0) Loss', 35.32113)], overall loss: -25.2705078125
Iteration: 75, named_losses: [('ActivationMax Loss', -60.822075),
 ('L-6.0 Norm Loss', 0.061916765),
 ('TV(2.0) Loss', 36.979866)], overall loss: -23.780292510986328
Iteration: 76, named_losses: [('ActivationMax Loss', -61.522797),
 ('L-6.0 Norm Loss', 0.061918672),
 ('TV(2.0) Loss', 36.057228)], overall loss: -25.403648376464844
Iteration: 77, named_losses: [('ActivationMax Loss', -61.752033),
 ('L-6.0 Norm Loss', 0.061921224),
 ('TV(2.0) Loss', 37.473194)], overall loss: -24.2169189453125
Iteration: 78, named_losses: [('ActivationMax Loss', -61.53927),
 ('L-6.0 Norm Loss', 0.061922748),
 ('TV(2.0) Loss', 36.889294)], overall loss: -24.588050842285156
Iteration: 79, named_losses: [('ActivationMax Loss', -62.556854),
 ('L-6.0 Norm Loss', 0.06192559),
 ('TV(2.0) Loss', 36.504402)], overall loss: -25.990528106689453
Iteration: 80, named_losses: [('ActivationMax Loss', -62.450703),
 ('L-6.0 Norm Loss', 0.06192677),
 ('TV(2.0) Loss', 38.38316)], overall loss: -24.005615234375
Iteration: 81, named_losses: [('ActivationMax Loss', -64.686356),
 ('L-6.0 Norm Loss', 0.06192929),
 ('TV(2.0) Loss', 37.06512)], overall loss: -27.559307098388672
Iteration: 82, named_losses: [('ActivationMax Loss', -63.3728),
 ('L-6.0 Norm Loss', 0.061930973),
 ('TV(2.0) Loss', 38.500034)], overall loss: -24.810832977294922
Iteration: 83, named_losses: [('ActivationMax Loss', -64.334885),
 ('L-6.0 Norm Loss', 0.06193363),
 ('TV(2.0) Loss', 38.02253)], overall loss: -26.25041961669922
Iteration: 84, named_losses: [('ActivationMax Loss', -63.97051),
 ('L-6.0 Norm Loss', 0.06193497),
 ('TV(2.0) Loss', 38.001305)], overall loss: -25.907268524169922
Iteration: 85, named_losses: [('ActivationMax Loss', -65.736946),
 ('L-6.0 Norm Loss', 0.06193744),
 ('TV(2.0) Loss', 38.793644)], overall loss: -26.881366729736328
Iteration: 86, named_losses: [('ActivationMax Loss', -63.322456),
 ('L-6.0 Norm Loss', 0.061939005),
 ('TV(2.0) Loss', 38.0356)], overall loss: -25.224918365478516
Iteration: 87, named_losses: [('ActivationMax Loss', -66.317184),
 ('L-6.0 Norm Loss', 0.061940882),
 ('TV(2.0) Loss', 36.849716)], overall loss: -29.40552520751953
Iteration: 88, named_losses: [('ActivationMax Loss', -63.686314),
 ('L-6.0 Norm Loss', 0.061943777),
 ('TV(2.0) Loss', 39.282154)], overall loss: -24.34221649169922
Iteration: 89, named_losses: [('ActivationMax Loss', -67.18377),
 ('L-6.0 Norm Loss', 0.061945647),
 ('TV(2.0) Loss', 37.73858)], overall loss: -29.38324737548828
Iteration: 90, named_losses: [('ActivationMax Loss', -65.48782),
 ('L-6.0 Norm Loss', 0.061947286),
 ('TV(2.0) Loss', 39.200436)], overall loss: -26.22543716430664
Iteration: 91, named_losses: [('ActivationMax Loss', -68.209206),
 ('L-6.0 Norm Loss', 0.0619492),
 ('TV(2.0) Loss', 38.887302)], overall loss: -29.259952545166016
Iteration: 92, named_losses: [('ActivationMax Loss', -66.36849),
 ('L-6.0 Norm Loss', 0.06195143),
 ('TV(2.0) Loss', 39.84417)], overall loss: -26.462371826171875
Iteration: 93, named_losses: [('ActivationMax Loss', -68.88901),
 ('L-6.0 Norm Loss', 0.06195292),
 ('TV(2.0) Loss', 38.19496)], overall loss: -30.632095336914062
Iteration: 94, named_losses: [('ActivationMax Loss', -65.81315),
 ('L-6.0 Norm Loss', 0.06195561),
 ('TV(2.0) Loss', 39.864742)], overall loss: -25.88644790649414
Iteration: 95, named_losses: [('ActivationMax Loss', -68.622635),
 ('L-6.0 Norm Loss', 0.061956417),
 ('TV(2.0) Loss', 37.523026)], overall loss: -31.03765106201172
Iteration: 96, named_losses: [('ActivationMax Loss', -66.385475),
 ('L-6.0 Norm Loss', 0.06195886),
 ('TV(2.0) Loss', 40.00013)], overall loss: -26.323387145996094
Iteration: 97, named_losses: [('ActivationMax Loss', -68.71926),
 ('L-6.0 Norm Loss', 0.061961006),
 ('TV(2.0) Loss', 38.07462)], overall loss: -30.582683563232422
Iteration: 98, named_losses: [('ActivationMax Loss', -67.866035),
 ('L-6.0 Norm Loss', 0.061963297),
 ('TV(2.0) Loss', 39.65763)], overall loss: -28.146438598632812
Iteration: 99, named_losses: [('ActivationMax Loss', -69.81586),
 ('L-6.0 Norm Loss', 0.061966203),
 ('TV(2.0) Loss', 39.568977)], overall loss: -30.184913635253906
Iteration: 100, named_losses: [('ActivationMax Loss', -67.02353),
 ('L-6.0 Norm Loss', 0.061967902),
 ('TV(2.0) Loss', 39.62816)], overall loss: -27.333404541015625
(224, 224, 3)

Make sure that the hand-calculated Grad-CAM is the same as the output of keras-vis.

In [14]:
assert np.all(np.abs(grad_CAM  - grad_top1) < 0.0001)
In [15]:
plot_map(grad_CAM)
