This page was generated from examples/cd_text_imdb.ipynb.

Text drift detection on IMDB movie reviews

Method

We detect drift on text data using both the Maximum Mean Discrepancy (MMD) and Kolmogorov-Smirnov (K-S) detectors. In this example notebook we focus on detecting covariate shift \(\Delta p(x)\), since detecting predicted label distribution drift does not differ from other modalities (check K-S and MMD drift on CIFAR-10).
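
For reference: the K-S detector applies a two-sample Kolmogorov-Smirnov test to each dimension of the (preprocessed) features, with test statistic \(D = \sup_x |F_{\text{ref}}(x) - F(x)|\) where \(F_{\text{ref}}\) and \(F\) denote the empirical CDFs of the reference and test samples, and aggregates the feature-wise p-values with a multiple-testing correction. The MMD detector instead runs a permutation test on an estimate of the squared maximum mean discrepancy \(\text{MMD}^2(p, q) = \mathbb{E}[k(x, x')] - 2\,\mathbb{E}[k(x, y)] + \mathbb{E}[k(y, y')]\) for a kernel \(k\), by default an RBF kernel.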

Picking up input data drift \(\Delta p(x)\) is, however, a little more involved for text. With tabular or image data, we can either apply the two-sample hypothesis test directly on the input or run the test after a preprocessing step, for instance with a randomly initialized encoder as proposed in Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift (there called an Untrained AutoEncoder or UAE). This is not as straightforward for text, whether in string or tokenized format, since neither directly represents the semantics of the input.

As a result, we extract (contextual) embeddings for the text and detect drift on those. This procedure has a significant impact on the type of drift we detect. Strictly speaking, we are no longer detecting \(\Delta p(x)\), since the whole training procedure (objective function, training data, etc.) of the (pre)trained embeddings has an impact on the embeddings we extract.

The library contains functionality to leverage pre-trained embeddings from HuggingFace's transformers package, but also allows you to easily use your own embeddings of choice. Both options are illustrated with examples in this notebook.

Backend

The method works with both the PyTorch and TensorFlow frameworks for the statistical tests and preprocessing steps. Alibi Detect does not, however, install PyTorch for you. Check the PyTorch docs for how to do this.
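
For example, a basic installation can usually be done with pip (a sketch; check the PyTorch docs for the exact command matching your OS, package manager and CUDA version):

!pip install torch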

Dataset

A binary sentiment classification dataset containing \(25,000\) movie reviews for training and \(25,000\) for testing. Install the nlp library to fetch the dataset:

[ ]:
!pip install nlp
[1]:
import nlp
import numpy as np
import tensorflow as tf
from typing import Tuple
from transformers import AutoTokenizer
from alibi_detect.cd import KSDrift, MMDDrift
from alibi_detect.utils.saving import save_detector, load_detector

Load tokenizer

[2]:
model_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(model_name)

Load data

[3]:
def load_dataset(dataset: str, split: str = 'test') -> Tuple[np.ndarray, np.ndarray]:
    # fetch the requested split as numpy arrays of review texts and labels
    data = nlp.load_dataset(dataset)
    X, y = [], []
    for x in data[split]:
        X.append(x['text'])
        y.append(x['label'])
    X = np.array(X)
    y = np.array(y)
    return X, y
[4]:
X, y = load_dataset('imdb', split='train')
print(X.shape, y.shape)
(25000,) (25000,)

Let's take a look at a negative and a positive review, respectively:

[5]:
labels = ['Negative', 'Positive']
print(labels[y[-1]])
print(X[-1])
Negative
This is one of the dumbest films, I've ever seen. It rips off nearly ever type of thriller and manages to make a mess of them all.<br /><br />There's not a single good line or character in the whole mess. If there was a plot, it was an afterthought and as far as acting goes, there's nothing good to say so Ill say nothing. I honestly cant understand how this type of nonsense gets produced and actually released, does somebody somewhere not at some stage think, 'Oh my god this really is a load of shite' and call it a day. Its crap like this that has people downloading illegally, the trailer looks like a completely different film, at least if you have download it, you haven't wasted your time or money Don't waste your time, this is painful.
[6]:
print(labels[y[2]])
print(X[2])
Positive
Brilliant over-acting by Lesley Ann Warren. Best dramatic hobo lady I have ever seen, and love scenes in clothes warehouse are second to none. The corn on face is a classic, as good as anything in Blazing Saddles. The take on lawyers is also superb. After being accused of being a turncoat, selling out his boss, and being dishonest the lawyer of Pepto Bolt shrugs indifferently "I'm a lawyer" he says. Three funny words. Jeffrey Tambor, a favorite from the later Larry Sanders show, is fantastic here too as a mad millionaire who wants to crush the ghetto. His character is more malevolent than usual. The hospital scene, and the scene where the homeless invade a demolition site, are all-time classics. Look for the legs scene and the two big diggers fighting (one bleeds). This movie gets better each time I see it (which is quite often).

We split the original training set into a reference dataset and a dataset which should not be rejected under the H0 of the statistical test. We also create imbalanced datasets and construct perturbed datasets by injecting selected words into instances of the reference set.

[7]:
def random_sample(X: np.ndarray, y: np.ndarray, proba_zero: float, n: int) -> Tuple[np.ndarray, np.ndarray]:
    # sample n instances with a fraction proba_zero of negative (label 0) instances
    if len(y.shape) == 1:  # class labels
        idx_0 = np.where(y == 0)[0]
        idx_1 = np.where(y == 1)[0]
    else:  # one-hot encoded labels
        idx_0 = np.where(y[:, 0] == 1)[0]
        idx_1 = np.where(y[:, 1] == 1)[0]
    n_0, n_1 = int(n * proba_zero), int(n * (1 - proba_zero))
    idx_0_out = np.random.choice(idx_0, n_0, replace=False)
    idx_1_out = np.random.choice(idx_1, n_1, replace=False)
    X_out = np.concatenate([X[idx_0_out], X[idx_1_out]])
    y_out = np.concatenate([y[idx_0_out], y[idx_1_out]])
    return X_out, y_out


def padding_last(x: np.ndarray, seq_len: int) -> Tuple[int, int]:
    try:  # avoid replacing the padding tokens at the end of the sequence
        last_token = np.where(x == 0)[0][0]
    except IndexError:  # no padding
        last_token = seq_len - 1
    return 1, last_token


def padding_first(x: np.ndarray, seq_len: int) -> Tuple[int, int]:
    try:  # avoid replacing the padding tokens at the start of the sequence
        first_token = np.where(x == 0)[0][-1] + 2
    except IndexError:  # no padding
        first_token = 0
    return first_token, seq_len - 1


def inject_word(token: int, X: np.ndarray, perc_chg: float, padding: str = 'last') -> np.ndarray:
    seq_len = X.shape[1]
    n_chg = int(perc_chg * .01 * seq_len)  # nb of tokens to change per instance
    X_cp = X.copy()
    for i in range(X.shape[0]):
        if padding == 'last':
            first_token, last_token = padding_last(X_cp[i, :], seq_len)
        else:
            first_token, last_token = padding_first(X_cp[i, :], seq_len)
        if last_token <= n_chg:  # not enough non-padding tokens to sample from
            choice_len = seq_len
        else:
            choice_len = last_token
        idx = np.random.choice(np.arange(first_token, choice_len), n_chg, replace=False)
        X_cp[i, idx] = token
    return X_cp

Reference, H0 and imbalanced data:

[8]:
# proba_zero = fraction with label 0 (= negative sentiment)
n_sample = 1000
X_ref = random_sample(X, y, proba_zero=.5, n=n_sample)[0]
X_h0 = random_sample(X, y, proba_zero=.5, n=n_sample)[0]
n_imb = [.1, .9]
X_imb = {frac: random_sample(X, y, proba_zero=frac, n=n_sample)[0] for frac in n_imb}

Inject words into the reference data:

[9]:
words = ['fantastic', 'good', 'bad', 'horrible']
perc_chg = [1., 5.]  # % of tokens to change in an instance

# token ids of the words to inject (strip the [CLS] and [SEP] tokens)
words_tf = tokenizer(words)['input_ids']
words_tf = [token[1:-1][0] for token in words_tf]
max_len = 100
tokens = tokenizer(list(X_ref), padding='max_length', truncation=True,
                   max_length=max_len, return_tensors='tf')
X_word = {}
for i, w in enumerate(words_tf):
    X_word[words[i]] = {}
    for p in perc_chg:
        x = inject_word(w, tokens['input_ids'].numpy(), p)
        dec = tokenizer.batch_decode(x, skip_special_tokens=True)
        X_word[words[i]][p] = np.array(dec)

Preprocessing

First we need to specify the type of embedding we want to extract from the BERT model. We can extract embeddings from the …

  • pooler_output: Last layer hidden-state of the first token of the sequence (classification token; CLS), further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained on the next sentence prediction (classification) objective during pre-training. Note: this output is usually not a good summary of the semantic content of the input; you're often better off averaging or pooling the sequence of hidden states for the whole input sequence.

  • last_hidden_state: Sequence of hidden states at the output of the last layer of the model, averaged over the tokens.

  • hidden_state: Hidden states of the model at the output of each layer, averaged over the tokens.

  • hidden_state_cls: Like hidden_state, but using the CLS token output instead of the average over the tokens.

If hidden_state or hidden_state_cls is used as the embedding type, you also need to pass the numbers of the layers to extract the embeddings from. As an example, we extract embeddings from the last 8 hidden states.

[10]:
from alibi_detect.models.tensorflow import TransformerEmbedding

emb_type = 'hidden_state'
n_layers = 8
layers = [-i for i in range(1, n_layers + 1)]  # layers -1 (last) down to -8

embedding = TransformerEmbedding(model_name, emb_type, layers)

Let’s check what an embedding looks like:

[11]:
tokens = tokenizer(list(X[:5]), padding='max_length', truncation=True,
                   max_length=max_len, return_tensors='tf')
x_emb = embedding(tokens)
print(x_emb.shape)
(5, 768)

So the embedding space of the BERT model used by the drift detector consists of a \(768\)-dimensional vector for each instance. We therefore first apply a dimensionality reduction step with an Untrained AutoEncoder (UAE) before conducting the statistical hypothesis test. We use the embedding model as the input of the UAE, which then projects the embedding onto a lower-dimensional space.

[12]:
tf.random.set_seed(0)
[13]:
from alibi_detect.cd.tensorflow import UAE

enc_dim = 32
shape = (x_emb.shape[1],)

uae = UAE(input_layer=embedding, shape=shape, enc_dim=enc_dim)

Let’s test this again:

[14]:
emb_uae = uae(tokens)
print(emb_uae.shape)
(5, 32)
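
Instead of the UAE convenience wrapper, any custom tf.keras.Model mapping the tokenized input to a lower-dimensional representation could be passed as the preprocessing model in the next section, analogous to the PyTorch example later in this notebook. A hypothetical sketch (the CustomEncoder class below is illustrative, not part of the library):

import tensorflow as tf

class CustomEncoder(tf.keras.Model):
    """ Illustrative alternative to UAE: frozen embedding + untrained dense projection. """
    def __init__(self, embedding: tf.keras.Model, enc_dim: int = 32):
        super().__init__()
        self.embedding = embedding  # e.g. the TransformerEmbedding defined above
        self.dense = tf.keras.layers.Dense(enc_dim)

    def call(self, tokens) -> tf.Tensor:
        return self.dense(self.embedding(tokens))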

K-S detector

Initialize

We proceed to initialize the drift detector. From here on, the detector works in the same way as for other modalities such as images. Please check the images example or the K-S detector documentation for more information about each of the possible parameters.

[15]:
from functools import partial
from alibi_detect.cd.tensorflow import preprocess_drift

# define preprocessing function
preprocess_fn = partial(preprocess_drift, model=uae, tokenizer=tokenizer,
                        max_len=max_len, batch_size=32)

# initialize detector
cd = KSDrift(X_ref, p_val=.05, preprocess_fn=preprocess_fn, input_shape=(max_len,))

# we can also save/load an initialised detector
filepath = 'my_path'  # change to directory where detector is saved
save_detector(cd, filepath)
cd = load_detector(filepath)
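
Since the K-S test is applied to each of the 32 UAE dimensions, the detector aggregates the feature-wise p-values with a multiple-testing correction. A minimal sketch of setting it explicitly (to our knowledge Bonferroni is the default, i.e. drift is flagged if the minimum feature-wise p-value falls below p_val divided by the number of features):

cd = KSDrift(X_ref, p_val=.05, preprocess_fn=preprocess_fn,
             correction='bonferroni', input_shape=(max_len,))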

Detect drift

Let's first check if drift occurs on a sample from the training set which is similar to the reference data.

[16]:
preds_h0 = cd.predict(X_h0)
labels = ['No!', 'Yes!']
print('Drift? {}'.format(labels[preds_h0['data']['is_drift']]))
print('p-value: {}'.format(preds_h0['data']['p_val']))
Drift? No!
p-value: [0.85929435 0.46576622 0.79439443 0.82795686 0.722555   0.7590978
 0.46576622 0.82795686 0.02246371 0.21933001 0.6852314  0.8879386
 0.5726548  0.5726548  0.50035924 0.9801618  0.9540582  0.3699725
 0.82795686 0.07762147 0.7590978  0.9134755  0.9540582  0.40047103
 0.06155144 0.28769323 0.5726548  0.64755726 0.5726548  0.2406036
 0.722555   0.00282894]

Detect drift on imbalanced and perturbed datasets:

[17]:
for k, v in X_imb.items():
    preds = cd.predict(v)
    print('% negative sentiment {}'.format(k * 100))
    print('Drift? {}'.format(labels[preds['data']['is_drift']]))
    print('p-value: {}'.format(preds['data']['p_val']))
    print('')
% negative sentiment 10.0
Drift? Yes!
p-value: [4.32430744e-01 7.22554982e-01 6.85231388e-01 6.09918952e-01
 1.81119651e-01 6.20218972e-03 1.22740539e-03 1.12110768e-02
 1.63965786e-04 3.69972497e-01 5.36054313e-01 8.59294355e-01
 8.27956855e-01 1.81119651e-01 4.00471032e-01 6.47557259e-01
 7.76214674e-02 1.20504074e-01 1.33834302e-01 2.24637091e-02
 6.47557259e-01 8.69054198e-02 7.76214674e-02 7.94394433e-01
 1.20504074e-01 3.40991944e-01 5.72654784e-01 2.87693232e-01
 2.87693232e-01 5.72654784e-01 1.99518353e-01 2.87693232e-01]

% negative sentiment 90.0
Drift? Yes!
p-value: [5.9607941e-01 3.1773445e-01 1.0167704e-01 6.0131598e-01 4.7765803e-03
 7.8468665e-02 5.4378760e-01 3.1890289e-04 4.7273561e-02 2.3027392e-01
 3.5409841e-01 2.2440368e-01 4.5503160e-01 8.8078308e-01 7.5261140e-01
 6.5092611e-01 3.8073459e-01 5.4552953e-04 6.6255075e-01 6.9101667e-01
 3.9483134e-02 8.2559012e-02 3.2168049e-01 1.9095013e-01 7.0450002e-01
 1.5517529e-06 9.7765464e-01 9.8889194e-02 6.3466263e-01 2.9970827e-02
 1.7626658e-01 5.0656848e-02]

[18]:
for w, probas in X_word.items():
    for p, v in probas.items():
        preds = cd.predict(v)
        print('Word: {} -- % perturbed: {}'.format(w, p))
        print('Drift? {}'.format(labels[preds['data']['is_drift']]))
        print('p-value: {}'.format(preds['data']['p_val']))
        print('')
Word: fantastic -- % perturbed: 1.0
Drift? No!
p-value: [0.9540582  0.01293455 0.26338065 0.722555   0.34099194 0.04281518
 0.04841881 0.31356168 0.14833806 0.96887016 0.85929435 0.50035924
 0.00532228 0.8879386  0.9998709  0.99870795 0.85929435 0.9882611
 0.06155144 0.7590978  0.79439443 0.2406036  0.10828251 0.722555
 0.28769323 0.18111965 0.9134755  0.996931   0.18111965 0.07762147
 0.9540582  0.5726548 ]

Word: fantastic -- % perturbed: 5.0
Drift? Yes!
p-value: [4.55808453e-03 4.14164800e-17 2.43227714e-08 6.85231388e-01
 3.18301190e-08 1.26629300e-17 1.10562748e-09 1.71140861e-02
 1.69780876e-14 3.50604125e-04 1.48931602e-02 1.84965307e-10
 0.00000000e+00 1.48931602e-02 9.93654132e-01 1.08282514e-01
 3.40991944e-01 2.19330013e-01 2.14098059e-19 7.76214674e-02
 3.25786677e-05 1.69780876e-14 1.09291570e-20 6.15514442e-02
 8.36122004e-23 4.56308130e-10 1.20504074e-01 4.00471032e-01
 2.86754206e-33 7.08821891e-21 2.26972293e-06 7.42663324e-05]

Word: good -- % perturbed: 1.0
Drift? Yes!
p-value: [2.1933001e-01 9.1347551e-01 1.3383430e-01 9.9954331e-01 2.6338065e-01
 9.9954331e-01 6.0991895e-01 8.8793862e-01 9.6887016e-01 5.7265478e-01
 9.3558097e-01 5.3605431e-01 9.7104527e-02 9.9870795e-01 9.3558097e-01
 4.6576622e-01 9.9987090e-01 2.6338065e-01 9.9693102e-01 1.1211077e-02
 9.3558097e-01 9.1347551e-01 6.0991895e-01 7.2255498e-01 6.0991895e-01
 9.9870795e-01 9.6887016e-01 9.9870795e-01 5.7265478e-01 4.2185336e-04
 9.9365413e-01 9.8016179e-01]

Word: good -- % perturbed: 5.0
Drift? Yes!
p-value: [2.86769516e-16 9.98707950e-01 4.91978077e-19 7.94394433e-01
 4.64324268e-09 7.22554982e-01 3.50604125e-04 3.40991944e-01
 1.34916729e-04 2.09715821e-11 6.47557259e-01 4.21853358e-04
 1.65277426e-33 5.46463318e-02 3.40991944e-01 1.84965307e-10
 5.36054313e-01 1.00300261e-10 9.80161786e-01 1.69780876e-14
 9.13475513e-01 3.27475419e-07 2.54783203e-07 3.32311448e-03
 1.34916729e-04 1.20504074e-01 6.15514442e-02 7.94394433e-01
 1.18559271e-07 0.00000000e+00 2.82894098e-03 1.64079204e-01]

Word: bad -- % perturbed: 1.0
Drift? No!
p-value: [0.6852314  0.40047103 0.1338343  0.9882611  0.50035924 0.9882611
 0.9999727  0.8879386  0.8879386  0.46576622 0.8879386  0.85929435
 0.01962691 0.9540582  0.9998709  0.40047103 0.21933001 0.01962691
 0.6852314  0.18111965 0.31356168 0.6852314  0.14833806 0.9134755
 0.93558097 0.99870795 0.9999727  0.99365413 0.722555   0.21933001
 0.06155144 0.9998709 ]

Word: bad -- % perturbed: 5.0
Drift? Yes!
p-value: [8.2482254e-10 2.8794037e-11 8.4967083e-18 2.4060360e-01 3.2578668e-05
 2.4060360e-01 5.7265478e-01 1.4833806e-01 3.6098195e-06 1.3007273e-15
 3.1356168e-01 4.1571425e-08 1.0593816e-42 7.2607823e-04 2.4060360e-01
 1.7114086e-02 1.8548947e-08 4.5879536e-21 1.8111965e-01 1.9783097e-07
 8.4248814e-24 4.6432427e-09 2.8676952e-16 7.2131259e-03 4.3243074e-01
 9.1347551e-01 1.6407920e-01 1.4563050e-03 5.3955968e-11 6.1319246e-16
 4.9197808e-19 9.8016179e-01]

Word: horrible -- % perturbed: 1.0
Drift? Yes!
p-value: [0.26338065 0.9995433  0.99870795 0.9540582  0.7590978  0.722555
 0.9999727  0.9134755  0.00145631 0.99870795 0.9995433  0.64755726
 0.09710453 0.99870795 0.5360543  0.99870795 0.04281518 0.1338343
 0.82795686 0.1338343  0.1640792  0.9134755  0.43243074 0.9801618
 0.9995433  0.1338343  0.99365413 0.9999727  0.9998709  0.00203786
 0.1640792  0.7590978 ]

Word: horrible -- % perturbed: 5.0
Drift? Yes!
p-value: [1.26629300e-17 5.36054313e-01 1.20504074e-01 8.27956855e-01
 7.26078229e-04 9.69783217e-03 4.84188050e-02 6.07078255e-04
 3.21035236e-38 4.01514189e-05 2.87693232e-01 1.84965307e-10
 5.41929480e-39 1.64079204e-01 1.63965786e-04 1.48338065e-01
 1.41174699e-08 1.98871276e-04 4.56308130e-10 1.95523170e-16
 6.34892210e-31 2.54783203e-07 9.03489017e-17 9.80161786e-01
 4.15714254e-08 4.95470906e-14 9.13475513e-01 1.98871276e-04
 5.62237052e-13 0.00000000e+00 2.79573150e-17 1.71140861e-02]

MMD TensorFlow detector

Initialize

Again, check the images example or the MMD detector documentation for more information about each of the possible parameters.

[19]:
cd = MMDDrift(X_ref, p_val=.05, preprocess_fn=preprocess_fn,
              n_permutations=100, input_shape=(max_len,))

Detect drift

H0:

[20]:
preds_h0 = cd.predict(X_h0)
labels = ['No!', 'Yes!']
print('Drift? {}'.format(labels[preds_h0['data']['is_drift']]))
print('p-value: {}'.format(preds_h0['data']['p_val']))
Drift? No!
p-value: 0.9

Imbalanced data:

[21]:
for k, v in X_imb.items():
    preds = cd.predict(v)
    print('% negative sentiment {}'.format(k * 100))
    print('Drift? {}'.format(labels[preds['data']['is_drift']]))
    print('p-value: {}'.format(preds['data']['p_val']))
    print('')
% negative sentiment 10.0
Drift? Yes!
p-value: 0.0

% negative sentiment 90.0
Drift? Yes!
p-value: 0.0

Perturbed data:

[22]:
for w, probas in X_word.items():
    for p, v in probas.items():
        preds = cd.predict(v)
        print('Word: {} -- % perturbed: {}'.format(w, p))
        print('Drift? {}'.format(labels[preds['data']['is_drift']]))
        print('p-value: {}'.format(preds['data']['p_val']))
        print('')
Word: fantastic -- % perturbed: 1.0
Drift? Yes!
p-value: 0.01

Word: fantastic -- % perturbed: 5.0
Drift? Yes!
p-value: 0.0

Word: good -- % perturbed: 1.0
Drift? No!
p-value: 0.57

Word: good -- % perturbed: 5.0
Drift? Yes!
p-value: 0.0

Word: bad -- % perturbed: 1.0
Drift? No!
p-value: 0.4

Word: bad -- % perturbed: 5.0
Drift? Yes!
p-value: 0.0

Word: horrible -- % perturbed: 1.0
Drift? No!
p-value: 0.08

Word: horrible -- % perturbed: 5.0
Drift? Yes!
p-value: 0.0

MMD PyTorch detector

Initialize

We can run the same detector with a PyTorch backend for both the preprocessing step and the MMD implementation:

[23]:
import torch
import torch.nn as nn

# set random seed and device
seed = 0
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
cuda
[24]:
from alibi_detect.cd.pytorch import preprocess_drift
from alibi_detect.models.pytorch import TransformerEmbedding

embedding_pt = TransformerEmbedding(model_name, emb_type, layers)

model = nn.Sequential(
    embedding_pt,
    nn.Linear(768, 256),
    nn.ReLU(),
    nn.Linear(256, enc_dim)
).to(device).eval()

# define preprocessing function
preprocess_fn = partial(preprocess_drift, model=model, tokenizer=tokenizer,
                        max_len=max_len, batch_size=32)

# initialise drift detector
cd = MMDDrift(X_ref, backend='pytorch', p_val=.05, preprocess_fn=preprocess_fn,
              n_permutations=100, input_shape=(max_len,))

Detect drift

H0:

[25]:
preds_h0 = cd.predict(X_h0)
labels = ['No!', 'Yes!']
print('Drift? {}'.format(labels[preds_h0['data']['is_drift']]))
print('p-value: {}'.format(preds_h0['data']['p_val']))
Drift? No!
p-value: 0.3400000035762787

Imbalanced data:

[26]:
for k, v in X_imb.items():
    preds = cd.predict(v)
    print('% negative sentiment {}'.format(k * 100))
    print('Drift? {}'.format(labels[preds['data']['is_drift']]))
    print('p-value: {}'.format(preds['data']['p_val']))
    print('')
% negative sentiment 10.0
Drift? Yes!
p-value: 0.0

% negative sentiment 90.0
Drift? Yes!
p-value: 0.0

Perturbed data:

[27]:
for w, probas in X_word.items():
    for p, v in probas.items():
        preds = cd.predict(v)
        print('Word: {} -- % perturbed: {}'.format(w, p))
        print('Drift? {}'.format(labels[preds['data']['is_drift']]))
        print('p-value: {}'.format(preds['data']['p_val']))
        print('')
Word: fantastic -- % perturbed: 1.0
Drift? No!
p-value: 0.07999999821186066

Word: fantastic -- % perturbed: 5.0
Drift? Yes!
p-value: 0.0

Word: good -- % perturbed: 1.0
Drift? No!
p-value: 0.7099999785423279

Word: good -- % perturbed: 5.0
Drift? Yes!
p-value: 0.0

Word: bad -- % perturbed: 1.0
Drift? No!
p-value: 0.12999999523162842

Word: bad -- % perturbed: 5.0
Drift? Yes!
p-value: 0.0

Word: horrible -- % perturbed: 1.0
Drift? No!
p-value: 0.33000001311302185

Word: horrible -- % perturbed: 5.0
Drift? Yes!
p-value: 0.0

Train embeddings from scratch

So far we have used pre-trained embeddings from a BERT model. We can, however, also use embeddings from a model trained from scratch. First we define and train a simple classification model consisting of an embedding and an LSTM layer in TensorFlow.

Load data and train model

[28]:
from tensorflow.keras.datasets import imdb, reuters
from tensorflow.keras.layers import Dense, Embedding, Input, LSTM
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.utils import to_categorical

INDEX_FROM = 3
NUM_WORDS = 10000


def print_sentence(tokenized_sentence: np.ndarray, id2w: dict):
    print(' '.join(id2w[token] for token in tokenized_sentence))
    print('')
    print(tokenized_sentence)


def mapping_word_id(data):
    w2id = data.get_word_index()
    w2id = {k: (v + INDEX_FROM) for k, v in w2id.items()}
    w2id["<PAD>"] = 0
    w2id["<START>"] = 1
    w2id["<UNK>"] = 2
    w2id["<UNUSED>"] = 3
    id2w = {v: k for k, v in w2id.items()}
    return w2id, id2w


def get_dataset(dataset: str = 'imdb', max_len: int = 100):
    if dataset == 'imdb':
        data = imdb
    elif dataset == 'reuters':
        data = reuters
    else:
        raise NotImplementedError

    w2id, id2w = mapping_word_id(data)

    (X_train, y_train), (X_test, y_test) = data.load_data(
        num_words=NUM_WORDS, index_from=INDEX_FROM)
    X_train = sequence.pad_sequences(X_train, maxlen=max_len)
    X_test = sequence.pad_sequences(X_test, maxlen=max_len)
    y_train, y_test = to_categorical(y_train), to_categorical(y_test)

    return (X_train, y_train), (X_test, y_test), (w2id, id2w)


def imdb_model(X: np.ndarray, num_words: int = 100, emb_dim: int = 128,
               lstm_dim: int = 128, output_dim: int = 2) -> tf.keras.Model:
    inputs = Input(shape=(X.shape[1:]), dtype=tf.float32)
    x = Embedding(num_words, emb_dim)(inputs)
    x = LSTM(lstm_dim, dropout=.5)(x)
    outputs = Dense(output_dim, activation=tf.nn.softmax)(x)
    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    model.compile(
        loss='categorical_crossentropy',
        optimizer='adam',
        metrics=['accuracy']
    )
    return model

Load and tokenize data:

[29]:
(X_train, y_train), (X_test, y_test), (word2token, token2word) = \
    get_dataset(dataset='imdb', max_len=max_len)

Let’s check out an instance:

[30]:
print_sentence(X_train[0], token2word)
cry at a film it must have been good and this definitely was also <UNK> to the two little boy's that played the <UNK> of norman and paul they were just brilliant children are often left out of the <UNK> list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was shared with us all

[1415   33    6   22   12  215   28   77   52    5   14  407   16   82
    2    8    4  107  117 5952   15  256    4    2    7 3766    5  723
   36   71   43  530  476   26  400  317   46    7    4    2 1029   13
  104   88    4  381   15  297   98   32 2071   56   26  141    6  194
 7486   18    4  226   22   21  134  476   26  480    5  144   30 5535
   18   51   36   28  224   92   25  104    4  226   65   16   38 1334
   88   12   16  283    5   16 4472  113  103   32   15   16 5345   19
  178   32]

Define and train a simple model:

[31]:
model = imdb_model(X=X_train, num_words=NUM_WORDS, emb_dim=256, lstm_dim=128, output_dim=2)
model.fit(X_train, y_train, batch_size=32, epochs=2,
          shuffle=True, validation_data=(X_test, y_test))
Epoch 1/2
782/782 [==============================] - 96s 121ms/step - loss: 0.5019 - accuracy: 0.7397 - val_loss: 0.3452 - val_accuracy: 0.8514
Epoch 2/2
782/782 [==============================] - 93s 118ms/step - loss: 0.2649 - accuracy: 0.8943 - val_loss: 0.3628 - val_accuracy: 0.8454
[31]:
<tensorflow.python.keras.callbacks.History at 0x7f6440301310>

Extract the embedding layer from the trained model and combine it with the UAE preprocessing step:

[32]:
embedding = tf.keras.Model(inputs=model.inputs, outputs=model.layers[1].output)
x_emb = embedding(X_train[:5])
print(x_emb.shape)
(5, 100, 256)
[33]:
tf.random.set_seed(0)

shape = tuple(x_emb.shape[1:])
uae = UAE(input_layer=embedding, shape=shape, enc_dim=enc_dim)

Again, we create reference, H0 and perturbed datasets. We also test against samples from the Reuters news topic classification dataset.

[34]:
X_ref, y_ref = random_sample(X_test, y_test, proba_zero=.5, n=n_sample)
X_h0, y_h0 = random_sample(X_test, y_test, proba_zero=.5, n=n_sample)
tokens = [word2token[w] for w in words]
X_word = {}
for i, t in enumerate(tokens):
    X_word[words[i]] = {}
    for p in perc_chg:
        X_word[words[i]][p] = inject_word(t, X_ref, p, padding='first')
[35]:
# load and tokenize Reuters dataset
(X_reut, y_reut), (w2t_reut, t2w_reut) = \
    get_dataset(dataset='reuters', max_len=max_len)[1:]

# sample random instances
idx = np.random.choice(X_reut.shape[0], n_sample, replace=False)
X_ood = X_reut[idx]

Initialize detector and detect drift

[36]:
from alibi_detect.cd.tensorflow import preprocess_drift

# define preprocessing function
preprocess_fn = partial(preprocess_drift, model=uae, batch_size=128)

# initialize detector
cd = KSDrift(X_ref, p_val=.05, preprocess_fn=preprocess_fn)

H0:

[37]:
preds_h0 = cd.predict(X_h0)
labels = ['No!', 'Yes!']
print('Drift? {}'.format(labels[preds_h0['data']['is_drift']]))
print('p-value: {}'.format(preds_h0['data']['p_val']))
Drift? No!
p-value: [0.93558097 0.64755726 0.50035924 0.85929435 0.04281518 0.93558097
 0.9801618  0.50035924 0.8879386  0.43243074 0.5726548  0.6852314
 0.60991895 0.9134755  0.18111965 0.722555   0.5726548  0.21933001
 0.5360543  0.6852314  0.85929435 0.31356168 0.9801618  0.18111965
 0.34099194 0.722555   0.04841881 0.99365413 0.82795686 0.14833806
 0.1338343  0.9134755 ]

Perturbed data:

[38]:
for w, probas in X_word.items():
    for p, v in probas.items():
        preds = cd.predict(v)
        print('Word: {} -- % perturbed: {}'.format(w, p))
        print('Drift? {}'.format(labels[preds['data']['is_drift']]))
        print('p-value: {}'.format(preds['data']['p_val']))
        print('')
Word: fantastic -- % perturbed: 1.0
Drift? No!
p-value: [0.9882611  0.79439443 0.9999727  0.9882611  0.7590978  0.8879386
 0.996931   0.82795686 0.64755726 0.7590978  0.85929435 0.99870795
 0.93558097 0.82795686 0.99365413 0.996931   0.85929435 0.8879386
 0.85929435 0.9540582  0.96887016 0.9801618  0.50035924 0.9998709
 0.96887016 0.9801618  0.8879386  0.96887016 0.9540582  0.8879386
 0.9995433  0.722555  ]

Word: fantastic -- % perturbed: 5.0
Drift? Yes!
p-value: [8.87938619e-01 1.99518353e-01 6.47557259e-01 1.64079204e-01
 2.63380647e-01 1.81119651e-01 7.22554982e-01 1.96269080e-02
 3.50604125e-04 1.99518353e-01 1.08282514e-01 6.85231388e-01
 2.63380647e-01 1.33834302e-01 8.27956855e-01 1.99518353e-01
 3.77843790e-02 1.48931602e-02 4.65766221e-01 4.84188050e-02
 6.09918952e-01 5.36054313e-01 2.82894098e-03 2.92505771e-02
 5.00359237e-01 7.94394433e-01 5.72654784e-01 6.15514442e-02
 8.87938619e-01 4.00471032e-01 3.13561678e-01 3.40991944e-01]

Word: good -- % perturbed: 1.0
Drift? No!
p-value: [0.9882611  0.99365413 0.99365413 0.9998709  0.99870795 0.99365413
 0.996931   0.96887016 0.9134755  0.96887016 0.99365413 0.9801618
 0.9134755  0.9998709  0.93558097 0.99365413 0.9801618  0.96887016
 0.99365413 0.9540582  0.99365413 0.996931   0.93558097 0.9995433
 0.93558097 0.996931   0.99365413 0.99870795 0.9801618  0.9134755
 0.96887016 0.9540582 ]

Word: good -- % perturbed: 5.0
Drift? No!
p-value: [0.9540582  0.82795686 0.7590978  0.5726548  0.60991895 0.3699725
 0.9801618  0.85929435 0.5360543  0.60991895 0.9801618  0.64755726
 0.28769323 0.99870795 0.8879386  0.28769323 0.60991895 0.19951835
 0.8879386  0.21933001 0.28769323 0.5360543  0.2406036  0.7590978
 0.79439443 0.34099194 0.9134755  0.40047103 0.8879386  0.31356168
 0.82795686 0.2406036 ]

Word: bad -- % perturbed: 1.0
Drift? No!
p-value: [0.8879386  0.99870795 0.99365413 0.85929435 0.93558097 0.6852314
 0.82795686 0.9540582  0.93558097 0.9540582  0.93558097 0.7590978
 0.6852314  0.96887016 0.9134755  0.99365413 0.46576622 0.79439443
 0.85929435 0.9540582  0.93558097 0.8879386  0.50035924 0.9999727
 0.5726548  0.9134755  0.99870795 0.9540582  0.9882611  0.8879386
 0.9540582  0.9134755 ]

Word: bad -- % perturbed: 5.0
Drift? Yes!
p-value: [6.09918952e-01 1.99518353e-01 3.69972497e-01 4.00471032e-01
 1.81119651e-01 1.71140861e-02 8.59294355e-01 6.15514442e-02
 3.13561678e-01 1.64079204e-01 6.47557259e-01 3.69972497e-01
 2.63813617e-05 1.96269080e-02 1.20504074e-01 3.69972497e-01
 7.76214674e-02 3.32780443e-02 9.71045271e-02 3.69972497e-01
 5.46463318e-02 5.00359237e-01 4.93855441e-05 6.47557259e-01
 4.84188050e-02 3.69972497e-01 2.40603596e-01 3.89581337e-03
 4.00471032e-01 8.27956855e-01 5.36054313e-01 5.36054313e-01]

Word: horrible -- % perturbed: 1.0
Drift? No!
p-value: [0.9801618  0.50035924 0.9134755  0.7590978  0.8879386  0.60991895
 0.9540582  0.9134755  0.5726548  0.96887016 0.85929435 0.8879386
 0.2406036  0.64755726 0.8879386  0.79439443 0.5726548  0.9882611
 0.6852314  0.85929435 0.7590978  0.7590978  0.7590978  0.9134755
 0.7590978  0.93558097 0.7590978  0.82795686 0.996931   0.9134755
 0.9801618  0.8879386 ]

Word: horrible -- % perturbed: 5.0
Drift? Yes!
p-value: [7.22554982e-01 1.38413116e-05 5.46463318e-02 3.89581337e-03
 1.99518353e-01 4.21853358e-04 1.99518353e-01 1.81119651e-01
 5.37760343e-07 6.20218972e-03 1.64079204e-01 7.76214674e-02
 5.71402455e-15 7.42663324e-05 1.29345525e-02 9.69783217e-03
 1.12110768e-02 6.15514442e-02 2.40603596e-01 1.20504074e-01
 5.72654784e-01 2.40603596e-01 1.10792353e-04 1.96269080e-02
 5.32228360e-03 1.98871276e-04 1.72444014e-03 1.71140861e-02
 8.87938619e-01 9.71045271e-02 4.84188050e-02 8.69054198e-02]

This detector is not as sensitive as the Transformer-based K-S drift detector. The embeddings trained from scratch come from a simple model, trained for only 2 epochs on a small dataset with a cross-entropy loss function. The pre-trained BERT model, on the other hand, captures the semantics of the data better.
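
The same preprocessing function could also be plugged into an MMD detector, analogous to the earlier sections; a minimal sketch (not executed in this notebook):

from alibi_detect.cd import MMDDrift

cd_mmd = MMDDrift(X_ref, p_val=.05, preprocess_fn=preprocess_fn,
                  n_permutations=100)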

Sample from the Reuters dataset:

[39]:
preds_ood = cd.predict(X_ood)
labels = ['No!', 'Yes!']
print('Drift? {}'.format(labels[preds_ood['data']['is_drift']]))
print('p-value: {}'.format(preds_ood['data']['p_val']))
Drift? Yes!
p-value: [5.72654784e-01 7.26078229e-04 2.73716728e-15 3.49877549e-09
 1.29345525e-02 2.24637091e-02 4.95470906e-14 1.34916729e-04
 8.27956855e-01 4.00471032e-01 6.20218972e-03 1.97469308e-09
 6.15514442e-02 5.06567594e-04 5.46463318e-02 7.59097815e-01
 1.97830971e-07 4.56308130e-10 4.15714254e-08 4.32430744e-01
 8.36122004e-23 4.56308130e-10 1.12110768e-02 5.20541775e-30
 5.72654784e-01 9.15067773e-08 1.85489473e-08 6.85231388e-01
 1.54522304e-12 2.56591532e-02 2.40603596e-01 7.21312594e-03]