I'm trying to get sentence vectors from hidden states in a BERT model. Looking at the huggingface BertModel instructions here, which say:
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = BertModel.from_pretrained("bert-base-multilingual-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
So first note, as it is on the website, this does /not/ run. You get:
>>> Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'BertTokenizer' object is not callable
But it looks like a minor change fixes it, in that you don't call the tokenizer directly, but ask it to encode the input:
encoded_input = tokenizer.encode(text, return_tensors="pt")
output = model(encoded_input)
OK, that aside, the tensors I get, however, have a different shape than I expected:
>>> output[0].shape
torch.Size([1,11,768])
This is a lot of layers. Which is the correct layer to use for sentence embeddings? [0]? [-1]? Averaging several? I have the goal of being able to do cosine similarity with these, so I need a proper 1xN vector rather than an NxK tensor.
I see that the popular bert-as-a-service project appears to use [0]
Is this correct? Is there documentation for what each of the layers are?
While the existing answer of Jindrich is generally correct, it does not address the question entirely. The OP asked which layer he should use to calculate the cosine similarity between sentence embeddings, and the short answer to this question is: none. A metric like cosine similarity requires that the dimensions of the vector contribute equally and meaningfully, but this is not the case for the BERT weights released by the original authors. Jacob Devlin (one of the authors of the BERT paper) wrote:
I'm not sure what these vectors are, since BERT does not generate meaningful sentence vectors. It seems that this is doing average pooling over the word tokens to get a sentence vector, but we never suggested that this will generate meaningful sentence representations. And even if they are decent representations when fed into a DNN trained for a downstream task, it doesn't mean that they will be meaningful in terms of cosine distance. (Since cosine distance is a linear space where all dimensions are weighted equally).
However, that does not mean you cannot use BERT for such a task. It just means that you cannot use the pre-trained weights out-of-the-box. You can either train a classifier on top of BERT which learns which sentences are similar (using the [CLS] token), or you can use sentence-transformers, which can be used in an unsupervised scenario because they were trained to produce meaningful sentence representations.
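For the first option, here is a minimal sketch (not a full training script) of a pair classifier on top of the pooled [CLS] representation, using transformers' generic BertForSequenceClassification head; the sentence pair and label are placeholders:
import torch
from transformers import BertTokenizer, BertForSequenceClassification
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
# Binary head (similar / not similar) sitting on top of BERT's pooled [CLS] representation
model = BertForSequenceClassification.from_pretrained("bert-base-multilingual-cased", num_labels=2)
# Encode the two sentences as one pair (sentence A [SEP] sentence B)
inputs = tokenizer.encode_plus("What is your age?", "How old are you?", return_tensors="pt")
labels = torch.tensor([1])  # placeholder label: 1 = similar, 0 = not similar
outputs = model(**inputs, labels=labels)
loss, logits = outputs[0], outputs[1]
loss.backward()  # in practice, run this inside a training loop with an optimizer and real data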
I don't think there is a single authoritative piece of documentation saying what to use and when. You need to experiment and measure what is best for your task. Recent observations about BERT are nicely summarized in this paper: https://arxiv.org/pdf/2002.12327.pdf.
I think the rule of thumb is:
Use the last layer if you are going to fine-tune the model for your specific task. And fine-tune whenever you can: several hundred or even a few dozen training examples are enough.
Use one of the middle layers (7th or 8th) if you cannot fine-tune the model. The intuition behind that is that the layers first develop a more and more abstract and general representation of the input; at some point, the representation starts to become more targeted to the pre-training task.
Bert-as-service uses the last layer by default (but it is configurable). Here, it would be [:, -1]. However, it always returns a list of vectors for all input tokens. The vector corresponding to the first special (so-called [CLS]) token is considered to be the sentence embedding. This is where the [0] comes from in the snippet you refer to.
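To connect this to the snippet in the question: output[0] is the last hidden layer, with shape (batch_size, num_tokens, hidden_size), so the 11 in [1, 11, 768] is the number of tokens in the sentence, not the number of layers. A minimal sketch of getting one 768-dim vector per sentence out of it (variable names are just for illustration):
import torch
last_hidden = output[0]                  # last hidden layer, shape (batch_size, num_tokens, 768)
# Option 1: the vector of the first token ([CLS]) -- what bert-as-service returns by default
cls_vector = last_hidden[:, 0, :]        # shape (1, 768)
# Option 2: mean pooling over all token vectors
mean_vector = last_hidden.mean(dim=1)    # shape (1, 768)
# Either gives a single 768-dim vector per sentence; cosine similarity between two sentences
# would then be, e.g.: torch.nn.functional.cosine_similarity(cls_vector_a, cls_vector_b)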
As mentioned in other answers, BERT was not meant to produce sentence-level embeddings. Now, let's work on how we can leverage the power of BERT to compute context-sensitive sentence-level embeddings.
BERT does carry the context at word level, here is an example:
This is a wooden stick.
Stick to your work.
Both sentences above contain the word 'stick', and BERT does a good job of computing an embedding of 'stick' according to its sentence (or, say, its context).
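As a quick sanity check of that claim, here is a small sketch (assuming a recent transformers version where the tokenizer is callable, and that 'stick' survives as a single lowercase wordpiece) comparing the contextual vectors of 'stick' in the two sentences:
import torch
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
def stick_vector(sentence):
    # Return the last-layer vector of the wordpiece 'stick' in `sentence`
    enc = tokenizer(sentence, return_tensors="pt")
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    idx = tokens.index("stick")
    with torch.no_grad():
        hidden = model(**enc)[0]          # shape (1, num_tokens, 768)
    return hidden[0, idx]
v1 = stick_vector("This is a wooden stick.")
v2 = stick_vector("Stick to your work.")
# The two 'stick' vectors differ because each carries its own sentence context
print(torch.nn.functional.cosine_similarity(v1.unsqueeze(0), v2.unsqueeze(0)))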
Now, let's move on to another example:
--What is your age?
--How old are you?
The two sentences above are contextually very similar, so we need a model that can accept a sentence, text chunk, or paragraph and produce the right embeddings collectively. Here is how it can be achieved.
Method 1:
Use a pre-trained sentence_transformers model; here is a link to the Hugging Face hub.
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(r"sentence-transformers/paraphrase-MiniLM-L6-v2")
embd_a = model.encode("What is your age?")
embd_b = model.encode("How old are you?")
sim_score = cos_sim(embd_a, embd_b)
print(sim_score)
output: tensor([[0.8648]])
Now, there may be a question of how we can train our own sentence_transformer, specific to a domain. Here we go.
Supervised approach:
A common challenge for data scientists or ML engineers is getting properly annotated data; it is usually hard to get in good volume. But say you have it: here is how we can train our own sentence_transformer (don't worry, there is an unsupervised approach too).
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader
model = SentenceTransformer('distilbert-base-nli-mean-tokens')
train_examples = [InputExample(texts=['My first sentence', 'My second sentence'], label=0.8),
                  InputExample(texts=['Another pair', 'Unrelated sentence'], label=0.3)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)
# Tune the model
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
More details here.
Tip: If you have a set of sentences that are similar to each other, say a CSV where columns A and B contain sentences similar to each other (I mean each row holds a pair of similar sentences), just load the CSV, assign random values between 0.85 and 0.95 as the similarity score, and proceed, as sketched below.
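A sketch of that tip, assuming a hypothetical pairs.csv with columns A and B:
import csv
import random
from sentence_transformers import InputExample
train_examples = []
with open("pairs.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Each row holds a pair of similar sentences; assign a high pseudo-label
        score = random.uniform(0.85, 0.95)
        train_examples.append(InputExample(texts=[row["A"], row["B"]], label=score))
# train_examples can now be fed to the DataLoader / CosineSimilarityLoss setup above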
Unsupervised approach
Say you don't have a huge set of annotated data, but you want to train a domain-specific sentence_transformer. Here is how we do it. Even for unsupervised training, data is required, i.e. a list of sentences/paragraphs, but it need not be annotated. Say you don't have any data at all; there is still a workaround (please see the last part of this answer).
Multiple approaches are available for unsupervised training; here are two of the most prominent ones. To see a list of all available approaches, please visit here.
TSDAE (link to research paper).
from sentence_transformers import SentenceTransformer, LoggingHandler
from sentence_transformers import models, util, datasets, evaluation, losses
from torch.utils.data import DataLoader
# Define your sentence transformer model using CLS pooling
model_name = 'bert-base-uncased'
word_embedding_model = models.Transformer(model_name)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), 'cls')
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
# Define a list with sentences (1k - 100k sentences)
train_sentences = ["Your set of sentences",
"Model will automatically add the noise",
"And re-construct it",
"You should provide at least 1k sentences"]
# Create the special denoising dataset that adds noise on-the-fly
train_dataset = datasets.DenoisingAutoEncoderDataset(train_sentences)
# DataLoader to batch your data
train_dataloader = DataLoader(train_dataset, batch_size=8, shuffle=True)
# Use the denoising auto-encoder loss
train_loss = losses.DenoisingAutoEncoderLoss(model, decoder_name_or_path=model_name, tie_encoder_decoder=True)
# Call the fit method
model.fit(
train_objectives=[(train_dataloader, train_loss)],
epochs=1,
weight_decay=0,
scheduler='constantlr',
optimizer_params={'lr': 3e-5},
show_progress_bar=True
)
model.save('output/tsdae-model')
SimCSE (link to research paper).
from sentence_transformers import SentenceTransformer, InputExample
from sentence_transformers import models, losses
from torch.utils.data import DataLoader
# Define your sentence transformer model using CLS pooling
model_name = 'distilroberta-base'
word_embedding_model = models.Transformer(model_name, max_seq_length=32)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
# Define a list with sentences (1k - 100k sentences)
train_sentences = ["Your set of sentences",
"Model will automatically add the noise",
"And re-construct it",
"You should provide at least 1k sentences"]
# Convert train sentences to sentence pairs
train_data = [InputExample(texts=[s, s]) for s in train_sentences]
# DataLoader to batch your data
train_dataloader = DataLoader(train_data, batch_size=128, shuffle=True)
# Use MultipleNegativesRankingLoss (the contrastive objective used for SimCSE)
train_loss = losses.MultipleNegativesRankingLoss(model)
# Call the fit method
model.fit(
train_objectives=[(train_dataloader, train_loss)],
epochs=1,
show_progress_bar=True
)
model.save('output/simcse-model')
Tip: If you observe carefully, the major difference is in the loss function used. To see a list of all the loss functions applicable to such training scenarios, visit here. Also, across the experiments I did, I found that TSDAE is more useful when you want decent precision and good recall, whereas SimCSE can be used when you want very high precision and low recall.
Now, if you don't have sufficient data to fine-tune the model but you can find a BERT model trained on your domain, you can directly leverage it by adding pooling and dense layers. Please do research on what 'pooling' is, to have a better understanding of what you are doing.
from sentence_transformers import SentenceTransformer, models
from torch import nn
word_embedding_model = models.Transformer('bert-base-uncased', max_seq_length=256)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())
dense_model = models.Dense(in_features=pooling_model.get_sentence_embedding_dimension(), out_features=256, activation_function=nn.Tanh())
model = SentenceTransformer(modules=[word_embedding_model, pooling_model, dense_model])
Tip: With the above approach, if you start getting extremely high cosine scores, that is a signal to do negative testing. Sometimes simply adding pooling layers does not help; you must take a few examples and check the similarity scores for inputs that are not similar (it is possible that this shows good similarity even for dissimilar sentences, and that is when you should stop, collect some data, and do unsupervised training).
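A minimal sketch of such a negative test, reusing the model built above (the sentence pairs are made up):
from sentence_transformers.util import cos_sim
# Pairs that should NOT be similar -- if these also score high, the embeddings are
# probably not discriminative enough and you need (unsupervised) training
negative_pairs = [
    ("What is your age?", "The stock market closed lower today."),
    ("This is a wooden stick.", "How do I reset my password?"),
]
for a, b in negative_pairs:
    score = cos_sim(model.encode(a), model.encode(b))
    print(f"{score.item():.3f}  {a!r} vs {b!r}")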
For people who are interested in going deeper, here is a list of topics that may help you.
Pooling
Siamese Networks
Contrastive Loss
:) :)
At my University's research group we have been pre-training a RoBERTa model for Portuguese and also a domain-specific one, also based on RoBERTa. We have been conducting a series of benchmarks using huggingface's transformers library, and the RoBERTa models are performing better than the existing Portuguese BERT model for almost all datasets and tasks.
One of the tasks we're focusing on is NER, and since AllenNLP supports a CRF-based NER model, we were looking forward to seeing if we would get even greater improvements using these new RoBERTa models combined with AllenNLP's crf_tagger. We used the same jsonnet config we were using for BERT, only switching to RoBERTa, and ran a grid search on some hyperparameters to look for the best model. We tested hyperparameters such as weight decay and learning rate (for the huggingface_adamw optimizer) and dropout (for crf_tagger), using 3 different seeds. To our surprise, the RoBERTa models weren't getting better results than the existing BERT model, which contradicted the experiments using transformers. It wasn't even a tie: the BERT model was much better (90.43% for the best BERT vs. 89.27% for the best RoBERTa).
This made us suspicious that AllenNLP could be somehow biased towards BERT, so we decided to run a standard English benchmark (CoNLL 2003) for NER using both transformers and AllenNLP, and the results we got reinforced this suspicion. For AllenNLP, we ran a grid search keeping the exact same jsonnet config, changing only the learning rate (from 8e-6 to 7e-5), the learning rate scheduler (slanted_triangular and linear_with_warmup with 10% and 3% of the steps as warmup) and, of course, the model (bert-base-cased and roberta-base). The results we got for AllenNLP were surprising: absolutely all models trained with bert-base-cased were better than all roberta-base models (the best BERT was 91.65% on the test set and the best RoBERTa was 90.63%).
For transformers, we did almost the same thing, except we didn't change the learning rate scheduler there, we kept the default one, which is linear with warmup, using 10% warmup ratio. We tested the same learning rates, and also applied 3 different seeds. The results we got for transformers were exactly the opposite: all roberta-base models were better than all bert-base-cased models (best RoBERTa was 92.46% on the test set and best BERT was 91.58%).
Is there something in AllenNLP framework that could be making these trained NER models biased towards BERT, and underperforming for RoBERTa? Where could we start looking for possible issues? Doesn't look like a hyperparameter issue, since we tested so many combinations with grid search so far.
Thanks!
If model-biased behavior does exist, I'd expect it to be somewhere in the implementations of the Transformer-related modules, viz. PretrainedTransformerIndexer, PretrainedTransformerTokenizer, PretrainedTransformerEmbedder, etc.
It may be worth checking whether RoBERTa's special tokens (i.e., <s>, </s>, <pad>, <unk>, and <mask>) are being used. My understanding is that AllenNLP attempts to infer these, but if this inference process failed, then it's possible that e.g. the tokenizer would be preparing sequences with another model's special tokens, e.g. [CLS] instead of <s>, etc.
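One quick way to see which special tokens the underlying huggingface tokenizers report (and which AllenNLP's inference should agree with) is something like:
from transformers import AutoTokenizer
for name in ("bert-base-cased", "roberta-base"):
    tok = AutoTokenizer.from_pretrained(name)
    print(name, tok.special_tokens_map)
# bert-base-cased -> cls_token '[CLS]', sep_token '[SEP]', pad_token '[PAD]', ...
# roberta-base    -> cls_token '<s>',   sep_token '</s>',  pad_token '<pad>', ...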
I think I've figured this out. This behavior is likely caused by AllenNLP's default implementation of tokenization: when a pre-existing tokenization with paired tags is provided (as I assume it is since you are working with NER datasets where tags must be paired with tokens), PretrainedTransformerTokenizer.intra_word_tokenize is used, and this tokenization function does not add a leading space to tokens, causing suboptimal wordpiece tokenization.
Recall that the RoBERTa tokenizer uses byte-pair encoding, which uses special characters (Ġ in some implementations) to indicate the initial wordpiece of whitespace-separated tokens, while BERT uses ## to indicate non-initial wordpieces of whitespace-separated tokens. Observe:
>>> from transformers import BertTokenizer, RobertaTokenizer
>>> rt = RobertaTokenizer.from_pretrained('roberta-base')
>>> bt = BertTokenizer.from_pretrained('bert-base-cased')
>>> bt.tokenize('modern artistry')
['modern', 'artist', '##ry']
>>> rt.tokenize('modern artistry')
['modern', 'Ġart', 'istry']
RoBERTa does have the option add_prefix_space which adds a space to the beginning-of-sequence token, but this is False by default, at least on roberta-base.
>>> rt.add_prefix_space
False
>>> rt.add_prefix_space = True
>>> rt.tokenize('modern artistry')
['Ġmodern', 'Ġart', 'istry']
Now, for AllenNLP: I expect that you used the PretrainedTransformerMismatchedEmbedder and PretrainedTransformerMismatchedIndexer setup, since you're doing NER. The indexer uses the intra_word_tokenize function of PretrainedTransformerTokenizer, and a quick look at its implementation reveals that what it is doing is invoking the tokenizer for each individual token.
Why is this a problem? Well, this works fine if you're using WordPiece tokenization (like with BERT) since whitespace does not need to be present in the tokenizer's input for good subword tokenization to occur. However, BPE tokenization does require whitespace to be in the input string, and if we're calling the tokenizer on tokens without whitespace in them, then the BPE tokenizer no longer knows how to distinguish which subwords are token-initial! Consider:
# From before (with add_prefix_space back at its default, False)
>>> rt.add_prefix_space = False
>>> rt.tokenize('modern artistry')
['modern', 'Ġart', 'istry']
# The way AllenNLP does it. Bad, no initial "Ġ" on "art"!
>>> [wp for token in ['modern', 'artistry'] for wp in rt.tokenize(token)]
['modern', 'art', 'istry']
# This is equivalent to tokenizing a whole string with no space:
>>> rt.tokenize('modernartistry')
['modern', 'art', 'istry']
This information about token boundaries is potentially meaningful. Consider the two strings ax island and axis land, which have different meanings in English. If you tokenize them the way AllenNLP does, the input IDs for the wordpieces will be substantially different (!):
# Intended
>>> rt.tokenize('axis land')
['axis', 'Ġland']
>>> rt.tokenize('ax island')
['ax', 'Ġisland']
# What AllenNLP gives you
>>> [wp for token in ['axis', 'land'] for wp in rt.tokenize(token)]
['axis', 'land']
>>> [wp for token in ['ax', 'island'] for wp in rt.tokenize(token)]
['ax', 'is', 'land']
So, to mitigate this, you would need to modify intra_word_tokenize somehow to bring the wordpieces more in line with what you'd expect. I'm not positive this is exactly what's causing the performance issues you note, but I'm pretty sure this tokenization issue is happening for you, and if it is, I would expect performance degradation due to the suboptimal wordpiece tokenization. A cheap solution would be to flip add_prefix_space on, but there may be other problems that doing so could subtly cause--I haven't considered it yet.
I am looking at someone else's BERT model, in which the vocab.txt's size is 22110, but the vocab_size parameter's value is 21128 in bert_config.json.
I understand that these two numbers must be exactly the same. Is that right?
If it is really BERT using a WordPiece tokenizer, then yes. Different lengths of the vocabulary and vocab_size in the config would mean that there are either embeddings that can never be used or vocabulary items without any embeddings.
In this case, you will see no error message because the model and the tokenizer are loaded separately. With a vocabulary of 22110 items but an embedding table of only 21128 rows, there are 982 vocabulary items whose token IDs point beyond the embedding table and therefore cannot be embedded at all.
Note, however, that the model may use some very non-standard tokenizer that saves the vocabulary in such a way that it is 982 items longer than what the model actually uses (although that is quite unlikely).
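A quick way to check this for a given checkpoint, sketched with the huggingface transformers API (the path is a placeholder for the model in question):
from transformers import BertModel, BertTokenizer
path = "path/to/that-bert-model"   # placeholder: local path or hub name of the checkpoint
tokenizer = BertTokenizer.from_pretrained(path)
model = BertModel.from_pretrained(path)
print(len(tokenizer))                               # size of vocab.txt, e.g. 22110
print(model.config.vocab_size)                      # e.g. 21128
print(model.get_input_embeddings().num_embeddings)  # rows in the embedding table
# If len(tokenizer) > vocab_size, some token IDs have no embedding row at all
# and will fail (or silently misbehave) when looked up.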
In Tensorflow's tutorial on RNNs (https://www.tensorflow.org/tutorials/recurrent), it mentions two parameters: batch size and time steps. I am confused by the concepts. In my opinion, RNNs introduce batching because the to-train sequence can be so long that backpropagation cannot handle it (exploding/vanishing gradients). So we divide the long to-train sequence into shorter sequences, each of which is a mini-batch and whose size is called "batch size". Am I right here?
Regarding time steps, an RNN consists of only a cell (an LSTM or GRU cell, or another cell), and this cell is sequential. We can understand the sequential concept by unrolling it. But unrolling a sequential cell is a concept, not the actual implementation, which means we do not implement it in an unrolled way. Suppose the to-train sequence is a text corpus. Then we feed one word at a time to the RNN cell and then update the weights. So why do we have time steps here? Combining this with my understanding of "batch size" above, I am even more confused. Do we feed the cell one word or multiple words (batch size)?
Batch size pertains to the number of training samples to consider at a time when updating your network weights. So, in a feedforward network, if you want to update your network weights based on computing your gradients from one word at a time, your batch_size = 1.
As the gradients are computed from a single sample, this is computationally very cheap. On the other hand, it is also very erratic training.
To understand what happens during the training of such a feedforward network, I'll refer you to this very nice visual example of single_batch versus mini_batch versus single_sample training.
However, you want to understand what happens with your num_steps variable. This is not the same as your batch_size. As you might have noticed, so far I have referred to feedforward networks. In a feedforward network, the output is determined from the network inputs and the input-output relation is mapped by the learned network relations:
hidden_activations(t) = f(input(t))
output(t) = g(hidden_activations(t)) = g(f(input(t)))
After a training pass of size batch_size, the gradient of your loss function with respect to each of the network parameters is computed and your weights updated.
In a recurrent neural network (RNN), however, your network functions a tad differently:
hidden_activations(t) = f(input(t), hidden_activations(t-1))
output(t) = g(hidden_activations(t))
          = g(f(input(t), hidden_activations(t-1)))
          = g(f(input(t), f(input(t-1), hidden_activations(t-2))))
          = g(f(input(t), f(input(t-1), ..., f(input(0), hidden_initial_state))))
As you might have surmised from the naming, the network retains a memory of its previous state, and the neuron activations are now also dependent on the previous network state and, by extension, on all states the network ever found itself in. Most RNNs employ a forgetfulness factor in order to attach more importance to more recent network states, but that is beside the point of your question.
As you might surmise, it is computationally very, very expensive to calculate the gradients of the loss function with respect to the network parameters if you have to consider backpropagation through all states since the creation of your network. So there is a neat little trick to speed up your computation: approximate your gradients with a subset of historical network states, num_steps.
If this conceptual discussion was not clear enough, you can also take a look at a more mathematical description of the above.
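To make the two parameters concrete, here is a small numpy sketch (not the tutorial's actual reader code) of how a long stream of word IDs is typically cut into batch_size parallel streams and then sliced into windows of num_steps for truncated backpropagation:
import numpy as np
data = np.arange(44)        # stand-in for a long stream of word IDs
batch_size, num_steps = 4, 5
# 1) Cut the stream into batch_size parallel sub-sequences (one per row)
n = (len(data) // batch_size) * batch_size
streams = data[:n].reshape(batch_size, -1)        # shape (4, 11)
# 2) Slide over the columns in windows of num_steps; each window is one training step,
#    and gradients are only backpropagated through those num_steps positions
for i in range(0, streams.shape[1] - 1, num_steps):
    x = streams[:, i:i + num_steps]               # inputs,  shape (4, 5)
    y = streams[:, i + 1:i + 1 + num_steps]       # targets, shape (4, 5) -- next-word prediction
    # feed (x, y) to the RNN here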
I found this diagram which helped me visualize the data structure.
From the image, 'batch size' is the number of examples of a sequence you want to train your RNN with for that batch. 'Values per timestep' are your inputs (in my case, my RNN takes 6 inputs), and finally, your time steps are the 'length', so to speak, of the sequence you're training on.
I'm also learning about recurrent neural nets and how to prepare batches for one of my projects (and stumbled upon this thread trying to figure it out).
Batching for feedforward and recurrent nets is slightly different, and when looking at different forums, the terminology for both gets thrown around and it gets really confusing, so visualizing it is extremely helpful.
Hope this helps.
RNN's "batch size" is to speed up computation (as there're multiple lanes in parallel computation units); it's not mini-batch for backpropagation. An easy way to prove this is to play with different batch size values, an RNN cell with batch size=4 might be roughly 4 times faster than that of batch size=1 and their loss are usually very close.
As to RNN's "time steps", let's look into the following code snippets from rnn.py. static_rnn() calls the cell for each input_ at a time and BasicRNNCell::call() implements its forward part logic. In a text prediction case, say batch size=8, we can think input_ here is 8 words from different sentences of in a big text corpus, not 8 consecutive words in a sentence.
In my experience, we decide the value of time steps based on how deep we would like to model in "time" or "sequential dependency". Again, to predict the next word in a text corpus with BasicRNNCell, a small time step might work. A large time step size, on the other hand, might suffer from the exploding gradient problem.
def static_rnn(cell,
               inputs,
               initial_state=None,
               dtype=None,
               sequence_length=None,
               scope=None):
  """Creates a recurrent neural network specified by RNNCell `cell`.

  The simplest form of RNN network generated is:

    state = cell.zero_state(...)
    outputs = []
    for input_ in inputs:
      output, state = cell(input_, state)
      outputs.append(output)
    return (outputs, state)
  """

class BasicRNNCell(_LayerRNNCell):

  def call(self, inputs, state):
    """Most basic RNN: output = new_state =
       act(W * input + U * state + B).
    """
    gate_inputs = math_ops.matmul(
        array_ops.concat([inputs, state], 1), self._kernel)
    gate_inputs = nn_ops.bias_add(gate_inputs, self._bias)
    output = self._activation(gate_inputs)
    return output, output
To visualize how these two parameters are related to the data set and the weights, Erik Hallström's post is worth reading. From this diagram and the above code snippets, it's obvious that an RNN's "batch size" does not affect the weights (wa, wb, and b) but "time steps" does. So, one could decide an RNN's "time steps" based on their problem and network model, and an RNN's "batch size" based on the computation platform and data set.
I am working on a project where my task deals with speech/audio/voice comparison. The project is used for judging the winner in mimicry competitions. Practically, I need to capture the user's speech/voice, compare it with the original audio file, and return a percentage match. I need to develop this in the R language.
I have already tried voice-related packages in R (tuneR, audio, seewave), but in my search I was not able to find information related to comparison.
I need some assistance from you guys on where I can find information related to my work, what the best way is to handle this type of problem, and what the prerequisites are for processing this type of audio-related work.
Basically, the best features to use for speech/voice comparison are the MFCCs.
There is some software that can be used to extract these coefficients: Praat website
You can also try to find a library to extract these coefficients.
[Edit: I've found in the tuneR documentation that it has a function to extract MFCCs - search for the function melfcc()]
After you've extracted these features, you can use Machine Learning (SVM, RandomForests or something like that) to develop a classifier.
I have a seminar that I presented about Speaker Recognition Systems; take a look at it, it may be helpful. (Seminar)
If you have time and interest, you could also read:
Authors: Kinnunen, T., & Li, H. (2010)
Paper: an overview of text-independent speaker recognition: From features to supervectors
After you get a feature vector for each audio sample (with MFCC and/or other features), then you'll need to compare pairs of feature vectors (Features from A versus Features from B):
You could try to use the Absolute Difference between these feature vectors:
abs(feature vector from A - feature vector from B)
The result of the operation above is a feature vector where every element is >= 0, and it has the same size as the A (or B) feature vector.
You could also test the element-wise multiplication between A and B features:
(A1*B1, A2*B2, ... , An*Bn)
Then you need to label each feature vector
(1 if person A == person B and 0 if person A != person B).
Usually the absolute difference performs better than the multiplication feature vector, but you can append both vectors and test the performance of the classifier using both the abs diff and the multiplication features at the same time.
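For illustration, here is a sketch of building those pairwise features for a classifier (in Python/numpy for brevity; the same recipe translates directly to R):
import numpy as np
def pair_features(mfcc_a, mfcc_b):
    # mfcc_a, mfcc_b: fixed-length feature vectors (e.g. the per-file mean of the MFCC frames)
    a, b = np.asarray(mfcc_a), np.asarray(mfcc_b)
    abs_diff = np.abs(a - b)      # element-wise absolute difference
    product = a * b               # element-wise multiplication
    return np.concatenate([abs_diff, product])
# X: one row per pair of recordings, y: 1 if same speaker / good imitation, 0 otherwise
# X = np.vstack([pair_features(fa, fb) for fa, fb in pairs])
# then train an SVM / random forest on (X, y)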
I am using Latent semantic analysis for text similarity. I have 2 questions.
How do I select the K value for dimension reduction?
I have read in many places that LSI works for words with similar meanings, for example car and automobile. How is that possible? What is the magic step I am missing here?
The typical choice for k is 300. Ideally, you set k based on an evaluation metric that uses the reduced vectors. For example, if you're clustering documents, you could select the k that maximizes the clustering solution score. If you don't have a benchmark to measure against, then I would set k based on how big your data set is. If you only have 100 documents, then you wouldn't expect to need several hundred latent factors to represent them. Likewise, if you have a million documents, then 300 may be too small. However, in my experience the resulting vectors are fairly robust to large changes in k, provided that k is not too small (i.e., k = 300 does about as well as k = 1000).
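A sketch of that kind of evaluation with scikit-learn; the clustering score is just one possible criterion, and load_documents() is a stand-in for however you load your corpus:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
docs = load_documents()                       # placeholder: your list of document strings
X = TfidfVectorizer().fit_transform(docs)     # document-term matrix
for k in (50, 100, 300, 500):
    Z = TruncatedSVD(n_components=k, random_state=0).fit_transform(X)
    labels = KMeans(n_clusters=10, random_state=0).fit_predict(Z)   # 10 clusters is arbitrary
    print(k, silhouette_score(Z, labels))
# Keep the k that scores best on *your* downstream evaluation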
You might be confusing LSI with Latent Semantic Analysis (LSA). They're very related techniques, with the difference being that LSI operates on documents, and LSA operates on words. Both approaches use the same input (a term x document matrix). There are several good open source LSA implementations if you would like to try them. The LSA wikipedia page has a comprehensive list.
Try a couple of different values from [1..n] and see what works for whatever task you are trying to accomplish.
Make a word-word correlation matrix [i.e. cell(i,j) holds the # of docs where words i and j co-occur] and use something like PCA on it.
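A toy sketch of that second suggestion (the corpus is made up; scikit-learn is used only for convenience):
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
docs = ["the car drove fast", "the automobile was fast", "dogs bark loudly"]  # toy corpus
X = CountVectorizer(binary=True).fit_transform(docs)   # docs x words, 1 if the word occurs in the doc
cooc = X.T @ X               # words x words: cell (i, j) = number of docs where i and j co-occur
word_vectors = TruncatedSVD(n_components=2).fit_transform(cooc)
# Words that co-occur with similar sets of words (e.g. 'car' and 'automobile' both co-occurring
# with 'fast') end up with similar low-dimensional vectors, even if they never co-occur directly --
# which is the "magic" behind LSI/LSA finding car ~ automobile.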