How can I reduce verbosity in the Julia package for XGBoost (XGBoost.jl)?
I have set print_every_n => Int(0) and verbosity=0, but I still get the info for every boosting step. I use package version 2.2.1 on Julia v1.8.2. There is a newer package version and I cannot upgrade at the moment, but I doubt this is an issue of the package version.
The following is an example. It always prints train-rmse for each iteration (of the 50 in this example). How can I suppress this?
Thank you!
using XGBoost
nobs = Int(10000)
num_feat = Int(20)
T = Float64  # element type for the random data
x_train = rand(T, nobs, num_feat)
y_train = rand(T, size(x_train, 1))
params_xgb = Dict(
:max_depth => Int(2),
:eta => 0.01,
:objective => "reg:squarederror",
:print_every_n => Int(0)
)
dtrain = DMatrix(x_train, y_train .- 1)
@time m_xgb = xgboost(dtrain, num_round=50, verbosity=0, param = params_xgb);
pred_xgb = XGBoost.predict(m_xgb, x_train);
size(pred_xgb)
xgboost(dtrain, num_round=50, verbosity=0, param = params_xgb, watchlist=(;));
see: https://github.com/dmlc/XGBoost.jl/issues/151
The above answer is correct; I've used watchlist=(;) with success. Initially I had similar confusion, until I realized that many code blocks on the internet are from an earlier version of XGBoost.jl. The keywords 'verbosity' and 'param' are no longer part of the package and are silently ignored, so I believe the model in the code block above is created with the default parameter values. My practice is to pass in the xgboost hyperparameter keywords individually. However, if one wishes to duplicate the param= functionality, the params_xgb dictionary can be converted to a named tuple, for example kw = (; params_xgb...), which can then be splatted into xgboost with ; kw... at the end of the parameter list.
I have a Microsoft API key, but I get errors if I try to run the example in R to test it:
library(translateR) # Loading the package
data(enron) # Loading the dataset to be translated
tc <- translateR::translate(dataset = enron,
                            content.field = 'email',
                            microsoft.api.key = '',
                            source.lang = 'en',
                            target.lang = 'de')
Rather than an additional column with the same text in German I get:
The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec().
Break on THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC() to debug.
Warning message:
In mclapply(to.translate, function(x) microsoftTranslate(x, microsoft.api.key, :
scheduled cores 1, 2 did not deliver results, all values of the jobs will be affected
Would be amazing if somebody could help!
I am following an Agents.jl tutorial (https://juliadynamics.github.io/Agents.jl/stable/examples/schelling/) and get the following error when executing this piece of code. Any ideas why?
using Agents
using InteractiveDynamics
using CairoMakie
groupColor(a) = a.group == 1 ? :blue : :green
groupMarker(a) = a.group == 1 ? :circle : :rect
fig, _ = abm_plot(model, ac = groupColor, am = groupMarker, as = 10)
# Note that abm_plot is a function from InteractiveDynamics.jl which uses Makie,
# and model is an AgentBasedModel object created with Agents.jl
#Out >
No backend available (GLMakie, CairoMakie, WGLMakie)!
Maybe you imported GLMakie but it didn't build correctly.
In that case, try `]build GLMakie` and watch out for any warnings.
If that's not the case, make sure to explicitely import any of the mentioned backends.
I had to update the InteractiveDynamics package from 0.14.6 to 0.15.1. The answer to this problem can be found in the following thread:
https://discourse.julialang.org/t/no-backend-available-glmakie-cairomakie-wglmakie/62984/6
While using regr.ranger I get an error message that says importance.mode must be one of "impurity" etc.
While using regr.rfsrc it says I should set 'importance' to one of 'TRUE' etc.
I just want to understand at what stage I should assign the value of 'importance'.
I get an error if I do it while creating the learner:
> lrnr_ranger = mlr_learners$get(key = "regr.ranger",importance="impurity")
Error in initialize(...) : unused argument (importance = "impurity")
or
> lrnr_ranger = mlr_learners$get(key = "regr.ranger",importance.mode="impurity")
Error in initialize(...) : unused argument (importance.mode = "impurity")
or should I try to set it using the param_set:
> lrnr_ranger$param_set$add(p = list("importance.mode","impurity"))
Error in .__ParamSet__add(self = self, private = private, super = super, :
Assertion on 'p' failed: Must inherit from class 'Param'/'ParamSet', but has class 'list'.
Any clue would be super helpful.
I'm not really reporting a problem but asking how to do something (hence, I believe, there is no need to create a reprex). I wish this were addressed in the mlr3 book or some documentation, but it is not.
This is explained on the learners page of the mlr3 book, in particular at the end:
lrn_ranger = lrn("regr.ranger", importance = "impurity")
I'm following this tutorial, which codes a sentiment analysis classifier using BERT with the huggingface library, and I'm seeing very odd behavior. When trying the BERT model with a sample text, I get strings instead of the hidden states. This is the code I'm using:
import transformers
from transformers import BertModel, BertTokenizer
print(transformers.__version__)
PRE_TRAINED_MODEL_NAME = 'bert-base-cased'
PATH_OF_CACHE = "/home/mwon/data-mwon/paperChega/src_classificador/data/hugingface"
tokenizer = BertTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME, cache_dir=PATH_OF_CACHE)
sample_txt = 'When was I last outside? I am stuck at home for 2 weeks.'
encoding_sample = tokenizer.encode_plus(
    sample_txt,
    max_length=32,
    add_special_tokens=True,  # Add '[CLS]' and '[SEP]'
    return_token_type_ids=False,
    padding=True,
    truncation=True,
    return_attention_mask=True,
    return_tensors='pt',  # Return PyTorch tensors
)
bert_model = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME, cache_dir=PATH_OF_CACHE)
last_hidden_state, pooled_output = bert_model(
    encoding_sample['input_ids'],
    encoding_sample['attention_mask']
)
print([last_hidden_state, pooled_output])
that outputs:
4.0.0
['last_hidden_state', 'pooler_output']
While the answer from Aakash provides a solution to the problem, it does not explain the issue. Since one of the 3.x releases of the transformers library, the models no longer return tuples but specific output objects:
o = bert_model(
    encoding_sample['input_ids'],
    encoding_sample['attention_mask']
)
print(type(o))
print(o.keys())
Output:
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions
odict_keys(['last_hidden_state', 'pooler_output'])
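For the tutorial's use case you therefore read the fields of the output object by attribute (or by key) instead of unpacking a tuple. A minimal sketch, reusing bert_model and encoding_sample from the question:
outputs = bert_model(
    encoding_sample['input_ids'],
    encoding_sample['attention_mask']
)
# The output object exposes the tensors by name:
last_hidden_state = outputs.last_hidden_state  # (batch, seq_len, hidden_size)
pooled_output = outputs.pooler_output          # (batch, hidden_size)
print(last_hidden_state.shape, pooled_output.shape)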
You can return to the previous behavior by adding return_dict=False to get a tuple:
o = bert_model(
    encoding_sample['input_ids'],
    encoding_sample['attention_mask'],
    return_dict=False
)
print(type(o))
Output:
<class 'tuple'>
I do not recommend that, because with the output object it is unambiguous which part of the output you are selecting, without having to turn to the documentation, as shown in the example below:
o = bert_model(encoding_sample['input_ids'], encoding_sample['attention_mask'], return_dict=False, output_attentions=True, output_hidden_states=True)
print('I am a tuple with {} elements. You do not know what each element represents without checking the documentation'.format(len(o)))
o = bert_model(encoding_sample['input_ids'], encoding_sample['attention_mask'], output_attentions=True, output_hidden_states=True)
print('I am a cool object and you can access my elements with o.last_hidden_state, o["last_hidden_state"] or even o[0]. My keys are: {}'.format(o.keys()))
Output:
I am a tuple with 4 elements. You do not know what each element represents without checking the documentation
I am a cool object and you can access my elements with o.last_hidden_state, o["last_hidden_state"] or even o[0]. My keys are: odict_keys(['last_hidden_state', 'pooler_output', 'hidden_states', 'attentions'])
I faced the same issue while learning how to implement BERT. I noticed that using
last_hidden_state, pooled_output = bert_model(encoding_sample['input_ids'], encoding_sample['attention_mask'])
is the issue. Use:
outputs = bert_model(encoding_sample['input_ids'], encoding_sample['attention_mask'])
and extract the last hidden state using
outputs[0]
You can refer to the BertModel documentation, which tells you what is returned by the model.
I am kind of new to Chainer and I have been struggling with a weird situation recently.
I have a Chain that computes a CNN, which I feed with a labeled dataset.
But no results appear when I use the extensions: when I display the observation value, it is empty. The loss is indeed calculated and the parameters are updated (at least they change), so I don't know where the connection problem is.
import chainer
from chainer import cuda, iterators, optimizers, training
from chainer.training import extensions

def convert(batch, device):
    return chainer.dataset.convert.concat_examples(batch, device, padding=0)

def print_obs(t):
    print("trainer.observation", trainer.observation)
    print("updater.loss", updater.loss_func)
    print("conv1", model.predictor.conv1.W[0][0])
    print("conv20", model.predictor.conv20.W[0][0])
model.predictor.train = True
model.predictor.finetune = False ####or True ??
cuda.get_device(0).use()
model.to_gpu()
optimizer = optimizers.MomentumSGD(lr=learning_rate, momentum=momentum)
optimizer.use_cleargrads()
optimizer.setup(model)
optimizer.add_hook(chainer.optimizer.WeightDecay(weight_decay))
train, test = imageNet_data.train_val_test()
train_iter = iterators.SerialIterator(train, batch_size)
test_iter = iterators.SerialIterator(test, batch_size, repeat=False, shuffle=False)
with chainer.using_config('debug', True):
    # Set up a trainer
    updater = training.StandardUpdater(train_iter, optimizer, loss_func=model, converter=convert)
    trainer = training.Trainer(updater, (10, 'epoch'), out="./backup/result")
    trainer.extend(print_obs, trigger=(3, 'iteration'))
    trainer.extend(extensions.LogReport())
    trainer.extend(extensions.PrintReport(
        ['epoch', 'main/loss', 'validation/main/loss',
         'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))
    trainer.run()
Maybe I am missing something completely obvious. Thank you for any remarks; they would help me a lot.
Chainer 4.1, Ubuntu 16
If you are using your own Link with the Trainer, you need to report metrics using chainer.report yourself.
See https://docs.chainer.org/en/stable/guides/report.html for instructions; a minimal sketch follows the example links below.
You can see some examples in the Chainer repository:
https://github.com/chainer/chainer/blob/v4.1.0/chainer/links/model/classifier.py#L116
https://github.com/chainer/chainer/blob/v4.1.0/examples/imagenet/alex.py#L40
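If your model is a plain Chain that only returns the loss, nothing is reported, which matches the empty observation you see. A minimal sketch along the lines of the Classifier link above (ReportingClassifier is an illustrative name, not part of Chainer; plug your own CNN Chain in as predictor):
import chainer
import chainer.functions as F
from chainer import reporter

class ReportingClassifier(chainer.Chain):
    """Wraps a predictor and reports metrics so the Trainer can log them."""
    def __init__(self, predictor):
        super(ReportingClassifier, self).__init__()
        with self.init_scope():
            self.predictor = predictor

    def __call__(self, x, t):
        y = self.predictor(x)
        loss = F.softmax_cross_entropy(y, t)
        accuracy = F.accuracy(y, t)
        # Without this call the trainer's observation stays empty;
        # the StandardUpdater prefixes these keys with 'main/'.
        reporter.report({'loss': loss, 'accuracy': accuracy}, self)
        return loss
Passing model = ReportingClassifier(your_cnn) to the optimizer and to the StandardUpdater as in your question would then populate main/loss and main/accuracy for LogReport and PrintReport.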