Dimensional error with a non-linguistic dataset as input to an LSTM-based encoder-decoder model using attention

I'm trying to implement an attention-based LSTM encoder-decoder model for multi-class classification. The dataset is non-linguistic in nature.
Characteristics of my dataset:
x_train.shape = (930, 5)
y_train.shape = (930, 3)
x_test.shape = (405, 5)
y_test.shape = (405, 3)
x_train.head()
val1 val2 val3 val4 val5
10000 00101 01000 10000 00110
10000 00101 01000 10000 00110
00010 01001 01001 01000 00110
00100 01000 01001 01000 00111
00101 01000 01001 01000 00110
Then I converted the dataframe values into a NumPy array:
x_tr = np.array(x_train)
array([['10000', '00101', '01000', '10000', '00110'],
['10000', '00101', '01000', '10000', '00110'],
['00010', '01001', '01001', '01000', '00110'],
...,
['01001', '00101', '01001', '01001', '00110'],
['00101', '01000', '01001', '01000', '00110'],
['00100', '01000', '01001', '01000', '00111']], dtype=object)
Then I reshaped the arrays into 3-D so that they can be given as input to the LSTM-based encoder-decoder model:
X_TR = np.reshape(x_tr, (930, 5, -1))
Y_TR = np.reshape(y_tr, (930, 3, -1))
X_TE = np.reshape(x_te, (405, 5, -1))
Y_TE = np.reshape(y_te, (405, 3, -1))
print(X_TR.shape, x_tr.shape)
(930, 5, 1) (930, 5)
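Note that this reshape only appends an axis of length 1, so each timestep still carries a single string value. A minimal sketch of how the 5-character binary strings could instead be split into five numeric features per timestep (my assumption about the encoding, so that the last axis matches input_dim = 5):
import numpy as np

# Split each 5-character string such as '10010' into five float features,
# giving shape (930, 5, 5): samples x timesteps x features.
X_TR = np.array([[[float(ch) for ch in cell] for cell in row] for row in x_tr],
                dtype=np.float32)
print(X_TR.shape)  # (930, 5, 5)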
Now I define a simple model, but I get the error pasted below the code:
def main():
    time_steps, input_dim, output_dim = 5, 5, 3
    model_input = Input(shape=(time_steps, input_dim))
    x = LSTM(64, return_sequences=True)(model_input)
    x = Attention(32)(x)
    x = Dense(1)(x)
    model = Model(model_input, x)
    model.compile(loss='mae', optimizer='adam')
    print(model.summary())
    model.fit(X_TR, Y_TR, epochs=10)

    # test save/reload model.
    pred1 = model.predict(X_TE)
    np.testing.assert_almost_equal(pred1, Y_TE)
    print('Success.')

if __name__ == '__main__':
    main()
The output is as follows:
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 5, 5)] 0
__________________________________________________________________________________________________
lstm (LSTM) (None, 5, 64) 17920 input_1[0][0]
__________________________________________________________________________________________________
last_hidden_state (Lambda) (None, 64) 0 lstm[0][0]
__________________________________________________________________________________________________
attention_score_vec (Dense) (None, 5, 64) 4096 lstm[0][0]
__________________________________________________________________________________________________
attention_score (Dot) (None, 5) 0 last_hidden_state[0][0]
attention_score_vec[0][0]
__________________________________________________________________________________________________
attention_weight (Activation) (None, 5) 0 attention_score[0][0]
__________________________________________________________________________________________________
context_vector (Dot) (None, 64) 0 lstm[0][0]
attention_weight[0][0]
__________________________________________________________________________________________________
attention_output (Concatenate) (None, 128) 0 context_vector[0][0]
last_hidden_state[0][0]
__________________________________________________________________________________________________
attention_vector (Dense) (None, 128) 16384 attention_output[0][0]
__________________________________________________________________________________________________
dense (Dense) (None, 1) 129 attention_vector[0][0]
==================================================================================================
Total params: 38,529
Trainable params: 38,529
Non-trainable params: 0
__________________________________________________________________________________________________
None
Epoch 1/10
WARNING:tensorflow:AutoGraph could not transform <function Model.make_train_function.<locals>.train_function at 0x000002813E5532F0> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: 'arguments' object has no attribute 'posonlyargs'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function Model.make_train_function.<locals>.train_function at 0x000002813E5532F0> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: 'arguments' object has no attribute 'posonlyargs'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:Model was constructed with shape (None, 5, 5) for input KerasTensor(type_spec=TensorSpec(shape=(None, 5, 5), dtype=tf.float32, name='input_1'), name='input_1', description="created by layer 'input_1'"), but it was called on an input with incompatible shape (None, 5, 1).
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\autograph\impl\api.py in converted_call(f, args, kwargs, caller_fn_scope, options)
446 program_ctx = converter.ProgramContext(options=options)
--> 447 converted_f = _convert_actual(target_entity, program_ctx)
448 if logging.has_verbosity(2):
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\autograph\impl\api.py in _convert_actual(entity, program_ctx)
283
--> 284 transformed, module, source_map = _TRANSPILER.transform(entity, program_ctx)
285
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\autograph\pyct\transpiler.py in transform(self, obj, user_context)
285 if inspect.isfunction(obj) or inspect.ismethod(obj):
--> 286 return self.transform_function(obj, user_context)
287
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\autograph\pyct\transpiler.py in transform_function(self, fn, user_context)
469 # TODO(mdan): Confusing overloading pattern. Fix.
--> 470 nodes, ctx = super(PyToPy, self).transform_function(fn, user_context)
471
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\autograph\pyct\transpiler.py in transform_function(self, fn, user_context)
362 node = self._erase_arg_defaults(node)
--> 363 result = self.transform_ast(node, context)
364
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\autograph\impl\api.py in transform_ast(self, node, ctx)
251 unsupported_features_checker.verify(node)
--> 252 node = self.initial_analysis(node, ctx)
253
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\autograph\impl\api.py in initial_analysis(self, node, ctx)
239 node = qual_names.resolve(node)
--> 240 node = activity.resolve(node, ctx, None)
241 node = reaching_definitions.resolve(node, ctx, graphs)
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\autograph\pyct\static_analysis\activity.py in resolve(node, context, parent_scope)
708 def resolve(node, context, parent_scope=None):
--> 709 return ActivityAnalyzer(context, parent_scope).visit(node)
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\autograph\pyct\transformer.py in visit(self, node)
444
--> 445 result = super(Base, self).visit(node)
446
G:\anaconda\envs\tensorflow_env\lib\ast.py in visit(self, node)
252 visitor = getattr(self, method, self.generic_visit)
--> 253 return visitor(node)
254
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\autograph\pyct\static_analysis\activity.py in visit_FunctionDef(self, node)
578 # Argument annotartions (includeing defaults) affect the defining context.
--> 579 node = self._visit_arg_annotations(node)
580
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\autograph\pyct\static_analysis\activity.py in _visit_arg_annotations(self, node)
554 self._track_annotations_only = True
--> 555 node = self._visit_arg_declarations(node)
556 self._track_annotations_only = False
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\autograph\pyct\static_analysis\activity.py in _visit_arg_declarations(self, node)
559 def _visit_arg_declarations(self, node):
--> 560 node.args.posonlyargs = self._visit_node_list(node.args.posonlyargs)
561 node.args.args = self._visit_node_list(node.args.args)
AttributeError: 'arguments' object has no attribute 'posonlyargs'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-6-8f7d95d574b4> in <module>
20
21 if __name__ == '__main__':
---> 22 main()
<ipython-input-6-8f7d95d574b4> in main()
8 model.compile(loss='mae', optimizer='adam')
9 print(model.summary())
---> 10 model.fit(X_TR, Y_TR, epochs=10)
11
12 # test save/reload model.
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1079 _r=1):
1080 callbacks.on_train_batch_begin(step)
-> 1081 tmp_logs = self.train_function(iterator)
1082 if data_handler.should_sync:
1083 context.async_wait()
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\eager\def_function.py in __call__(self, *args, **kwds)
826 tracing_count = self.experimental_get_tracing_count()
827 with trace.Trace(self._name) as tm:
--> 828 result = self._call(*args, **kwds)
829 compiler = "xla" if self._experimental_compile else "nonXla"
830 new_tracing_count = self.experimental_get_tracing_count()
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds)
869 # This is the first call of __call__, so we have to initialize.
870 initializers = []
--> 871 self._initialize(args, kwds, add_initializers_to=initializers)
872 finally:
873 # At this point we know that the initialization is complete (or less
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\eager\def_function.py in _initialize(self, args, kwds, add_initializers_to)
724 self._concrete_stateful_fn = (
725 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
--> 726 *args, **kwds))
727
728 def invalid_creator_scope(*unused_args, **unused_kwds):
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\eager\function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2974 args, kwargs = None, None
2975 with self._lock:
-> 2976 graph_function, _ = self._maybe_define_function(args, kwargs)
2977 return graph_function
2978
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\eager\function.py in _maybe_define_function(self, args, kwargs)
3369
3370 self._function_cache.missed.add(call_context_key)
-> 3371 graph_function = self._create_graph_function(args, kwargs)
3372 self._function_cache.primary[cache_key] = graph_function
3373
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\eager\function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3214 arg_names=arg_names,
3215 override_flat_arg_shapes=override_flat_arg_shapes,
-> 3216 capture_by_value=self._capture_by_value),
3217 self._function_attributes,
3218 function_spec=self.function_spec,
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\framework\func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
988 _, original_func = tf_decorator.unwrap(python_func)
989
--> 990 func_outputs = python_func(*func_args, **func_kwargs)
991
992 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\eager\def_function.py in wrapped_fn(*args, **kwds)
632 xla_context.Exit()
633 else:
--> 634 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
635 return out
636
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\framework\func_graph.py in wrapper(*args, **kwargs)
971 recursive=True,
972 optional_features=autograph_options,
--> 973 user_requested=True,
974 ))
975 except Exception as e: # pylint:disable=broad-except
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\autograph\impl\api.py in converted_call(f, args, kwargs, caller_fn_scope, options)
452 if is_autograph_strict_conversion_mode():
453 raise
--> 454 return _fall_back_unconverted(f, args, kwargs, options, e)
455
456 with StackTraceMapper(converted_f), tf_stack.CurrentModuleFilter():
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\autograph\impl\api.py in _fall_back_unconverted(f, args, kwargs, options, exc)
499 logging.warn(warning_template, f, file_bug_message, exc)
500
--> 501 return _call_unconverted(f, args, kwargs, options)
502
503
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\autograph\impl\api.py in _call_unconverted(f, args, kwargs, options, update_cache)
476
477 if kwargs is not None:
--> 478 return f(*args, **kwargs)
479 return f(*args)
480
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\keras\engine\training.py in train_function(iterator)
788 def train_function(iterator):
789 """Runs a training execution with one step."""
--> 790 return step_function(self, iterator)
791
792 else:
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\keras\engine\training.py in step_function(model, iterator)
778
779 data = next(iterator)
--> 780 outputs = model.distribute_strategy.run(run_step, args=(data,))
781 outputs = reduce_per_replica(
782 outputs, self.distribute_strategy, reduction='first')
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\distribute\distribute_lib.py in run(***failed resolving arguments***)
1266 fn = autograph.tf_convert(
1267 fn, autograph_ctx.control_status_ctx(), convert_by_default=False)
-> 1268 return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
1269
1270 def reduce(self, reduce_op, value, axis):
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\distribute\distribute_lib.py in call_for_each_replica(self, fn, args, kwargs)
2732 kwargs = {}
2733 with self._container_strategy().scope():
-> 2734 return self._call_for_each_replica(fn, args, kwargs)
2735
2736 def _call_for_each_replica(self, fn, args, kwargs):
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\distribute\distribute_lib.py in _call_for_each_replica(self, fn, args, kwargs)
3353 def _call_for_each_replica(self, fn, args, kwargs):
3354 with ReplicaContext(self._container_strategy(), replica_id_in_sync_group=0):
-> 3355 return fn(*args, **kwargs)
3356
3357 def _reduce_to(self, reduce_op, value, destinations, experimental_hints):
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\autograph\impl\api.py in wrapper(*args, **kwargs)
665 try:
666 with conversion_ctx:
--> 667 return converted_call(f, args, kwargs, options=options)
668 except Exception as e: # pylint:disable=broad-except
669 if hasattr(e, 'ag_error_metadata'):
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\autograph\impl\api.py in converted_call(f, args, kwargs, caller_fn_scope, options)
394
395 if not options.user_requested and conversion.is_allowlisted(f):
--> 396 return _call_unconverted(f, args, kwargs, options)
397
398 # internal_convert_user_code is for example turned off when issuing a dynamic
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\autograph\impl\api.py in _call_unconverted(f, args, kwargs, options, update_cache)
476
477 if kwargs is not None:
--> 478 return f(*args, **kwargs)
479 return f(*args)
480
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\keras\engine\training.py in run_step(data)
771
772 def run_step(data):
--> 773 outputs = model.train_step(data)
774 # Ensure counter is updated only if `train_step` succeeds.
775 with ops.control_dependencies(_minimum_control_deps(outputs)):
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\keras\engine\training.py in train_step(self, data)
737
738 with backprop.GradientTape() as tape:
--> 739 y_pred = self(x, training=True)
740 loss = self.compiled_loss(
741 y, y_pred, sample_weight, regularization_losses=self.losses)
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in __call__(self, *args, **kwargs)
1001 with autocast_variable.enable_auto_cast_variables(
1002 self._compute_dtype_object):
-> 1003 outputs = call_fn(inputs, *args, **kwargs)
1004
1005 if self._activity_regularizer:
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\keras\engine\functional.py in call(self, inputs, training, mask)
423 """
424 return self._run_internal_graph(
--> 425 inputs, training=training, mask=mask)
426
427 def compute_output_shape(self, input_shape):
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\keras\engine\functional.py in _run_internal_graph(self, inputs, training, mask)
558
559 args, kwargs = node.map_arguments(tensor_dict)
--> 560 outputs = node.layer(*args, **kwargs)
561
562 # Update tensor_dict.
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\keras\layers\recurrent.py in __call__(self, inputs, initial_state, constants, **kwargs)
658
659 if initial_state is None and constants is None:
--> 660 return super(RNN, self).__call__(inputs, **kwargs)
661
662 # If any of `initial_state` or `constants` are specified and are Keras
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in __call__(self, *args, **kwargs)
987 inputs = self._maybe_cast_inputs(inputs, input_list)
988
--> 989 input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
990 if eager:
991 call_fn = self.call
G:\anaconda\envs\tensorflow_env\lib\site-packages\tensorflow\python\keras\engine\input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
272 ' is incompatible with layer ' + layer_name +
273 ': expected shape=' + str(spec.shape) +
--> 274 ', found shape=' + display_shape(x.shape))
275
276
ValueError: Input 0 is incompatible with layer lstm: expected shape=(None, None, 5), found shape=(None, 5, 1)
I don't understand the error here.
Help would be highly appreciated.
Thanks

Related

ERROR: vars() argument must have __dict__ attribute when trying to use trainer.train() on custom HF dataset?

I have the following model that I am trying to fine-tune (CLIP_ViT + classification head). Here’s my model definition:
class CLIPNN(nn.Module):
    def __init__(self, num_labels, pretrained_name="openai/clip-vit-base-patch32", dropout=0.1):
        super().__init__()
        self.num_labels = num_labels
        # load pre-trained transformer & processor
        self.transformer = CLIPVisionModel.from_pretrained(pretrained_name)
        self.processor = CLIPProcessor.from_pretrained(pretrained_name)
        # initialize other layers (head after the transformer body)
        self.classifier = nn.Sequential(
            nn.Linear(512, 128, bias=True),
            nn.ReLU(inplace=True),
            nn.Dropout(p=dropout, inplace=False),
            nn.Linear(128, self.num_labels, bias=True))

    def forward(self, inputs, labels=None, **kwargs):
        logits = self.classifier(inputs)
        loss = None
        if labels is not None:
            loss_fct = nn.CrossEntropyLoss()
            loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
        return SequenceClassifierOutput(
            loss=loss,
            logits=logits,
        )
I also have the following definition for a dataset:
class CLIPDataset(torch.utils.data.Dataset):
    def __init__(self, embeddings, labels):
        self.embeddings = embeddings
        self.labels = labels

    def __getitem__(self, idx):
        item = {"embeddings": torch.tensor(self.embeddings[idx])}
        item['labels'] = torch.LongTensor([self.labels[idx]])
        return item

    def __len__(self):
        return len(self.labels)
Note: here I am assuming that the model is fed pre-computed embeddings and does not compute embeddings, I know this is not the right logic if I want to fine-tune the CLIP base model, I am just trying to get my code to work.
Something like this throws an error:
model = CLIPNN(num_labels=2)
train_data = CLIPDataset(train_data, y_train)
test_data = CLIPDataset(test_data, y_test)

trainer = Trainer(
    model=model, args=training_args, train_dataset=train_data, eval_dataset=test_data
)
trainer.train()
TypeError                                 Traceback (most recent call last)
in <module>
----> 1 trainer.train()

~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1256             self.control = self.callback_handler.on_epoch_begin(args, self.state, self.control)
   1257
-> 1258             for step, inputs in enumerate(epoch_iterator):
   1259
   1260                 # Skip past any already trained steps if resuming training

~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
    515         if self._sampler_iter is None:
    516             self._reset()
--> 517         data = self._next_data()
    518         self._num_yielded += 1
    519         if self._dataset_kind == _DatasetKind.Iterable and \

~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)
    555     def _next_data(self):
    556         index = self._next_index()  # may raise StopIteration
--> 557         data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
    558         if self._pin_memory:
    559             data = _utils.pin_memory.pin_memory(data)

~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
     45         else:
     46             data = self.dataset[possibly_batched_index]
---> 47         return self.collate_fn(data)

~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/data/data_collator.py in default_data_collator(features, return_tensors)
     64
     65     if return_tensors == "pt":
---> 66         return torch_default_data_collator(features)
     67     elif return_tensors == "tf":
     68         return tf_default_data_collator(features)

~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/data/data_collator.py in torch_default_data_collator(features)
     80
     81     if not isinstance(features[0], (dict, BatchEncoding)):
---> 82         features = [vars(f) for f in features]
     83     first = features[0]
     84     batch = {}

~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/data/data_collator.py in <listcomp>(.0)
     80
     81     if not isinstance(features[0], (dict, BatchEncoding)):
---> 82         features = [vars(f) for f in features]
     83     first = features[0]
     84     batch = {}

TypeError: vars() argument must have __dict__ attribute
Any idea what I'm doing wrong?
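From the last frame of the traceback, default_data_collator falls back to vars(f) only when a batch item is not a dict (or a BatchEncoding). A minimal diagnostic sketch (not part of my training code) to confirm what the Dataset actually yields:
item = train_data[0]
print(type(item))                             # expected: dict
print({k: type(v) for k, v in item.items()})  # values should be tensors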

Can't run "Multivariate forecasting" in Pyro tutorial

I am just trying to run the sample program at https://pyro.ai/examples/forecast_simple.html.
It runs until it fails with "RuntimeError: torch.linalg.cholesky: For batch 4284: U(2,2) is zero, singular U." Every time I run the code it stops at the same location, batch 4284.
Can anyone tell me how to fix it?
I am using the versions below:
Python 3.9.1
pyro-api 0.1.2
pyro-ppl 1.7.0
torch 1.9.0
Windows 10 Pro 64-bit 20H2
VSCode 1.60.0
INFO step 0 loss = 7.356
INFO step 50 loss = 1.87751
INFO step 100 loss = 1.55338
INFO step 150 loss = 1.40953
INFO step 200 loss = 1.31982
INFO step 250 loss = 1.2017
INFO step 300 loss = 1.1389
INFO step 350 loss = 1.10407
INFO step 400 loss = 1.07474
INFO step 450 loss = 1.06728
INFO step 500 loss = 1.0285
DEBUG crps = 0.59017
DEBUG mae = 0.866027
DEBUG num_samples = 100
DEBUG rmse = 1.02721
DEBUG seed = 1.23457e+09
DEBUG t0 = 0
DEBUG t1 = 2160
DEBUG t2 = 2496
DEBUG test_walltime = 0.411458
DEBUG train_walltime = 28.8177
DEBUG AutoNormal.locs.obs_corr = -1.62159
DEBUG AutoNormal.locs.trans_corr = 2.49729
DEBUG AutoNormal.locs.trans_loc = 0.904184
DEBUG AutoNormal.scales.obs_corr = 0.207397
DEBUG AutoNormal.scales.trans_corr = 0.0915508
DEBUG AutoNormal.scales.trans_loc = 0.0111603
INFO Training on window [168:2328], testing on window [2328:2664]
INFO step 0 loss = 7.37245
INFO step 50 loss = 1.87162
:
:
:
DEBUG crps = 0.62036
DEBUG mae = 0.907584
DEBUG num_samples = 100
DEBUG rmse = 1.08631
DEBUG seed = 1.23457e+09
DEBUG t0 = 1512
DEBUG t1 = 3672
DEBUG t2 = 4008
DEBUG test_walltime = 0.404958
DEBUG train_walltime = 26.7937
DEBUG AutoNormal.locs.obs_corr = -0.889496
DEBUG AutoNormal.locs.trans_corr = 1.85566
DEBUG AutoNormal.locs.trans_loc = 0.903074
DEBUG AutoNormal.scales.obs_corr = 0.247679
DEBUG AutoNormal.scales.trans_corr = 0.0577488
DEBUG AutoNormal.scales.trans_loc = 0.012068
INFO Training on window [1680:3840], testing on window [3840:4176]
INFO step 0 loss = 7.48406
INFO step 50 loss = 1.92277
INFO step 100 loss = 1.58563
INFO step 150 loss = 1.52081
INFO step 200 loss = 1.44076
INFO step 250 loss = 1.38033
INFO step 300 loss = 1.29202
INFO step 350 loss = 1.26101
INFO step 400 loss = 1.23141
INFO step 450 loss = 1.23901
INFO step 500 loss = 1.21247
RuntimeError: torch.linalg.cholesky: For batch 4284: U(2,2) is zero, singular U.
RuntimeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_16928/3907557438.py in <module>
15
16 args = parser.parse_args()
---> 17 main(args)
~\AppData\Local\Temp/ipykernel_16928/4270697941.py in main(args)
24 }
25
---> 26 metrics = backtest(
27 data,
28 covariates,
c:\Users\9033113\venv\lib\site-packages\pyro\contrib\forecast\evaluate.py in backtest(data, covariates, model_fn, forecaster_fn, metrics, transform, train_window, min_train_window, test_window, min_test_window, stride, seed, num_samples, batch_size, forecaster_options)
199 while True:
200 try:
--> 201 pred = forecaster(
202 train_data,
203 test_covariates,
c:\Users\9033113\venv\lib\site-packages\pyro\contrib\forecast\forecaster.py in __call__(self, data, covariates, num_samples, batch_size)
359 :rtype: ~torch.Tensor
360 """
--> 361 return super().__call__(data, covariates, num_samples, batch_size)
362
363 def forward(self, data, covariates, num_samples, batch_size=None):
c:\Users\9033113\venv\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or
_global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
c:\Users\9033113\venv\lib\site-packages\pyro\contrib\forecast\forecaster.py in forward(self, data, covariates, num_samples, batch_size)
388 stack.enter_context(poutine.replay(trace=tr.trace))
389 with pyro.plate("particles", num_samples, dim=dim):
--> 390 return self.model(data, covariates)
391
392
c:\Users\9033113\venv\lib\site-packages\pyro\nn\module.py in __call__(self, *args, **kwargs)
424 def __call__(self, *args, **kwargs):
425 with self._pyro_context:
--> 426 return super().__call__(*args, **kwargs)
427
428 def __getattr__(self, name):
c:\Users\9033113\venv\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or
_global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
c:\Users\9033113\venv\lib\site-packages\pyro\contrib\forecast\forecaster.py in forward(self, data, covariates)
183 self._forecast = None
184
--> 185 self.model(zero_data, covariates)
186
187 assert self._forecast is not None, ".predict() was not called by .model()"
~\AppData\Local\Temp/ipykernel_16928/1541431941.py in model(self, zero_data, covariates)
76
77 # The final statement registers our noise model and prediction.
---> 78 self.predict(noise_model, prediction)
c:\Users\9033113\venv\lib\site-packages\pyro\contrib\forecast\forecaster.py in predict(self, noise_dist, prediction)
155 # PrefixConditionMessenger is handled outside of the .model() call.
156 self._prefix_condition_data["residual"] = data - left_pred
--> 157 noise = pyro.sample("residual", noise_dist)
158 del self._prefix_condition_data["residual"]
159
c:\Users\9033113\venv\lib\site-packages\pyro\primitives.py in sample(name, fn, *args, **kwargs)
162 }
163 # apply the stack and return its return value
--> 164 apply_stack(msg)
165 return msg["value"]
166
c:\Users\9033113\venv\lib\site-packages\pyro\poutine\runtime.py in apply_stack(initial_msg)
215 break
216
--> 217 default_process_message(msg)
218
219 for frame in stack[-pointer:]:
c:\Users\9033113\venv\lib\site-packages\pyro\poutine\runtime.py in default_process_message(msg)
176 return msg
177
--> 178 msg["value"] = msg["fn"](*msg["args"], **msg["kwargs"])
179
180 # after fn has been called, update msg to prevent it from being called again.
c:\Users\9033113\venv\lib\site-packages\pyro\distributions\torch_distribution.py in __call__(self, sample_shape)
46 """
47 return (
---> 48 self.rsample(sample_shape)
49 if self.has_rsample
50 else self.sample(sample_shape)
c:\Users\9033113\venv\lib\site-packages\pyro\distributions\hmm.py in rsample(self, sample_shape)
582 )
583 trans = trans.expand(trans.batch_shape[:-1] + (self.duration,))
--> 584 z = _sequential_gaussian_filter_sample(self._init, trans, sample_shape)
585 x = self._obs.left_condition(z).rsample()
586 return x
c:\Users\9033113\venv\lib\site-packages\pyro\distributions\hmm.py in _sequential_gaussian_filter_sample(init, trans, sample_shape)
142 joint = (x + y).event_permute(perm)
143 tape.append(joint)
--> 144 contracted = joint.marginalize(left=state_dim)
145 if time > even_time:
146 contracted = Gaussian.cat((contracted, gaussian[..., -1:]), dim=-1)
c:\Users\9033113\venv\lib\site-packages\pyro\ops\gaussian.py in marginalize(self, left, right)
242 P_ba = self.precision[..., b, a]
243 P_bb = self.precision[..., b, b]
--> 244 P_b = cholesky(P_bb)
245 P_a = triangular_solve(P_ba, P_b, upper=False)
246 P_at = P_a.transpose(-1, -2)
c:\Users\9033113\venv\lib\site-packages\pyro\ops\tensor_utils.py in cholesky(x)
398 if x.size(-1) == 1:
399 return x.sqrt()
--> 400 return torch.linalg.cholesky(x)
401
402
RuntimeError: torch.linalg.cholesky: For batch 4284: U(2,2) is zero, singular U.

Fastai v2 dataset has no show_batch method

I am having trouble with my DataBlock not having a show_batch method when customising it to my own use case.
I am trying to port some of my code from fastai v1 to v2, working through the DataBlock tutorial: https://docs.fast.ai/tutorial.datablock.html
My DataBlock and Datasets:
dblock = DataBlock(get_items = get_image_files,
                   get_y = parent_label,
                   splitter = RandomSplitter())
dsets = dblock.datasets("PlantVillage-Dataset/raw/color/")
dsets.train[0]  # this works
The error I get when I try dsets.show_batch():
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-56-5a2f74730596> in <module>
----> 1 dsets.show_batch()
~/.pyenv/versions/3.7.8/envs/fastai/lib/python3.7/site-packages/fastai/data/core.py in __getattr__(self, k)
315 return res if is_indexer(it) else list(zip(*res))
316
--> 317 def __getattr__(self,k): return gather_attrs(self, k, 'tls')
318 def __dir__(self): return super().__dir__() + gather_attr_names(self, 'tls')
319 def __len__(self): return len(self.tls[0])
~/.pyenv/versions/3.7.8/envs/fastai/lib/python3.7/site-packages/fastcore/transform.py in gather_attrs(o, k, nm)
163 att = getattr(o,nm)
164 res = [t for t in att.attrgot(k) if t is not None]
--> 165 if not res: raise AttributeError(k)
166 return res[0] if len(res)==1 else L(res)
167
AttributeError: show_batch
After initialising the DataBlock I needed to construct a DataLoaders object; show_batch is defined on the DataLoaders, not on the Datasets:
dls = dblock.dataloaders(path)
dls.show_batch()

Pystan, Runtime error - Initialization failed

I'm trying to develop a Bayesian model using PyStan. I can compile the model successfully, but when I sample I get a runtime error. Refer to the code below:
my_code = '''
data {
    int N;
    int K1;
    int K2;
    real max_intercept;
    matrix[N, K1] X1;
    matrix[N, K2] X2;
    vector[N] y;
}
parameters {
    vector<lower=0>[K1] beta1;
    vector[K2] beta2;
    real<lower=0, upper=max_intercept> alpha;
    real<lower=0> noise_var;
}
model {
    beta1 ~ normal(0, 1);
    beta2 ~ normal(0, 1);
    noise_var ~ inv_gamma(0.05, 0.05 * 0.01);
    y ~ normal(X1*beta1 + X2*beta2 + alpha, sqrt(noise_var));
}
'''
fit1 = sm1.sampling(data=input_data, iter=2000, chains=4, init=0.5, n_jobs=-1)  # getting an error here
I have checked all the data points (no missing values and no constant columns) and their data types (all float64). I also scaled the data using MinMaxScaler:
input_data = {
    'N': len(data_scaled),       # 836
    'K1': len(pos_var),          # 17
    'K2': len(pos_neg_var),      # 29
    'X1': X1,                    # (836, 17)
    'X2': X2,                    # (836, 29)
    'y': data['orders'].values,
    'max_intercept': min(data['orders'])  # 0
}
Below is the error I'm getting.
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\Users\abc\.conda\envs\stan_env\lib\multiprocessing\pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "C:\Users\abc\.conda\envs\stan_env\lib\multiprocessing\pool.py", line 44, in mapstar
return list(map(*args))
File "stanfit4anon_model_a396b59aabedfaa132f3a814776a219f_7619586994410633893.pyx", line 371, in stanfit4anon_model_a396b59aabedfaa132f3a814776a219f_7619586994410633893._call_sampler_star
File "stanfit4anon_model_a396b59aabedfaa132f3a814776a219f_7619586994410633893.pyx", line 404, in stanfit4anon_model_a396b59aabedfaa132f3a814776a219f_7619586994410633893._call_sampler
RuntimeError: Initialization failed.
"""
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
<timed exec> in <module>
~\.conda\envs\stan_env\lib\site-packages\pystan\model.py in sampling(self, data, pars, chains, iter, warmup, thin, seed, init, sample_file, diagnostic_file, verbose, algorithm, control, n_jobs, **kwargs)
776 call_sampler_args = izip(itertools.repeat(data), args_list, itertools.repeat(pars))
777 call_sampler_star = self.module._call_sampler_star
--> 778 ret_and_samples = _map_parallel(call_sampler_star, call_sampler_args, n_jobs)
779 samples = [smpl for _, smpl in ret_and_samples]
780
~\.conda\envs\stan_env\lib\site-packages\pystan\model.py in _map_parallel(function, args, n_jobs)
83 try:
84 pool = multiprocessing.Pool(processes=n_jobs)
---> 85 map_result = pool.map(function, args)
86 finally:
87 pool.close()
~\.conda\envs\stan_env\lib\multiprocessing\pool.py in map(self, func, iterable, chunksize)
266 in a list that is returned.
267 '''
--> 268 return self._map_async(func, iterable, mapstar, chunksize).get()
269
270 def starmap(self, func, iterable, chunksize=None):
~\.conda\envs\stan_env\lib\multiprocessing\pool.py in get(self, timeout)
655 return self._value
656 else:
--> 657 raise self._value
658
659 def _set(self, i, obj):
RuntimeError: Initialization failed.
I'm relatively new to Pystan. I appreciate any guidance I get here.
I fixed the issue! This runtime error generally occurs when the data does not meet the constraints defined in the model,
for instance X containing negative values when the model declares X > 0.
Another common mistake is the Y values being off. In my data a few Y values were 0; they passed the missing-value and positive-value checks. Imputing those values with the mean of Y resolved the problem.
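A minimal sketch of that imputation step (my own reconstruction; data and input_data are the objects from the snippets above):
y = data['orders'].values.astype(float)
zero_mask = (y == 0)
# Replace the few zero observations with the mean of the non-zero ones,
# so they no longer violate the model's constraints at initialization.
y[zero_mask] = y[~zero_mask].mean()
input_data['y'] = y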
Happy learning!

Viewing lowered version of Julia code that's not in a function

I was reading this question about performance involving global variables, and wanted to see how exactly Julia translates this code. I realized then that code_warntype can't be used in the usual way here since the code isn't in a function, and wrapping it in a function would defeat the whole point of the exercise.
Is there an analogue or another version of @code_warntype that will take some Julia code directly (e.g. as a filename, or as a plain string) and show us the typed lowered version of that code? Or perhaps a command-line flag that outputs lowered code? (There seem to be flags for generating LLVM code or object files, but none for outputting code that's just type-inferred and lowered.)
You can always wrap the code in question in a function and call @code_warntype on that.
Here's an example for the linked question:
n = 5000
x = rand(n)
y = rand(n)
mn = ones(n) * 1000

function foo()
    for i in 1:n
        for j in 1:n
            c = abs(x[j] - y[i])
            if c < mn[i]
                mn[i] = c
            end
        end
    end
end
julia> @code_warntype foo()
Variables:
#temp##_2::Any
i::Any
#temp##_4::Any
j::Any
c::Any
Body:
begin
Core.SSAValue(0) = (1:Main.n)::Any
#temp##_2::Any = (Base.start)(Core.SSAValue(0))::Any
3:
Core.SSAValue(1) = (Base.done)(Core.SSAValue(0), #temp##_2::Any)::Any
Core.SSAValue(2) = (Core.typeassert)(Core.SSAValue(1), Core.Bool)::Bool
Core.SSAValue(3) = (Base.not_int)(Core.SSAValue(2))::Bool
unless Core.SSAValue(3) goto 37
Core.SSAValue(4) = (Base.next)(Core.SSAValue(0), #temp##_2::Any)::Any
i::Any = (Core.getfield)(Core.SSAValue(4), 1)::Any
#temp##_2::Any = (Core.getfield)(Core.SSAValue(4), 2)::Any
#= line 2 =#
Core.SSAValue(5) = (1:Main.n)::Any
#temp##_4::Any = (Base.start)(Core.SSAValue(5))::Any
14:
Core.SSAValue(6) = (Base.done)(Core.SSAValue(5), #temp##_4::Any)::Any
Core.SSAValue(7) = (Core.typeassert)(Core.SSAValue(6), Core.Bool)::Bool
Core.SSAValue(8) = (Base.not_int)(Core.SSAValue(7))::Bool
unless Core.SSAValue(8) goto 35
Core.SSAValue(9) = (Base.next)(Core.SSAValue(5), #temp##_4::Any)::Any
j::Any = (Core.getfield)(Core.SSAValue(9), 1)::Any
#temp##_4::Any = (Core.getfield)(Core.SSAValue(9), 2)::Any
#= line 3 =#
Core.SSAValue(10) = (Base.getindex)(Main.x, j::Any)::Any
Core.SSAValue(11) = (Base.getindex)(Main.y, i::Any)::Any
Core.SSAValue(12) = (Core.SSAValue(10) - Core.SSAValue(11))::Any
c::Any = (Main.abs)(Core.SSAValue(12))::Any
#= line 5 =#
Core.SSAValue(14) = (Base.getindex)(Main.mn, i::Any)::Any
Core.SSAValue(15) = (c::Any < Core.SSAValue(14))::Any
unless Core.SSAValue(15) goto 33
#= line 6 =#
(Base.setindex!)(Main.mn, c::Any, i::Any)::Any
33:
goto 14
35:
goto 3
37:
return
end::Nothing
Compare the above with
function foo()
    n = 5000
    x = rand(n)
    y = rand(n)
    mn = ones(n) * 1000
    for i in 1:n
        for j in 1:n
            c = abs(x[j] - y[i])
            if c < mn[i]
                mn[i] = c
            end
        end
    end
end
julia> @code_warntype foo()
Variables:
n<optimized out>
x<optimized out>
y<optimized out>
mn::Array{Float64,1}
#temp##_6::Int64
i<optimized out>
#temp##_8::Int64
j<optimized out>
c<optimized out>
#temp##_13::Bool
#temp##_16::Bool
Body:
begin
#= line 3 =#
# meta: location /buildworker/worker/package_linux64/build/usr/share/julia/site/v0.7/Random/src/Random.jl rand 224
# meta: location /buildworker/worker/package_linux64/build/usr/share/julia/site/v0.7/Random/src/Random.jl rand 236
# meta: location /buildworker/worker/package_linux64/build/usr/share/julia/site/v0.7/Random/src/Random.jl rand 235
# meta: location boot.jl Type 397
# meta: location boot.jl Type 389
# meta: location boot.jl Type 380
Core.SSAValue(46) = $(Expr(:foreigncall, :(:jl_alloc_array_1d), Array{Float64,1}, svec(Any, Int64), :(:ccall), 2, Array{Float64,1}, 5000, 5000))
# meta: pop locations (3)
# meta: location /buildworker/worker/package_linux64/build/usr/share/julia/site/v0.7/Random/src/Random.jl rand! 214
# meta: location /buildworker/worker/package_linux64/build/usr/share/julia/site/v0.7/Random/src/RNGs.jl rand! 447
# meta: location array.jl length 137
Core.SSAValue(53) = (Base.arraylen)(Core.SSAValue(46))::Int64
# meta: pop location
# meta: location /buildworker/worker/package_linux64/build/usr/share/julia/site/v0.7/Random/src/RNGs.jl _rand! 440
# meta: location int.jl * 54
Core.SSAValue(66) = (Base.mul_int)(8, Core.SSAValue(53))::Int64
# meta: pop location
# meta: location array.jl length 137
Core.SSAValue(67) = (Base.arraylen)(Core.SSAValue(46))::Int64
# meta: pop location
# meta: location int.jl * 54
Core.SSAValue(68) = (Base.mul_int)(8, Core.SSAValue(67))::Int64
# meta: pop location
# meta: location int.jl <= 419
Core.SSAValue(69) = (Base.sle_int)(Core.SSAValue(66), Core.SSAValue(68))::Bool
# meta: pop location
unless Core.SSAValue(69) goto 31
#temp##_13::Bool = true
goto 33
31:
#temp##_13::Bool = false
33:
unless #temp##_13::Bool goto 36
goto 41
36:
# meta: location boot.jl Type 281
Core.SSAValue(71) = $(Expr(:new, :(Core.AssertionError), "sizeof(Float64) * n64 <= sizeof(T) * length(A) && isbits(T)"))
# meta: pop location
(Base.throw)(Core.SSAValue(71))::Union{}
41:
#= line 441 =#
# meta: location gcutils.jl
#= line 81 =#
Core.SSAValue(61) = $(Expr(:gc_preserve_begin, Core.SSAValue(46)))
#= line 82 =#
# meta: location abstractarray.jl pointer 911
# meta: location pointer.jl unsafe_convert 65
Core.SSAValue(74) = $(Expr(:foreigncall, :(:jl_array_ptr), Ptr{Float64}, svec(Any), :(:ccall), 1, Core.SSAValue(46)))
# meta: pop locations (2)
# meta: location /buildworker/worker/package_linux64/build/usr/share/julia/site/v0.7/Random/src/RNGs.jl Type 365
Core.SSAValue(79) = $(Expr(:new, Random.UnsafeView{Float64}, Core.SSAValue(74), Core.SSAValue(53)))
# meta: pop location
$(Expr(:invoke, MethodInstance for rand!(::Random.MersenneTwister, ::Random.UnsafeView{Float64}, ::Random.SamplerTrivial{Random.CloseOpen01{Float64},Float64}), :(Random.rand!), :(Random.GLOBAL_RNG), Core.SSAValue(79), :($(QuoteNode(Random.SamplerTrivial{Random.CloseOpen01{Float64},Float64}(Random.CloseOpen01{Float64}()))))))::Random.UnsafeView{Float64}
#= line 83 =#
$(Expr(:gc_preserve_end, Core.SSAValue(61)))
# meta: pop locations (7)
#= line 4 =#
# meta: location /buildworker/worker/package_linux64/build/usr/share/julia/site/v0.7/Random/src/Random.jl rand 224
# meta: location /buildworker/worker/package_linux64/build/usr/share/julia/site/v0.7/Random/src/Random.jl rand 236
# meta: location /buildworker/worker/package_linux64/build/usr/share/julia/site/v0.7/Random/src/Random.jl rand 235
# meta: location boot.jl Type 397
# meta: location boot.jl Type 389
# meta: location boot.jl Type 380
Core.SSAValue(109) = $(Expr(:foreigncall, :(:jl_alloc_array_1d), Array{Float64,1}, svec(Any, Int64), :(:ccall), 2, Array{Float64,1}, 5000, 5000))
# meta: pop locations (3)
# meta: location /buildworker/worker/package_linux64/build/usr/share/julia/site/v0.7/Random/src/Random.jl rand! 214
# meta: location /buildworker/worker/package_linux64/build/usr/share/julia/site/v0.7/Random/src/RNGs.jl rand! 447
# meta: location array.jl length 137
Core.SSAValue(116) = (Base.arraylen)(Core.SSAValue(109))::Int64
# meta: pop location
# meta: location /buildworker/worker/package_linux64/build/usr/share/julia/site/v0.7/Random/src/RNGs.jl _rand! 440
# meta: location int.jl * 54
Core.SSAValue(129) = (Base.mul_int)(8, Core.SSAValue(116))::Int64
# meta: pop location
# meta: location array.jl length 137
Core.SSAValue(130) = (Base.arraylen)(Core.SSAValue(109))::Int64
# meta: pop location
# meta: location int.jl * 54
Core.SSAValue(131) = (Base.mul_int)(8, Core.SSAValue(130))::Int64
# meta: pop location
# meta: location int.jl <= 419
Core.SSAValue(132) = (Base.sle_int)(Core.SSAValue(129), Core.SSAValue(131))::Bool
# meta: pop location
unless Core.SSAValue(132) goto 88
#temp##_16::Bool = true
goto 90
88:
#temp##_16::Bool = false
90:
unless #temp##_16::Bool goto 93
goto 98
93:
# meta: location boot.jl Type 281
Core.SSAValue(134) = $(Expr(:new, :(Core.AssertionError), "sizeof(Float64) * n64 <= sizeof(T) * length(A) && isbits(T)"))
# meta: pop location
(Base.throw)(Core.SSAValue(134))::Union{}
98:
#= line 441 =#
# meta: location gcutils.jl
#= line 81 =#
Core.SSAValue(124) = $(Expr(:gc_preserve_begin, Core.SSAValue(109)))
#= line 82 =#
# meta: location abstractarray.jl pointer 911
# meta: location pointer.jl unsafe_convert 65
Core.SSAValue(137) = $(Expr(:foreigncall, :(:jl_array_ptr), Ptr{Float64}, svec(Any), :(:ccall), 1, Core.SSAValue(109)))
# meta: pop locations (2)
# meta: location /buildworker/worker/package_linux64/build/usr/share/julia/site/v0.7/Random/src/RNGs.jl Type 365
Core.SSAValue(142) = $(Expr(:new, Random.UnsafeView{Float64}, Core.SSAValue(137), Core.SSAValue(116)))
# meta: pop location
$(Expr(:invoke, MethodInstance for rand!(::Random.MersenneTwister, ::Random.UnsafeView{Float64}, ::Random.SamplerTrivial{Random.CloseOpen01{Float64},Float64}), :(Random.rand!), :(Random.GLOBAL_RNG), Core.SSAValue(142), :($(QuoteNode(Random.SamplerTrivial{Random.CloseOpen01{Float64},Float64}(Random.CloseOpen01{Float64}()))))))::Random.UnsafeView{Float64}
#= line 83 =#
$(Expr(:gc_preserve_end, Core.SSAValue(124)))
# meta: pop locations (7)
#= line 5 =#
# meta: location array.jl ones 398
# meta: location array.jl ones 396
# meta: location array.jl ones 395
# meta: location boot.jl Type 389
# meta: location boot.jl Type 380
Core.SSAValue(170) = $(Expr(:foreigncall, :(:jl_alloc_array_1d), Array{Float64,1}, svec(Any, Int64), :(:ccall), 2, Array{Float64,1}, 5000, 5000))
# meta: pop locations (2)
Core.SSAValue(151) = $(Expr(:invoke, MethodInstance for fill!(::Array{Float64,1}, ::Float64), :(Base.fill!), Core.SSAValue(170), 1.0))::Array{Float64,1}
# meta: pop locations (3)
mn::Array{Float64,1} = $(Expr(:invoke, MethodInstance for *(::Array{Float64,1}, ::Int64), :(Main.:*), Core.SSAValue(151), 1000))::Array{Float64,1}
#= line 6 =#
#temp##_6::Int64 = 1
128:
# meta: location range.jl done 457
# meta: location int.jl + 53
Core.SSAValue(197) = (Base.add_int)(5000, 1)::Int64
# meta: pop location
# meta: location promotion.jl == 433
Core.SSAValue(198) = (#temp##_6::Int64 === Core.SSAValue(197))::Bool
# meta: pop locations (2)
Core.SSAValue(4) = (Base.not_int)(Core.SSAValue(198))::Bool
unless Core.SSAValue(4) goto 193
# meta: location range.jl next 456
# meta: location int.jl + 53
Core.SSAValue(203) = (Base.add_int)(#temp##_6::Int64, 1)::Int64
# meta: pop location
Core.SSAValue(246) = #temp##_6::Int64
# meta: pop location
#temp##_6::Int64 = Core.SSAValue(203)
#= line 6 =#
#temp##_8::Int64 = 1
147:
# meta: location range.jl done 457
# meta: location int.jl + 53
Core.SSAValue(229) = (Base.add_int)(5000, 1)::Int64
# meta: pop location
# meta: location promotion.jl == 433
Core.SSAValue(230) = (#temp##_8::Int64 === Core.SSAValue(229))::Bool
# meta: pop locations (2)
Core.SSAValue(9) = (Base.not_int)(Core.SSAValue(230))::Bool
unless Core.SSAValue(9) goto 191
# meta: location range.jl next 456
# meta: location int.jl + 53
Core.SSAValue(235) = (Base.add_int)(#temp##_8::Int64, 1)::Int64
# meta: pop location
Core.SSAValue(245) = #temp##_8::Int64
# meta: pop location
#temp##_8::Int64 = Core.SSAValue(235)
#= line 7 =#
# meta: location array.jl getindex 661
Core.SSAValue(236) = (Base.arrayref)(true, Core.SSAValue(46), Core.SSAValue(245))::Float64
# meta: pop location
# meta: location array.jl getindex 661
Core.SSAValue(237) = (Base.arrayref)(true, Core.SSAValue(109), Core.SSAValue(246))::Float64
# meta: pop location
# meta: location float.jl - 395
Core.SSAValue(238) = (Base.sub_float)(Core.SSAValue(236), Core.SSAValue(237))::Float64
# meta: pop location
# meta: location float.jl abs 517
Core.SSAValue(239) = (Base.abs_float)(Core.SSAValue(238))::Float64
# meta: pop location
#= line 9 =#
# meta: location array.jl getindex 661
Core.SSAValue(240) = (Base.arrayref)(true, mn::Array{Float64,1}, Core.SSAValue(246))::Float64
# meta: pop location
# meta: location float.jl < 450
Core.SSAValue(241) = (Base.lt_float)(Core.SSAValue(239), Core.SSAValue(240))::Bool
# meta: pop location
unless Core.SSAValue(241) goto 189
#= line 10 =#
# meta: location array.jl setindex! 699
(Base.arrayset)(true, mn::Array{Float64,1}, Core.SSAValue(239), Core.SSAValue(246))::Array{Float64,1}
# meta: pop location
189:
goto 147
191:
goto 128
193:
return
end::Nothing
