How do I fix this error message? tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, ...) - graph execution error

The code below raises a "Graph execution error" in a facial emotion detection project. This line triggers it:
history = model3.fit(x = train_set, validation_data = validation_set, batch_size = 32, epochs = 20)
How do I fix it?
Epoch 1/20
---------------------------------------------------------------------------
UnimplementedError Traceback (most recent call last)
<ipython-input-30-3c911ce633d3> in <module>
1 #history = model3.fit(x=train_set, validation_data = validation_set, epochs = 35)
----> 2 history = model3.fit(x= train_set, validation_data = validation_set, batch_size = 32, epochs = 20)
1 frames
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
52 try:
53 ctx.ensure_initialized()
---> 54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
55 inputs, attrs, num_outputs)
56 except core._NotOkStatusException as e:
UnimplementedError: Graph execution error:
Detected at node 'sequential_4/conv2d_24/Relu' defined at (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.8/dist-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/usr/local/lib/python3.8/dist-packages/traitlets/config/application.py", line 846, in launch_instance
app.start()
File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelapp.py", line 612, in start
self.io_loop.start()
File "/usr/local/lib/python3.8/dist-packages/tornado/platform/asyncio.py", line 149, in start
self.asyncio_loop.run_forever()
File "/usr/lib/python3.8/asyncio/base_events.py", line 570, in run_forever
self._run_once()
File "/usr/lib/python3.8/asyncio/base_events.py", line 1859, in _run_once
handle._run()
File "/usr/lib/python3.8/asyncio/events.py", line 81, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py", line 690, in <lambda>
lambda f: self._run_callback(functools.partial(callback, future))
File "/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py", line 743, in _run_callback
ret = callback()
File "/usr/local/lib/python3.8/dist-packages/tornado/gen.py", line 787, in inner
self.run()
File "/usr/local/lib/python3.8/dist-packages/tornado/gen.py", line 748, in run
yielded = self.gen.send(value)
File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 365, in process_one
yield gen.maybe_future(dispatch(*args))
File "/usr/local/lib/python3.8/dist-packages/tornado/gen.py", line 209, in wrapper
yielded = next(result)
File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 268, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "/usr/local/lib/python3.8/dist-packages/tornado/gen.py", line 209, in wrapper
yielded = next(result)
File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelbase.py", line 543, in execute_request
self.do_execute(
File "/usr/local/lib/python3.8/dist-packages/tornado/gen.py", line 209, in wrapper
yielded = next(result)
File "/usr/local/lib/python3.8/dist-packages/ipykernel/ipkernel.py", line 306, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.8/dist-packages/ipykernel/zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2854, in run_cell
result = self._run_cell(
File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 2881, in _run_cell
return runner(coro)
File "/usr/local/lib/python3.8/dist-packages/IPython/core/async_helpers.py", line 68, in _pseudo_sync_runner
coro.send(None)
File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3057, in run_cell_async
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3249, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "/usr/local/lib/python3.8/dist-packages/IPython/core/interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-30-3c911ce633d3>", line 2, in <module>
history = model3.fit(x= train_set, validation_data = validation_set, batch_size = 32, epochs = 20)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1409, in fit
tmp_logs = self.train_function(iterator)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function
return step_function(self, iterator)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step
outputs = model.train_step(data)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 889, in train_step
y_pred = self(x, training=True)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 490, in __call__
return super().__call__(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/sequential.py", line 374, in call
return super(Sequential, self).call(inputs, training=training, mask=mask)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/functional.py", line 458, in call
return self._run_internal_graph(
File "/usr/local/lib/python3.8/dist-packages/keras/engine/functional.py", line 596, in _run_internal_graph
outputs = node.layer(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/layers/convolutional/base_conv.py", line 278, in call
return self.activation(outputs)
File "/usr/local/lib/python3.8/dist-packages/keras/activations.py", line 311, in relu
return backend.relu(x, alpha=alpha, max_value=max_value, threshold=threshold)
File "/usr/local/lib/python3.8/dist-packages/keras/backend.py", line 4992, in relu
x = tf.nn.relu(x)
Node: 'sequential_4/conv2d_24/Relu'
Fused conv implementation does not support grouped convolutions for now.
[[{{node sequential_4/conv2d_24/Relu}}]] [Op:__inference_train_function_16666]
This is a facial emotion detection project. Below is the code used:
from keras.models import Sequential
from keras.layers import Conv2D, BatchNormalization, LeakyReLU, MaxPooling2D, Dropout, Flatten, Dense
from keras.optimizers import Adam

no_of_classes = 4
model3 = Sequential()
# Add 1st CNN Block
model3.add(Conv2D(64, (2,2), padding = 'same', activation = 'relu', input_shape = (48, 48, 1)))
model3.add(BatchNormalization())
model3.add(LeakyReLU(alpha = 0.2))
model3.add(MaxPooling2D(2,2))
model3.add(Dropout(rate = 0.2))
# Add 2nd CNN Block
model3.add(Conv2D(128, (2,2), padding = 'same', activation = 'relu'))
model3.add(BatchNormalization())
model3.add(LeakyReLU(alpha = 0.2))
model3.add(MaxPooling2D(2,2))
model3.add(Dropout(rate = 0.2))
# Add 3rd CNN Block
model3.add(Conv2D(512, (2,2), padding = 'same', activation = 'relu'))
model3.add(BatchNormalization())
model3.add(LeakyReLU(alpha = 0.2))
model3.add(MaxPooling2D(2,2))
model3.add(Dropout(rate = 0.2))
# Add 4th CNN Block
model3.add(Conv2D(512, (2,2), padding = 'same', activation = 'relu'))
model3.add(BatchNormalization())
model3.add(LeakyReLU(alpha = 0.2))
model3.add(MaxPooling2D(2,2))
model3.add(Dropout(rate = 0.2))
# Add 5th CNN Block
model3.add(Conv2D(256, (2,2), padding = 'same', activation = 'relu'))
model3.add(BatchNormalization())
model3.add(LeakyReLU(alpha = 0.2))
model3.add(MaxPooling2D(2,2))
model3.add(Dropout(rate = 0.2))
model3.add(Conv2D(512, (2,2), padding = 'same', activation = 'relu'))
model3.add(BatchNormalization())
model3.add(LeakyReLU(alpha = 0.2))
model3.add(MaxPooling2D(1,1))
model3.add(Dropout(rate = 0.2))
model3.add(Flatten())
# First fully connected layer
model3.add(Dense(256))
model3.add(LeakyReLU(alpha = 0.2))
model3.add(BatchNormalization())
model3.add(Dropout(rate = 0.2))
# Second fully connected layer
model3.add(Dense(512))
model3.add(LeakyReLU(alpha = 0.2))
model3.add(BatchNormalization())
model3.add(Dropout(rate = 0.2))
# Third fully connected layer
model3.add(Dense(64))
model3.add(LeakyReLU(alpha = 0.2))
model3.add(BatchNormalization())
model3.add(Dropout(rate = 0.2))
model3.add(Dense(no_of_classes, activation = 'softmax'))
model3.summary()
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau, CSVLogger
epochs = 35
steps_per_epoch = train_set.n//train_set.batch_size
validation_steps = validation_set.n//validation_set.batch_size
checkpoint = ModelCheckpoint("model3.h5", monitor = 'val_accuracy',
                             save_weights_only = True, mode = 'max', verbose = 1)  # 'mode', not 'model'
reduce_lr = ReduceLROnPlateau(monitor = 'val_loss', factor = 0.1, patience = 2, min_lr = 0.0001, mode = 'auto')
callbacks = [checkpoint, reduce_lr]
model3.compile(optimizer = Adam(learning_rate = 0.001), loss = 'categorical_crossentropy', metrics = ['accuracy'])

Install TensorFlow version 2.7; it should work for you. I had the same problem and solved it by installing and using TensorFlow 2.7.
Install process:
pip install tensorflow==2.7
In Colab:
!pip install tensorflow==2.7

Initially I had the same error described above. I installed version 2.7, but now this error appears:
Epoch 1/50
InvalidArgumentError Traceback (most recent call last)
in
----> 1 history = model.fit(my_generator, validation_data=validation_datagen, steps_per_epoch=50, validation_steps=50, epochs=50)
1 frames
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
56 try:
57 ctx.ensure_initialized()
---> 58 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
59 inputs, attrs, num_outputs)
60 except core._NotOkStatusException as e:
InvalidArgumentError: scale must have the same number of elements as the channels of x, got 3 and 1
[[node model_2/bn_data/FusedBatchNormV3
(defined at /usr/local/lib/python3.8/dist-packages/keras/layers/normalization/batch_normalization.py:589)
]] [Op:__inference_train_function_13024]
I'm using U-Net for image segmentation, and my images are grayscale.
Code:
import segmentation_models as sm  # the segmentation_models package

sm.set_framework('tf.keras')
sm.framework()
# BACKBONE (e.g. 'resnet34') is assumed to be defined earlier
model = sm.Unet(BACKBONE, encoder_weights='imagenet')
model.compile('Adam', loss=sm.losses.bce_jaccard_loss, metrics=[sm.metrics.iou_score])
print(model.summary())
history = model.fit(my_generator, validation_data=validation_datagen, steps_per_epoch=50, validation_steps=50, epochs=50)

Related

Load grayscale images into a keras model [in R]: Error in py_call_impl(callable, dots$args, dots$keywords)

I have been trying for ages to import grayscale images into a Keras model, where I want to differentiate two different clones. To start, I adapted the simple model from https://shirinsplayground.netlify.app/2018/06/keras_fruits/ to my data.
The different clones are saved in folders named "CR1" and "CR6" in separate directories.
np_list <- c("CR1","CR6")
output_n <- length(np_list)
img_width <- 100;img_height <- 100
target_size <- c(img_width, img_height)
channels <- 1
Importing images:
train_data_gen <- image_data_generator(rescale = 1/255); valid_data_gen <- image_data_generator(rescale = 1/255)
train_image_array_gen <- flow_images_from_directory(train_dir,
train_data_gen ,
target_size = target_size,
class_mode = "binary",
classes=np_list,
color_mode = "grayscale")
valid_image_array_gen <- flow_images_from_directory(valid_dir,
valid_data_gen,
target_size =target_size,
class_mode = "binary",
classes=np_list,
color_mode = "grayscale")
valid_image_array_gen$image_shape
[[1]] [1] 100
[[2]] [1] 100
[[3]] [1] 1
Initialize model:
model <- keras_model_sequential()
model %>%
layer_conv_2d(filter = 32, kernel_size = c(3,3), padding = "same", input_shape = c(img_width, img_height,channels)) %>%
layer_activation("relu") %>%
layer_conv_2d(filter = 16, kernel_size = c(3,3), padding = "same") %>%
layer_activation_leaky_relu(0.5) %>%
layer_batch_normalization() %>%
layer_max_pooling_2d() %>%
layer_dropout(0.25) %>%
layer_flatten() %>%
layer_dense(100) %>%
layer_activation("relu") %>%
layer_dropout(0.5) %>%
layer_dense(output_n) %>%
layer_activation("sigmoid")
model %>% compile(
loss = "binary_crossentropy",
optimizer = optimizer_rmsprop(learning_rate = 0.0001),
metrics = "accuracy")
hist <- model %>% fit_generator(
train_image_array_gen,
steps_per_epoch = 10,
epochs =50,
validation_data = valid_image_array_gen,
validation_steps = 10
)
Error in py_call_impl(callable, dots$args, dots$keywords) :
ValueError: in user code:
File "C:\Users\Ruben\AppData\Local\R-MINI~1\envs\R-RETI~1\lib\site-packages\keras\engine\training.py", line 1021, in train_function *
return step_function(self, iterator)
File "C:\Users\Ruben\AppData\Local\R-MINI~1\envs\R-RETI~1\lib\site-packages\keras\engine\training.py", line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "C:\Users\Ruben\AppData\Local\R-MINI~1\envs\R-RETI~1\lib\site-packages\keras\engine\training.py", line 1000, in run_step **
outputs = model.train_step(data)
File "C:\Users\Ruben\AppData\Local\R-MINI~1\envs\R-RETI~1\lib\site-packages\keras\engine\training.py", line 860, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "C:\Users\Ruben\AppData\Local\R-MINI~1\envs\R-RETI~1\lib\site-packages\keras\engine\training.py", line 918, in compute_loss
return self.compiled_loss(
File "C:\Users\Ruben\AppData
In addition: Warning message:
In fit_generator(., train_image_array_gen, steps_per_epoch = 10,
As far as I can tell it is an issue with the data format, but I don't know where.
Any idea or hint would be very much appreciated, because I am pretty lost now.
Thanks!

Load datasource in Bokeh chart after a button click

I am building a chart using Bokeh. I need to create an empty figure; when the page is loaded and I click a button, the data should be loaded and displayed on the chart.
I wrote this code, but it doesn't work.
def _op_produttivitaRidge1day(db, custom_attrs, start_date, end_date, postazione, stazione: str, mongo_coll, hours: list, hours2: list, stazione2: str, stazione3: str, cod_avaria: str):
    def crosscorr(datax, datay, lag=0, wrap=False):
        """ Lag-N cross correlation.
        Shifted data filled with NaNs
        Parameters
        ----------
        lag : int, default 0
        datax, datay : pandas.Series objects of equal length
        Returns
        ----------
        crosscorr : float
        """
        if wrap:
            shiftedy = datay.shift(lag)
            shiftedy.iloc[:lag] = datay.iloc[-lag:].values
            return datax.corr(shiftedy)
        else:
            return datax.corr(datay.shift(lag))

    # custom data
    custom_attrs = {
        "OP30": ["ST7_LETTURA_INI_TRASD", "ST7_LETTURA_TRASD", "ST3_CONT_SCARTI_CONS", "ST4_CONT_SCARTI_CONS",
                 "ST7_CONT_SCARTI_CONS"],
        "OP40": ["ST2_VAL_CRIMPATURA", "ST2_VAL_INI_CRIMPATURA", "ST5_CNT_SCARTI_CONS_CHECKER", "ST1_CONT_SCARTI_CONS",
                 "ST2_CONT_SCARTI_CONS", "ST5_CONT_SCARTI_CONS"],
        "OP50": ["ST3_CNT_VETRINO_SALD", "ST1_VAL_PRESSIONE", "ST1_VAL_PERDITA", "ST2_CONT_SCARTI_CONS",
                 "ST3_CONT_SCARTI_CONS"],
        "OP60": ["ST2_LETTURA_INI_TRASD", "ST3_COUNT_CONTROLLO_VETRINO", "ST4_CONT_FOTO1_COGNEX",
                 "ST4_CONT_FOTO2_COGNEX", "ST4_RIPROVE_SCARTO", "ST1_CONT_SCARTI_CONS", "ST2_CONT_SCARTI_CONS",
                 "ST3_CONT_SCARTI_CONS", "ST4_CONT_SCARTI_CONS"],
        "OP70": ["ST1_CONT_SCARTI_CONS"],
        "OP80": ["ST1_COUNT_CONTROLLO_VETRINO", "ST1_CONT_SCARTI_CONS", "ST2_CONT_SCARTI_CONS"],
        "OP90": ["ST1_VAL_TRASD_DAMPER", "ST1_VAL_TRASD_INI_DAMPER", "ST1_VAL_TRASD_CUP_INI_DAMPER",
                 "ST1_VAL_TRASD_CUP_DAMPER", "ST1_CONT_SCARTI_CONS"],
        "OP100": [],
        "OP110": [],
        "OP120": ["ST1_VAL_MISURA_PISTON", "ST1_VAL_TRASD_INI", "ST1_CONT_SCARTI_CONS"]
    }

    # Produttività Ridge Graph
    # day, hour, number of items produced
    attr_name = stazione + "_PZ_IN"
    attr_value = "$" + attr_name
    c5 = db[mongo_coll].aggregate([
        {"$match": {"$and": [{"Timestamp": {"$gte": start_date}}, {"Timestamp": {"$lte": end_date}}]}},
        # {"$match": {{"$substr": ["$Timestamp" , 0, 10]} : {"$in": days_list}}},
        {"$project": {"DAY": {"$substr": ["$Timestamp", 0, 10]}, "HOUR": {"$substr": ["$Timestamp", 11, 4]},
                      attr_name: 1}},
        {"$group": {"_id": {"DAY": "$DAY", "HOUR": "$HOUR"}, "ITEM_COUNT": {"$sum": attr_value}}},
        {"$project": {"_id": 0, "DAY": "$_id.DAY", "HOUR": "$_id.HOUR", "ITEM_COUNT": 1}},
        {"$sort": {"DAY": 1, "HOUR": 1}}
    ])
    c5_df = pd.DataFrame(list(c5))
    days = sorted(c5_df["DAY"].unique())
    PR = figure(title="Produzione " + postazione + " " + stazione, y_range=days, plot_width=900, x_range=(-2, 208),
                toolbar_location=None)
    bt = Button(
        label="Show data",
        button_type="success",
        width=50
    )

    def createPR(FIG, rm_num_values):
        c5_df['ITEM_COUNT_RM'] = c5_df.iloc[:, 0].rolling(window=rm_num_values).mean().fillna(0)
        for d in days:
            y = []
            for h in hours:
                try:
                    # hf = str(int(h)).zfill(2)
                    hf = h
                    num_items = (c5_df.loc[c5_df["HOUR"] == hf].loc[c5_df["DAY"] == d].iloc[0]["ITEM_COUNT_RM"]) / 80
                    # print(hf,d,num_items)
                    if num_items > 1:
                        num_items = 1
                except:
                    num_items = 0
                y.append([d, num_items])
            # print(y)
            s5 = ColumnDataSource(data=dict(x=range(146)))
            s5.add(y, d)
            FIG.patch('x', d, source=s5)
        FIG.outline_line_color = None
        FIG.background_fill_color = "#efefef"
        FIG.y_range.range_padding = 0.12
        FIG.xaxis.ticker = FixedTicker(
            ticks=[0, 6, 12, 18, 24, 30, 36, 42, 48, 54, 60, 66, 72, 78, 84, 90, 96, 102, 108, 114, 120, 126, 132, 138])
        FIG.xaxis.major_label_overrides = {0: '0', 6: '1', 12: '2', 18: '3', 24: '4', 30: '5', 36: '6', 42: '7',
                                           48: '8', 54: '9', 60: '10', 66: '11', 72: '12', 78: '13', 84: '14', 90: '15',
                                           96: '16', 102: '17', 108: '18', 114: '19', 120: '20', 126: '21', 132: '22',
                                           138: '23'}
        FIG.outline_line_color = background
        FIG.border_fill_color = background
        FIG.background_fill_color = background
        return FIG

    # PR = createPR(PR, 3)
    bt.on_click(createPR(PR, 3))
    return column(bt, PR)
This code was written for Bokeh v2.1.1. Run it with bokeh serve --show main.py.
Replace data = dict(x=[1, 2, 3, 2], y=[6, 7, 2, 2]) with your data acquisition function: data = _op_produttivitaRidge1day(db, custom_attrs, start_date, end_date, ...). Everything should work if your function returns a dictionary with x and y vectors of equal length.
The approach is to first pass an empty data_source to the figure and then, after a button click, download the data from your MongoDB and assign it as a dictionary to patch.data_source.data.
main.py:
from bokeh.plotting import curdoc, figure
from bokeh.models import Column, Button
plot = figure()
patch = plot.patch(x=[], y=[])
def callback():
    data = dict(x=[1, 2, 3, 2], y=[6, 7, 2, 2])  # <-- your data acquisition function comes here
    patch.data_source.data = data
button = Button()
button.on_click(callback)
curdoc().add_root(Column(plot, button))

Why does Keras perceive my input_shape as three dimensional when I only have two inputs?

This is the ValueError that I'm currently experiencing:
ValueError: Input 0 of layer sequential_10 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 9]
It should be noted that I am working out of R using Keras, Caret, and Tidyverse.
I'm currently building a neural network model on the Titanic dataset in Kaggle.
This is the link to my Kaggle notebook and dataset.
Following this link will provide you with all of the code I have written so far as well as the complete dataset that I am working with.
From the ValueError that was thrown, I understand that there is an issue with how I am describing the shape of my X.train data, but I'm not sure how to shape the data so that my model runs smoothly.
This is how I have begun to build my model:
#Build Model
input_shape <- shape(ncol(X.train), nrow(X.train))
model <- keras_model_sequential()
model %>%
layer_batch_normalization(input_shape = input_shape) %>% #Normalization Layer
layer_dense(units = 256, activation = 'relu') %>% #First Layer
layer_batch_normalization() %>%
layer_dropout(rate = 0.3) %>%
layer_dense(units = 256, activation = 'relu') %>% #Second layer
layer_batch_normalization() %>%
layer_dropout(rate = 0.3) %>%
layer_dense(units = 256, activation = 'relu') %>% #Third layer
layer_batch_normalization() %>%
layer_dropout(rate = 0.3) %>%
layer_dense(units = 1, activation = 'sigmoid') %>% #Output Layer
compile(
loss = 'binary_crossentropy',
optimizer = 'adam',
metrics = c('accuracy'))
This is the full error that I am experiencing:
ValueError: Input 0 of layer sequential_10 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 9]
Detailed traceback:
File "/usr/local/share/.virtualenvs/r-reticulate/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
return method(self, *args, **kwargs)
File "/usr/local/share/.virtualenvs/r-reticulate/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 1098, in fit
tmp_logs = train_function(iterator)
File "/usr/local/share/.virtualenvs/r-reticulate/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
result = self._call(*args, **kwds)
File "/usr/local/share/.virtualenvs/r-reticulate/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 823, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/usr/local/share/.virtualenvs/r-reticulate/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 697, in _initialize
*args, **kwds))
File "/usr/local/share/.virtualenvs/r-reticulate/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2855, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/usr/local/share/.virtualenvs/r-reticulate/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 3213, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/usr/local/share/.virtualenvs/r-reticulate/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 3075, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/usr/local/share/.virtualenvs/r-reticulate/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 986, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/usr/local/share/.virtualenvs/r-reticulate/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 600, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/usr/local/share/.virtualenvs/r-reticulate/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 973, in wrapper
raise e.ag_error_metadata.to_exception(e)
Traceback:
1. model %>% fit(X.train, y.train, epochs = 500, batch_size = 200,
. validation_split = 0.3, callbacks = list(callback_early_stopping(monitor = "val_loss",
. mode = "auto", patience = 5, min_delta = 0.001, restore_best_weights = TRUE)))
2. withVisible(eval(quote(`_fseq`(`_lhs`)), env, env))
3. eval(quote(`_fseq`(`_lhs`)), env, env)
4. eval(quote(`_fseq`(`_lhs`)), env, env)
5. `_fseq`(`_lhs`)
6. freduce(value, `_function_list`)
7. withVisible(function_list[[k]](value))
8. function_list[[k]](value)
9. fit(., X.train, y.train, epochs = 500, batch_size = 200, validation_split = 0.3,
. callbacks = list(callback_early_stopping(monitor = "val_loss",
. mode = "auto", patience = 5, min_delta = 0.001, restore_best_weights = TRUE)))
10. fit.keras.engine.training.Model(., X.train, y.train, epochs = 500,
. batch_size = 200, validation_split = 0.3, callbacks = list(callback_early_stopping(monitor = "val_loss",
. mode = "auto", patience = 5, min_delta = 0.001, restore_best_weights = TRUE)))
11. do.call(object$fit, args)
12. (structure(function (...)
. {
. dots <- py_resolve_dots(list(...))
. result <- py_call_impl(callable, dots$args, dots$keywords)
. if (convert)
. result <- py_to_r(result)
. if (is.null(result))
. invisible(result)
. else result
. }, class = c("python.builtin.method", "python.builtin.object"
. ), py_object = <environment>))(batch_size = 200L, epochs = 500L,
. verbose = 1L, callbacks = list(<environment>, <environment>),
. validation_split = 0.3, shuffle = TRUE, class_weight = NULL,
. sample_weight = NULL, initial_epoch = 0L, x = <environment>,
. y = <environment>)
13. py_call_impl(callable, dots$args, dots$keywords)
Error in py_call_impl(callable, dots$args, dots$keywords): ValueError: in user code: /usr/local/share/.virtualenvs/r-reticulate/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:806 train_function * return step_function(self, iterator) /usr/local/share/.virtual
Thank you, I appreciate your feedback.

Difference between ggplot and grid.picture with complex shapes

I wish to obtain the x/y coordinates of individual letters and plot them with ggplot.
I am using grImport::PostScriptTrace to obtain an XML file from a Postscript file. From there I extract x, y coordinates from the S4 object of class Picture.
Plotting the letter with grid.picture works well. Using my method to obtain x, y coordinates and plotting with ggplot doesn't work well. Removing the last row of the dataframe helps a little.
The XML file for the letter "g" is on Dropbox.
How can I use ggplot to plot letters without the erroneous lines?
Here is the code.
# Difference between ggplot and grid.picture
library(grImport)
library(tidyverse)
letter_xml <- readRDS("letter_g")
# Plot letter with grid.picture
grid.picture(letter_xml)
####################################
# Extract coordinates from Picture object
x <- letter_xml@paths$text@letters[1]$path@x
y <- letter_xml@paths$text@letters[1]$path@y
one_letter <- tibble(
x,
y,
id = 1
)
ggplot(one_letter, aes(x = x, y = y)) +
geom_polygon()
# Remove last row
one_letter <- one_letter[1:(nrow(one_letter) - 1),]
ggplot(one_letter, aes(x = x, y = y)) +
geom_polygon()
Try this:
x <- letter_xml@paths$text@letters[1]$path@x
y <- letter_xml@paths$text@letters[1]$path@y
one_letter <- tibble(
x = x,
y = y,
x.n = names(x)
# id is not necessary here
)
library(ggpolypath)
one_letter %>%
mutate(is.move = x.n == "move") %>%
mutate(section.id = cumsum(is.move)) %>%
group_by(section.id) %>%
mutate(section.length = n()) %>%
ungroup() %>%
filter(section.length >= 3) %>%
ggplot(aes(x = x, y = y, group = section.id)) +
geom_polypath()
Explanation:
When I examine letter_xml@paths$text@letters[1]$path, I noticed that x and y are identically named vectors, of the form c("move", "line", ..., "line", "move", "line", ..., "line", "move").
> all.equal(names(x), names(y))
[1] TRUE
> table(names(x))
line move
169 4
Given the letter shape we are working with, I suspected that each new "move" could indicate the start of a new segment. E.g. first segment corresponds to the outline, second segment corresponds to a hole, and so on.
I tested this theory by plotting the sequence of positions (row.id), and changing the colour for every new "move":
one_letter %>%
mutate(row.id = seq(1, n())) %>% # sequence of x/y coordinates
mutate(is.move = x.n == "move") %>% # TRUE for every new "move", FALSE o/w
mutate(section.id = cumsum(is.move)) %>% # increments by 1 for every new "move"
ggplot(aes(x = x, y = y, group = section.id,
fill = factor(section.id))) +
geom_label(aes(label = row.id)) +
scale_fill_brewer(palette = "Set1")
As the chart above shows, segments 2 & 3 indeed correspond to holes in the polygon drawn by segment 1. I'm not sure what's going on with segment 4 (which contains only a single point), but it seems like it doesn't show up in the desired picture anyway. We can add a filter to the pipe operations, keeping only segments with at least 3 points (2 points or fewer can't form a polygon).
geom_polygon doesn't handle polygons with holes well, but the ggpolypath package (available on CRAN) is pretty much tailored for this exact use case, and performs the job just fine.
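As a minimal sketch of the difference (hypothetical data, not the letter): a square with a square hole, one group per ring. geom_polygon simply fills both rings independently, so the hole gets painted over, while geom_polypath with an even-odd fill rule cuts the hole out:

library(ggplot2)
library(ggpolypath)
# Outer square first, then the inner ring that should become a hole
ring <- data.frame(
  x = c(0, 4, 4, 0, 1, 3, 3, 1),
  y = c(0, 0, 4, 4, 1, 1, 3, 3),
  ring.id = rep(1:2, each = 4)
)
ggplot(ring, aes(x = x, y = y, group = ring.id)) +
  geom_polypath(rule = "evenodd")  # swap in geom_polygon() and the hole disappears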
Data:
> dput(letter_xml)
new("Picture", paths = list(text = new("PictureText", string = c(string = "g"),
w = 54.5977, h = 100, bbox = c(292.688, 8032.13, 345.328,
8110.3), angle = 0, letters = list(path = new("PictureChar",
char = c(char = "g"), x = c(move = 317.422, line = 315.605,
line = 310.16, line = 304.367, line = 300.527, line = 299.141,
line = 299.141, line = 299.141, line = 299.797, line = 301.297,
line = 301.719, line = 300.805, line = 298.199, line = 295.684,
line = 294.172, line = 293.672, line = 293.672, line = 293.672,
line = 294.172, line = 295.684, line = 298.199, line = 300.805,
line = 301.719, line = 300.684, line = 297.75, line = 294.93,
line = 293.246, line = 292.688, line = 292.688, line = 292.688,
line = 294.367, line = 299.203, line = 306.891, line = 314.566,
line = 317.125, line = 319.695, line = 327.41, line = 335.234,
line = 340.207, line = 341.953, line = 341.953, line = 341.953,
line = 340.152, line = 334.797, line = 325.941, line = 316.715,
line = 313.641, line = 312.145, line = 307.656, line = 303.695,
line = 301.5, line = 300.828, line = 300.828, line = 300.828,
line = 301.121, line = 301.906, line = 303.047, line = 304.066,
line = 304.406, line = 305.078, line = 306.82, line = 307.094,
line = 308.059, line = 312.82, line = 317.008, line = 318.406,
line = 320.199, line = 325.586, line = 331.316, line = 335.109,
line = 336.484, line = 336.484, line = 336.484, line = 336.016,
line = 334.609, line = 332.25, line = 329.828, line = 328.938,
line = 328.953, line = 329.332, line = 330.355, line = 332.008,
line = 333.723, line = 334.297, line = 334.863, line = 336.563,
line = 338.102, line = 338.375, line = 338.004, line = 336.723,
line = 336.188, line = 336.188, line = 336.188, line = 336.516,
line = 337.395, line = 338.664, line = 339.793, line = 340.172,
line = 340.664, line = 342.148, line = 343.793, line = 344.91,
line = 345.328, line = 345.328, line = 345.328, line = 344.5,
line = 342.234, line = 338.832, line = 335.664, line = 334.609,
line = 333.41, line = 329.813, line = 326.332, line = 324.152,
line = 323.328, line = 323.281, line = 322.734, line = 318.75,
line = 317.422, line = 317.422, move = 317.719, line = 318.82,
line = 322.137, line = 325.664, line = 327.996, line = 328.844,
line = 328.844, line = 328.844, line = 328.023, line = 325.723,
line = 322.172, line = 318.75, line = 317.609, line = 316.52,
line = 313.258, line = 309.871, line = 307.672, line = 306.891,
line = 306.891, line = 306.891, line = 307.727, line = 310.031,
line = 313.469, line = 316.656, line = 317.719, line = 317.719,
move = 317.813, line = 319.559, line = 324.809, line = 330.023,
line = 333.281, line = 334.406, line = 334.406, line = 334.406,
line = 333.215, line = 329.797, line = 324.387, line = 319.008,
line = 317.219, line = 315.516, line = 310.41, line = 305.215,
line = 301.898, line = 300.734, line = 300.734, line = 300.734,
line = 301.906, line = 305.289, line = 310.66, line = 316.023,
line = 317.813, line = 317.813, move = 344.598), y = c(move = 8101.36,
line = 8101.36, line = 8100.18, line = 8096.9, line = 8091.93,
line = 8087.22, line = 8085.66, line = 8084.56, line = 8081.29,
line = 8078.09, line = 8077.52, line = 8077.23, line = 8075.97,
line = 8073.87, line = 8071.21, line = 8068.79, line = 8067.98,
line = 8067.23, line = 8064.98, line = 8062.39, line = 8060.21,
line = 8058.79, line = 8058.44, line = 8058.05, line = 8056.45,
line = 8053.89, line = 8050.75, line = 8047.95, line = 8047.02,
line = 8045.46, line = 8040.79, line = 8036.11, line = 8033.15,
line = 8032.13, line = 8032.13, line = 8032.13, line = 8033.22,
line = 8036.35, line = 8041.29, line = 8046.18, line = 8047.81,
line = 8049.44, line = 8054.32, line = 8059.05, line = 8061.93,
line = 8062.91, line = 8062.91, line = 8062.91, line = 8063.15,
line = 8063.99, line = 8065.55, line = 8067.38, line = 8067.98,
line = 8068.39, line = 8069.6, line = 8070.93, line = 8071.82,
line = 8072.16, line = 8072.16, line = 8072.16, line = 8071.58,
line = 8071.45, line = 8071.03, line = 8069.53, line = 8068.88,
line = 8068.88, line = 8068.88, line = 8070.09, line = 8073.45,
line = 8078.52, line = 8083.29, line = 8084.88, line = 8085.88,
line = 8088.91, line = 8092.53, line = 8095.71, line = 8097.88,
line = 8098.47, line = 8099.19, line = 8101.34, line = 8103.39,
line = 8104.62, line = 8105.03, line = 8105.03, line = 8105.03,
line = 8104.52, line = 8103.37, line = 8103.05, line = 8102.77,
line = 8101.41, line = 8100.18, line = 8099.77, line = 8099.41,
line = 8098.35, line = 8097.18, line = 8096.39, line = 8096.09,
line = 8096.09, line = 8096.09, line = 8096.54, line = 8097.78,
line = 8099.61, line = 8101.3, line = 8101.86, line = 8102.7,
line = 8105.23, line = 8107.9, line = 8109.66, line = 8110.3,
line = 8110.3, line = 8110.3, line = 8109.68, line = 8107.85,
line = 8104.8, line = 8101.63, line = 8100.56, line = 8100.69,
line = 8101.36, line = 8101.36, line = 8101.36, move = 8094.8,
line = 8094.8, line = 8094.05, line = 8092, line = 8088.89,
line = 8085.95, line = 8084.97, line = 8084.01, line = 8081.13,
line = 8078.12, line = 8076.14, line = 8075.44, line = 8075.44,
line = 8075.44, line = 8076.13, line = 8078.1, line = 8081.17,
line = 8084.17, line = 8085.17, line = 8086.12, line = 8088.97,
line = 8092.03, line = 8094.06, line = 8094.8, line = 8094.8,
line = 8094.8, move = 8056.27, line = 8056.27, line = 8055.66,
line = 8053.93, line = 8051.15, line = 8048.35, line = 8047.42,
line = 8046.52, line = 8043.83, line = 8041.07, line = 8039.3,
line = 8038.67, line = 8038.67, line = 8038.67, line = 8039.27,
line = 8041, line = 8043.75, line = 8046.5, line = 8047.42,
line = 8048.34, line = 8051.1, line = 8053.89, line = 8055.65,
line = 8056.27, line = 8056.27, line = 8056.27, move = 8050
), rgb = "#000000", lty = numeric(0), lwd = 10, lineend = 1,
linejoin = 1, linemitre = 10)), x = 290, y = 8050, rgb = "#000000",
lty = numeric(0), lwd = 10, lineend = numeric(0), linejoin = numeric(0),
linemitre = numeric(0))), summary = new("PictureSummary",
numPaths = 1, xscale = c(xmin = 290, xmax = 345.328), yscale = c(ymin = 8032.13,
ymax = 8110.3)))

R progress bar for reading multiple csv (tsv) files

Is there any way to display a progress bar when importing multiple csv files?
Here is the import code.
List all files to be imported:
temp <- list.files(pattern="*\\.tsv$")
temp
Specific columns will be imported:
test_data <- lapply(temp,function(x){
read.csv(file = x,
sep ="\t",
fill = TRUE,
quote='',
header = FALSE
)[ ,c(287, 288, 289, 290, 291, 292, 293, 304, 370, 661, 662, 812, 813,994, 995, 1002)]
}
)
How can I monitor the current progress?
I only found advice for loops, not for importing files.
You can achieve this with the progress library:
library(progress) # add
temp <- list.files(pattern="*\\.tsv$")
pb <- progress_bar$new(format = " progress [:bar] :percent eta: :eta", # add
total = length(temp), clear = FALSE, width= 60) # add
test_data <- lapply(temp,function(x){
pb$tick() # add
read.csv(file = x,
sep ="\t",
fill = TRUE,
quote='',
header = FALSE
)[ ,c(287, 288, 289, 290, 291, 292, 293, 304, 370, 661, 662, 812, 813,994, 995, 1002)]
})
I have marked the lines you need to add with a # add comment. There is also a native R progress bar which you can use, but I find the progress version more readable, more configurable, and easier to use.
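For reference, a minimal sketch of that native alternative using base R's txtProgressBar (from utils, so no extra packages; the column subsetting from above is omitted for brevity):

temp <- list.files(pattern = "*\\.tsv$")
pb <- txtProgressBar(min = 0, max = length(temp), style = 3)  # style 3 draws a bar with a percentage
test_data <- lapply(seq_along(temp), function(i) {
  res <- read.csv(file = temp[i],
                  sep = "\t",
                  fill = TRUE,
                  quote = '',
                  header = FALSE)
  setTxtProgressBar(pb, i)  # advance the bar after each file is read
  res
})
close(pb)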
