AttributeError: module 'keras' has no attribute 'initializers' - initialization

I am trying to introduce keras.initializers into my net, following this link:
import keras
from keras.optimizers import RMSprop, Adam
from keras.layers import Input, Embedding, LSTM, Dense, merge, Activation
from keras.models import Model, Sequential
model = Sequential()
model.add(Dense(100, init='lecun_uniform', input_shape=(6,)))
model.add(Activation('relu'))
model.add(Dense(27, init='lecun_uniform'))
model.add(Activation('linear'))
rms = RMSprop(lr = 0.01)
keras.initializers.RandomUniform(minval=-0.05, maxval=0.05, seed=None)
model.compile(loss='mse', optimizer=rms)
And it fails with the following error:
keras.initializers.RandomUniform(minval=-0.05, maxval=0.05, seed=None)
AttributeError: module 'keras' has no attribute 'initializers'
Any ideas as to why it happens?

Check which version of Keras you have installed. The likely mistake is that you are running Keras 1.x.x and trying to use keras.initializers, which only exists from Keras 2.x.x onward.
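As a quick check, print the installed version; and as a minimal sketch of the Keras 2 usage (where Dense takes kernel_initializer instead of the old init argument, and the initializer only has an effect once it is passed to a layer):

import keras
print(keras.__version__)  # keras.initializers is only available from 2.x

from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import RMSprop

# build the initializer and hand it to the layers that should use it
init = keras.initializers.RandomUniform(minval=-0.05, maxval=0.05, seed=None)

model = Sequential()
model.add(Dense(100, kernel_initializer=init, input_shape=(6,)))
model.add(Activation('relu'))
model.add(Dense(27, kernel_initializer='lecun_uniform'))
model.add(Activation('linear'))
model.compile(loss='mse', optimizer=RMSprop(lr=0.01))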


ModuleNotFoundError using CaseReader in OpenMDAO

I'm trying to use a recorded case in OpenMDAO contained in "my_file.db".
When I execute the following code:
import openmdao.api as om
cr = om.CaseReader('my_file.db')
I get the following error:
ModuleNotFoundError: No module named 'groups'
'groups' is a folder from the OpenMDAO code I used to record the case, and now I'm trying to read the case from a different directory. How can I redefine the path so that om.CaseReader finds the modules it needs?
Try setting your PYTHONPATH, as discussed here:
https://bic-berkeley.github.io/psych-214-fall-2016/using_pythonpath.html
Solved using:
import os
import sys

# add the directory containing this script (and the 'groups' package) to the module search path
dirname = os.path.dirname(__file__)
sys.path.append(dirname)
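Putting it together, a minimal sketch assuming the script lives next to the 'groups' folder that was importable when the case was recorded:

import os
import sys

import openmdao.api as om

# make the directory containing the 'groups' package importable again
sys.path.append(os.path.dirname(__file__))

cr = om.CaseReader('my_file.db')
print(cr.list_cases())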

from transformers import TFBertModel, BertConfig, BertTokenizerFast

I am having trouble importing TFBertModel, BertConfig, and BertTokenizerFast. I tried the latest version of transformers, tokenizers==0.7.0, and importing from transformers.modeling_bert, but none of them work. I get the error
from transformers import TFBertModel, BertConfig, BertTokenizerFast
ImportError: cannot import name 'TFBertModel' from 'transformers' (unknown location)
Any ideas for a fix? Thanks!
Do you have TensorFlow 2 installed? The model you are trying to import requires TensorFlow; transformers only exposes the TF* classes when TensorFlow 2 is available, so without it the import fails.
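As a minimal sketch of the check (assuming the bert-base-uncased checkpoint), verify that TensorFlow 2 is importable before pulling in the TF classes:

import tensorflow as tf
print(tf.__version__)  # should be 2.x for the TF* classes to be exposed

from transformers import TFBertModel, BertConfig, BertTokenizerFast

config = BertConfig.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained('bert-base-uncased')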

linear_model.py:1283: RuntimeWarning: invalid value encountered in sqrt return rho, np.sqrt(sigmasq)

I have run into a problem while working through a time series analysis exercise with plot_acf and plot_pacf.
When I run my code, it always gives me a warning like this:
linear_model.py:1283: RuntimeWarning: invalid value encountered in sqrt
  return rho, np.sqrt(sigmasq)
I tried to find a solution on Google, but found nothing helpful.
I would appreciate it if someone could help me fix this warning. Many thanks!
My coding environment: Python 3.6.6, 64-bit Windows.
statsmodels version installed on my laptop:
statsmodels-0.9.0-cp36-cp36m-win_amd64
and my source code is listed below:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf,plot_pacf
dta=[10930,10318,10595,10972,7706,6756,9092,10551,9722,10913,11151,8186,6422,
6337,11649,11652,10310,12043,7937,6476,9662,9570,9981,9331,9449,6773,6304,9355,
10477,10148,10395,11261,8713,7299,10424,10795,11069,11602,11427,9095,7707,10767,
12136,12812,12006,12528,10329,7818,11719,11683,12603,11495,13670,11337,10232,
13261,13230,15535,16837,19598,14823,11622,19391,18177,19994,14723,15694,13248,
9543,12872,13101,15053,12619,13749,10228,9725,14729,12518,14564,15085,14722,
11999,9390,13481,14795,15845,15271,14686,11054,10395]
dta_list = np.arange(2001,2091)
data=pd.DataFrame(dta,index=dta_list)
data.plot()
plt.show()
D_data=data.diff(1).dropna()
D_data.plot()
plt.show()
plot_acf(D_data).show()
plot_pacf(D_data).show()
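For reference, this warning comes from statsmodels' Yule-Walker estimator (the return rho, np.sqrt(sigmasq) line in linear_model.py) hitting a negative estimated variance, which can happen when the default number of PACF lags is large relative to the length of the differenced series. One commonly suggested mitigation, sketched here against the question's D_data, is to cap the lags and/or switch the PACF estimation method:

# possible mitigation: fewer lags and an OLS-based PACF estimate
plot_acf(D_data, lags=20)
plot_pacf(D_data, lags=20, method='ols')
plt.show()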

Using R's arima function in python with rpy2

I am using rpy2 to call R functions in python3.4 and I'm struggling with calling the arima function.
import rpy2.robjects as robjects
from rpy2.robjects.packages import importr
import pandas as pd
from rpy2.robjects import pandas2ri
ts=robjects.r('ts')
forecast=importr('forecast')
pandas2ri.activate()
traindf=pd.read_csv('MARUTI.NS.csv',index_col=0)
traindf.index=traindf.index.to_datetime()
rdata=ts(traindf.Close,frequency=1)
fit=forecast.arima(rdata,c=(1,0,0)) # error occurs here
forecast_output=forecast.forecast(fit,h=4,level=(95.0))
print(forecast_output)
Error:
AttributeError: 'InstalledSTPackage' object has no attribute 'arima'.
This error means that the forecast package has no function named arima; you can confirm what it does export from an R session (e.g. with ?auto.arima). The function the forecast package provides is auto.arima, exposed through rpy2 as forecast.auto_arima. Alternatively, you can import R's stats package and call its arima function, passing the model order through the order argument rather than c:
stats = importr("stats")
fit = stats.arima(rdata, order=robjects.IntVector([1, 0, 0]))
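For completeness, a minimal sketch of the auto.arima route, assuming the R forecast package is installed and rdata is built exactly as in the question:

import rpy2.robjects as robjects
from rpy2.robjects.packages import importr

forecast = importr('forecast')

# let auto.arima choose the (p, d, q) order itself
fit = forecast.auto_arima(rdata)
forecast_output = forecast.forecast(fit, h=4, level=robjects.FloatVector([95.0]))
print(forecast_output)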

Access parameter names in torch

I need to convert a Torch (Lua) model to PyTorch. Since the Torch model has layers that PyTorch doesn't support out of the box (such as Inception blocks and LRN), it is not possible to use the built-in APIs. To convert such models from Torch to PyTorch, I need to implement those layers in PyTorch, save all the parameters of the Torch model to an HDF5 file, and reload them in Python as a dictionary. I'm new to Lua, so I would like to ask how to access the 'nickname' of every parameter in Torch.
By the way, this can easily be done in PyTorch, for example:
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=32, kernel_size=7, stride=1, bias=False),
    nn.ReLU(inplace=True),
    nn.BatchNorm2d(num_features=32, affine=True),
    nn.MaxPool2d(kernel_size=2, stride=2)
)

# print every parameter/buffer name and its shape
for key in model.state_dict():
    value = model.state_dict().get(key)
    print(key, value.size())
If all the parameters are accessible in dictionary form, reconstructing the model in PyTorch can be done with the following code:
model = MyNewInceptionModel()
model.load_state_dict(param_dict)
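On the PyTorch side, a minimal sketch of turning such an HDF5 dump into the dictionary expected by load_state_dict could look like this; the file name and the assumption that every dataset key matches a parameter name in MyNewInceptionModel are hypothetical and must match whatever the Lua export script wrote:

import h5py
import torch

param_dict = {}
with h5py.File('torch_params.h5', 'r') as f:  # hypothetical file name
    for key in f.keys():
        # each dataset becomes a tensor keyed by the name used on the Lua side
        param_dict[key] = torch.from_numpy(f[key][()])

model = MyNewInceptionModel()  # user-defined module from the question
model.load_state_dict(param_dict)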
