I have been trying to use scipy.optimize.curve_fit with multiple independent variables. It works fine with the test code I created, but when I try to run it on my actual data I keep getting the following error:
TypeError: only length-1 arrays can be converted to Python scalars
The shapes of the arrays and the dtypes of their elements are exactly the same in my test code and my actual code, so I am confused as to why I get this error.
Test code:
import numpy as np
import scipy
from scipy.optimize import curve_fit

def func(x, a, b, c):
    return a + b*x[0]**2 + c*x[1]

x_0 = np.array([1, 2, 3, 4])
x_1 = np.array([5, 6, 7, 8])
X = scipy.array([x_0, x_1])
Y = func(X, 3.1, 2.2, 2.1)
popt, pcov = curve_fit(func, X, Y)
Actual code:
import csv
import math, cmath
import scipy
from scipy.optimize import curve_fit

Qz, Ref, Ref_F = [], [], []
f = open("Exp_Fresnal.csv", 'rb')
reader = csv.reader(f)
for row in reader:
    Qz.append(row[0])
    Ref.append(row[1])
    Ref_F.append(row[2])

Qz_arr, Ref_Farr = scipy.array(Qz), scipy.array(Ref_F)
x = scipy.array([Qz_arr, Ref_Farr])

def func(x, d, sig_int, sig_cp):
    return x[1]*(x[0]*d*(math.exp((-sig_int**2)*(x[0]**2)/2)/(1 - cmath.exp(complex(0, 1)*x[0]*d)*math.exp((-sig_cp**2)*(x[0]**2)/2))))**2

Y = scipy.array(Ref)
popt, pcov = curve_fit(func, x, Y)
EDIT
Here is the full error message:
Traceback (most recent call last):
File "DCM_03.py", line 46, in <module>
popt, pcov=curve_fit(func,x,Y)
File "//anaconda/lib/python2.7/site-packages/scipy/optimize/minpack.py", line 651, in curve_fit
res = leastsq(func, p0, args=args, full_output=1, **kwargs)
File "//anaconda/lib/python2.7/site-packages/scipy/optimize/minpack.py", line 377, in leastsq
shape, dtype = _check_func('leastsq', 'func', func, x0, args, n)
File "//anaconda/lib/python2.7/site-packages/scipy/optimize/minpack.py", line 26, in _check_func
res = atleast_1d(thefunc(*((x0[:numinputs],) + args)))
File "//anaconda/lib/python2.7/site-packages/scipy/optimize/minpack.py", line 453, in _general_function
return function(xdata, *params) - ydata
File "DCM_03.py", line 40, in func
return (0.062/(2*x))**4*(x*d*(math.exp((-sig_int**2)*(x**2)/2)/(1-cmath.exp(complex(0,1)*x*d)*math.exp((-sig_cp**2)*(x**2)/2))))**2
TypeError: only length-1 arrays can be converted to Python scalars
I figured out the issue. The problem was the use of math.exp and cmath.exp in the fitting function func; in their place I used np.exp(). The reason: math.exp and cmath.exp accept only scalars, so when curve_fit passes in a NumPy array, Python tries to collapse the whole array to a single scalar - hence "TypeError: only length-1 arrays can be converted to Python scalars". np.exp is a vectorized ufunc and evaluates elementwise on arrays.
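For reference, here is a vectorized sketch of the fitting function using np.exp. Note that np.exp accepts both real and complex array arguments, so it covers math.exp and cmath.exp at once; the np.abs(...)**2 at the end is my assumption that a squared modulus was intended (the original expression is complex-valued, and curve_fit needs real output):

import numpy as np

def func(x, d, sig_int, sig_cp):
    # np.exp is a ufunc: it evaluates elementwise over arrays and accepts
    # complex arguments, replacing both math.exp and cmath.exp
    q, ref_f = x[0], x[1]
    damping = np.exp(-sig_int**2 * q**2 / 2)
    phase = np.exp(1j * q * d) * np.exp(-sig_cp**2 * q**2 / 2)
    # assumption: the trailing **2 is a squared modulus, so take np.abs
    # to keep the residuals real for curve_fit
    return ref_f * np.abs(q * d * damping / (1 - phase))**2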
Related
I am classifying images using Streamlit and Python. I am getting an error: SparseCategoricalCrossentropy.__init__() got an unexpected keyword argument 'ignore_class'.
TypeError: SparseCategoricalCrossentropy.__init__() got an unexpected keyword argument 'ignore_class'
Traceback:
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "C:\Users\DELL\Desktop\Gradio\flask and htm\project-folder\backend\org.py", line 18, in <module>
model = load_model()
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 625, in wrapped_func
return get_or_create_cached_value()
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 609, in get_or_create_cached_value
return_value = non_optional_func(*args, **kwargs)
File "C:\Users\DELL\Desktop\Gradio\flask and htm\project-folder\backend\org.py", line 8, in load_model
model = tf.keras.models.load_model('model/my_model.h5')
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\losses.py", line 153, in from_config
return cls(**config)
This is my code.
import streamlit as st
import tensorflow as tf

st.set_option('deprecation.showfileUploaderEncoding', False)

@st.cache(allow_output_mutation=True)
def load_model():
    model = tf.keras.models.load_model('model/my_model.h5')
    return model

model = load_model()

st.write("""
# Image Classification App
""")

file = st.file_uploader("Please upload an image", type=["jpg", "png"])

import cv2
from PIL import Image, ImageOps
import numpy as np

def import_and_predict(image_data, model):
    size = (180, 180)
    image = ImageOps.fit(image_data, size, Image.LANCZOS)
    img = np.asarray(image)
    img_reshape = img[np.newaxis, ...]
    prediction = model.predict(img_reshape)
    return prediction

if file is None:
    st.text("Please upload an image file")
else:
    image = Image.open(file)
    st.image(image, use_column_width=True)
    predictions = import_and_predict(image, model)
    class_names = ['dog', 'cat', 'horse']
    string = "This image most likely is a: " + class_names[np.argmax(predictions)]
    st.success(string)
Try loading the model with compile=False. The error points to a version mismatch: the ignore_class argument of SparseCategoricalCrossentropy only exists in newer TensorFlow releases, so a model saved there will not load under an older one.
Try it out:
def load_model():
    model = tf.keras.models.load_model('model/my_model.h5', compile=False)
    return model
This loads the model for inference only. If you still face issues, it is likely a version conflict: the model was saved with a different version of TensorFlow than the one you are loading it with.
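If you need the loss and optimizer afterwards (for further training or evaluate()), you can re-compile manually after loading. A minimal sketch - the optimizer, loss, and metrics here are placeholders, not taken from the question; use whatever the model was originally compiled with:

model = tf.keras.models.load_model('model/my_model.h5', compile=False)
# re-attach a loss/optimizer supported by your local TensorFlow version;
# these particular choices are stand-ins
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=['accuracy'],
)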
I'm currently working on a parser to make a small preview of a page from a URL given by the user in PHP.
I'd like to retrieve only the title of the page and a small chunk of information (a bit of text).
The project: from the list of meta-data for popular WordPress plugins, I gather the first 50 URLs - that is, 50 plugins of interest. The challenge: I want to fetch the meta-data of all existing plugins, and then filter out those with the newest timestamp - the ones updated most recently. It is all about recency...
https://wordpress.org/plugins/wp-job-manager
https://wordpress.org/plugins/ninja-forms
import requests
from bs4 import BeautifulSoup
from concurrent.futures.thread import ThreadPoolExecutor

url = "https://wordpress.org/plugins/browse/popular/{}"

def main(url, num):
    with requests.Session() as req:
        print(f"Collecting Page# {num}")
        r = req.get(url.format(num))
        soup = BeautifulSoup(r.content, 'html.parser')
        link = [item.get("href")
                for item in soup.findAll("a", rel="bookmark")]
        return set(link)

with ThreadPoolExecutor(max_workers=20) as executor:
    futures = [executor.submit(main, url, num)
               for num in [""] + [f"page/{x}/" for x in range(2, 50)]]

allin = []
for future in futures:
    allin.extend(future.result())

def parser(url):
    with requests.Session() as req:
        print(f"Extracting {url}")
        r = req.get(url)
        soup = BeautifulSoup(r.content, 'html.parser')
        target = [item.get_text(strip=True, separator=" ") for item in soup.find(
            "h3", class_="screen-reader-text").find_next("ul").findAll("li")[:8]]
        head = [soup.find("h1", class_="plugin-title").text]
        new = [x for x in target if x.startswith(
            ("V", "Las", "Ac", "W", "T", "P"))]
        return head + new

with ThreadPoolExecutor(max_workers=50) as executor1:
    futures1 = [executor1.submit(parser, url) for url in allin]

for future in futures1:
    print(future.result())
see the results:
Extracting https://wordpress.org/plugins/tuxedo-big-file-uploads/Extracting https://wordpress.org/plugins/cherry-sidebars/
Extracting https://wordpress.org/plugins/meks-smart-author-widget/
Extracting https://wordpress.org/plugins/wp-limit-login-attempts/
Extracting https://wordpress.org/plugins/automatic-translator-addon-for-loco-translate/
Extracting https://wordpress.org/plugins/event-organiser/
Traceback (most recent call last):
File "/home/martin/unbenannt0.py", line 45, in <module>
print(future.result())
File "/home/martin/anaconda3/lib/python3.7/concurrent/futures/_base.py", line 428, in result
return self.__get_result()
File "/home/martin/anaconda3/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/home/martin/anaconda3/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/martin/unbenannt0.py", line 34, in parser
"h3", class_="screen-reader-text").find_next("ul").findAll("li")[:8]]
AttributeError: 'NoneType' object has no attribute 'find_next'
Well, I have a severe error:
AttributeError: 'NoneType' object has no attribute 'find_next'
It looks like soup.find("h3", class_="screen-reader-text") has not found anything. We could either break that line up and only call find_next if there was a result, or use a try/except that catches the AttributeError.
At the moment I do not know how to fix this whole thing - only that we can surround the offending code with:
try:
    # code that causes the error
except AttributeError:
    print(f"AttributeError on {some data here}, {whatever else would be of value}, ...")
    # ... whatever action is thinkable to take here
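A concrete version of that guard, as a sketch: check both find() results for None before chaining, and return the URL as a marker instead of crashing the worker thread:

def parser(url):
    with requests.Session() as req:
        print(f"Extracting {url}")
        r = req.get(url)
        soup = BeautifulSoup(r.content, 'html.parser')
        heading = soup.find("h3", class_="screen-reader-text")
        title = soup.find("h1", class_="plugin-title")
        if heading is None or title is None:
            # some plugin pages lack the expected markup; return the URL
            # as a marker so the future still yields a result
            return [f"parse failed: {url}"]
        target = [item.get_text(strip=True, separator=" ")
                  for item in heading.find_next("ul").findAll("li")[:8]]
        new = [x for x in target if x.startswith(
            ("V", "Las", "Ac", "W", "T", "P"))]
        return [title.text] + new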
Btw. - besides this error, I want to add an option that gives the results back; see the complete and unaltered error traceback below. It contains valuable call-stack information.
Extracting https://wordpress.org/plugins/automatic-translator-addon-for-loco-translate/
Extracting https://wordpress.org/plugins/wpforo/Extracting https://wordpress.org/plugins/accesspress-social-share/
Extracting https://wordpress.org/plugins/mailoptin/
Extracting https://wordpress.org/plugins/tuxedo-big-file-uploads/
Extracting https://wordpress.org/plugins/post-snippets/
Extracting https://wordpress.org/plugins/woocommerce-payfast-gateway/Extracting https://wordpress.org/plugins/woocommerce-grid-list-toggle/
Extracting https://wordpress.org/plugins/goodbye-captcha/
Extracting https://wordpress.org/plugins/gravity-forms-google-analytics-event-tracking/
Traceback (most recent call last):
File "/home/martin/dev/wordpress_plugin.py", line 44, in <module>
print(future.result())
File "/home/martin/anaconda3/lib/python3.7/concurrent/futures/_base.py", line 428, in result
return self.__get_result()
File "/home/martin/anaconda3/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/home/martin/anaconda3/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/martin/dev/wordpress_plugin.py", line 33, in parser
"h3", class_="screen-reader-text").find_next("ul").findAll("li")[:8]]
AttributeError: 'NoneType' object has no attribute 'find_next'
Hope this was not too long and complex - thank you for the help!
I have trained and saved some NER models using
torch.save(model)
I need to load these model files (extension .pt) for evaluation using
torch.load('PATH_TO_MODEL.pt')
And I get the following error: 'BertConfig' object has no attribute 'return_dict'
I updated my transformers package to the latest version, but the error persists.
This is the stack trace:
Traceback (most recent call last):
File "/home/systematicReviews/train_mtl_3.py", line 523, in <module>
test_loss, test_cr, test_cr_fine = evaluate_i(test_model, optimizer, scheduler, validation_dataloader, args, device)
File "/home/systematicReviews/train_mtl_3.py", line 180, in evaluate_i
e_loss_coarse, e_output, e_labels, e_loss_fine, e_f_output, e_f_labels, mask, e_cumulative_loss = defModel(args, e_input_ids, attention_mask=e_input_mask, P_labels=e_labels, P_f_labels=e_f_labels)
File "/home/anaconda3/envs/systreviewclassifi/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/anaconda3/envs/systreviewclassifi/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/anaconda3/envs/systreviewclassifi/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/systematicReviews/models/mtl/model.py", line 122, in forward
attention_mask = attention_mask
File "/home/anaconda3/envs/systreviewclassifi/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/anaconda3/envs/systreviewclassifi/lib/python3.6/site-packages/transformers/modeling_bert.py", line 784, in forward
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
File "/home/anaconda3/envs/systreviewclassifi/lib/python3.6/site-packages/transformers/configuration_utils.py", line 219, in use_return_dict
return self.return_dict and not self.torchscript
AttributeError: 'BertConfig' object has no attribute 'return_dict'
Here is some more information about my system:
- `transformers` version: 3.1.0
- Platform: Linux-4.4.0-186-generic-x86_64-with-debian-stretch-sid
- Python version: 3.6.9
- PyTorch version (GPU?): 1.3.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
It worked fine until now, but suddenly this bug appeared. Any help or hint is appreciated.
Try saving your model with model.save_pretrained(output_dir). Then you can load it with model = *.from_pretrained(output_dir), where * is the model class (e.g. BertForTokenClassification).
Saving the model's state dict rather than the entire model is slightly different: instead of torch.save(model), use torch.save(model.state_dict(), 'path_to_the_model/model.pth'), then rebuild the model and load the weights with model.load_state_dict(torch.load('path_to_the_model/model.pth')).
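Putting both suggestions together, a sketch - BertForTokenClassification, 'bert-base-uncased', and the paths are stand-ins for your own model class, base checkpoint, and directories:

import torch
from transformers import BertForTokenClassification

# preferred: the transformers serialization API
model.save_pretrained('saved_model_dir')
model = BertForTokenClassification.from_pretrained('saved_model_dir')

# alternative: save only the state dict, then rebuild and load the weights
torch.save(model.state_dict(), 'path_to_the_model/model.pth')
model = BertForTokenClassification.from_pretrained('bert-base-uncased')  # stand-in base model
model.load_state_dict(torch.load('path_to_the_model/model.pth'))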
I have created a custom python sync block for use in a gnuradio flowgraph. The block tests for invalid input and, if found, raises a ValueError exception. I would like to create a unit test to verify that the exception is raised when the block indeed receives invalid input data.
As part of the python-based qa test for this block, I created a flowgraph such that the block receives invalid data. When I run the test, the block does appear to raise the exception but then hangs.
What is the appropriate way to test for this? Here is a minimal working example:
#!/usr/bin/env python

import numpy as np
from gnuradio import gr, gr_unittest, blocks

class validate_input(gr.sync_block):
    def __init__(self):
        gr.sync_block.__init__(self,
                               name="validate_input",
                               in_sig=[np.float32],
                               out_sig=[np.float32])
        self.max_input = 100

    def work(self, input_items, output_items):
        in0 = input_items[0]
        if np.max(in0) > self.max_input:
            raise ValueError('input exceeds max.')
        validated_in = output_items[0]
        validated_in[:] = in0
        return len(output_items[0])

class qa_validate_input(gr_unittest.TestCase):
    def setUp(self):
        self.tb = gr.top_block()

    def tearDown(self):
        self.tb = None

    def test_check_valid_data(self):
        src_data = (0, 201, 92)
        src = blocks.vector_source_f(src_data)
        validate = validate_input()
        snk = blocks.vector_sink_f()
        self.tb.connect(src, validate)
        self.tb.connect(validate, snk)
        self.assertRaises(ValueError, self.tb.run)

if __name__ == '__main__':
    gr_unittest.run(qa_validate_input, "qa_validate_input.xml")
which produces:
DEPRECATED: Using filename with gr_unittest does no longer have any effect.
handler caught exception: input exceeds max.
Traceback (most recent call last):
File "/home/xxx/devel/gnuradio3_8/lib/python3.6/dist-packages/gnuradio/gr/gateway.py", line 60, in eval
try: self._callback()
File "/home/xxx/devel/gnuradio3_8/lib/python3.6/dist-packages/gnuradio/gr/gateway.py", line 230, in __gr_block_handle
) for i in range(noutputs)],
File "qa_validate_input.py", line 21, in work
raise ValueError('input exceeds max.')
ValueError: input exceeds max.
thread[thread-per-block[1]: <block validate_input(2)>]: SWIG director method error. Error detected when calling 'feval_ll.eval'
^CF
======================================================================
FAIL: test_check_valid_data (__main__.qa_validate_input)
----------------------------------------------------------------------
Traceback (most recent call last):
File "qa_validate_input.py", line 47, in test_check_valid_data
self.assertRaises(ValueError, self.tb.run)
AssertionError: ValueError not raised by run
----------------------------------------------------------------------
Ran 1 test in 1.634s
FAILED (failures=1)
The top_block's run() function does not call the block's work() function directly; it starts the internal task scheduler and its threads and waits for them to finish. The exception is therefore raised inside a scheduler thread and never propagates to assertRaises in the test thread, which is why the test hangs and then fails.
One way to unit test the error handling in your block is to call the work() function directly:
def test_check_valid_data(self):
    src_data = [[0, 201, 92]]
    output_items = [[]]
    validate = validate_input()
    self.assertRaises(ValueError, lambda: validate.work(src_data, output_items))
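For completeness, a hypothetical companion test under the same direct-call approach, checking that valid input is copied through without raising:

def test_valid_data_passes_through(self):
    # input below max_input should pass through to the output unchanged
    src_data = [np.array([0, 50, 92], dtype=np.float32)]
    output_items = [np.zeros(3, dtype=np.float32)]
    validate = validate_input()
    n = validate.work(src_data, output_items)
    self.assertEqual(n, 3)
    np.testing.assert_array_equal(output_items[0], src_data[0])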
I've been playing with Python 2.7 for a while, and I'm now trying to make my own encryption/decryption algorithm.
I'm trying to make it support non-ASCII characters.
So this is part of the dictionary:
... u'\xe6': '1101100', 'i': '0001000', u'\xea': '1100001', 'm': '0001100', u'\xee': '1100111', 'q': '0010000', 'u': '0010100', u'\xf6': '1110010', 'y': '0011000', '}': '1001111'}
But when I try to convert, for example, "é" to binary, by doing
base64 = encrypt[i]
where encrypt is the name of the dict and i = u"é",
I get this error:
Warning (from warnings module):
File "D:\DeskTop 2\Programs\Projects\4.py", line 174
base64 = encrypt[i]
UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
Traceback (most recent call last):
File "D:\DeskTop 2\Programs\Projects\4.py", line 197, in
main()
File "D:\DeskTop 2\Programs\Projects\4.py", line 196, in main
decryption(key, encrypt, decrypt)
File "D:\DeskTop 2\Programs\Projects\4.py", line 174, in decryption
base64 = encrypt[i]
KeyError: '\xf1'
Also, I did start with
# -*- coding: utf-8 -*-
Alright, sorry for the useless post.
I found the fix. Basically, I did :
for i in user_input:
    base64 = encrypt[i]
but i would be a raw byte like '\xf1'.
I added
j = i.decode("latin-1")
so j = u'\xf1'.
And now it works :D
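For anyone hitting the same KeyError, the fixed loop as a sketch (Python 2: iterating over a byte string yields single bytes such as '\xf1', which never match unicode keys like u'\xf1' until decoded):

for i in user_input:
    # decode each raw byte to unicode so it matches the dict's unicode keys
    j = i.decode("latin-1")   # byte '\xf1' -> u'\xf1'
    base64 = encrypt[j]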