TypeError: Expected bytes While printing Any Report Using Client Database in OpenERP 7.0 - report

I am using a client database; it restores successfully on my local system and works fine, but when I print any report within that database I get the traceback below in the terminal.
Traceback (most recent call last):
File "/home/best/workspace/dynaweld/web/addons/web/http.py", line 285, in dispatch
r = method(self, **self.params)
File "/home/best/workspace/dynaweld/web/addons/web/controllers/main.py", line 1769, in index
cookies={'fileToken': int(token)})
File "/home/best/workspace/dynaweld/web/addons/web/http.py", line 332, in make_response
response.set_cookie(k, v)
File "/usr/local/lib/python2.7/dist-packages/Werkzeug-0.10.4-py2.7.egg/werkzeug/wrappers.py", line 1008, in set_cookie
self.charset))
File "/usr/local/lib/python2.7/dist-packages/Werkzeug-0.10.4-py2.7.egg/werkzeug/http.py", line 920, in dump_cookie
value = to_bytes(value, charset)
File "/usr/local/lib/python2.7/dist-packages/Werkzeug-0.10.4-py2.7.egg/werkzeug/_compat.py", line 106, in to_bytes
raise TypeError('Expected bytes')
TypeError: Expected bytes
I have tried the following ways to resolve the traceback above, but have not yet succeeded:
1. Removed unwanted data from my local client database, including all records of the mail.message object.
2. Removed all unnecessary databases from my system, keeping only 2-3 databases for the OpenERP server.
3. Cleaned my PC of unwanted files and other data not relevant to my database.
4. Checked the available disk space; there is enough space to restore the database file.
Can anyone help me fix this issue?

This is because cookies are not intended to support unicode characters; you must pass an encoded (byte string) value for the cookie you are trying to set. Something like:
set_cookie(k, bytes(v))
or at least send your variable as bytes.
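As a minimal sketch (Python 2), the value can be coerced to a byte string before it reaches Werkzeug; this mirrors the set_cookie loop from make_response in addons/web/http.py shown in the traceback, with illustrative variable names:
for k, v in cookies.items():
    # Werkzeug 0.10+ raises TypeError('Expected bytes') for non-string values
    # such as the int(token) seen above, so coerce to a byte string first.
    if isinstance(v, unicode):
        v = v.encode('utf-8')
    elif not isinstance(v, str):
        v = str(v)
    response.set_cookie(k, v)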

I fixed this by installing an older version of Werkzeug, 0.6.2.

Related

GCP Composer v1.18.6 and 2.0.10 incompatible with CloudSqlProxyRunner

In my Composer Airflow DAGs, I have been using the CloudSqlProxyRunner to connect to my Cloud SQL instance.
However, after updating Google Cloud Composer from v1.18.4 to 1.18.6, my DAG started to encounter a strange error:
[2022-04-22, 23:20:18 UTC] {cloud_sql.py:462} INFO - Downloading cloud_sql_proxy from https://dl.google.com/cloudsql/cloud_sql_proxy.linux.x86_64 to /home/airflow/dXhOYoU_cloud_sql_proxy.tmp
[2022-04-22, 23:20:18 UTC] {taskinstance.py:1702} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1330, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1457, in _execute_task_with_callbacks
result = self._execute_task(context, self.task)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1513, in _execute_task
result = execute_callable(context=context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/decorators/base.py", line 134, in execute
return_value = super().execute(context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/python.py", line 174, in execute
return_value = self.execute_callable()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/python.py", line 185, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/home/airflow/gcs/dags/real_time_scoring_pipeline.py", line 99, in get_messages_db
with SQLConnection() as sql_conn:
File "/home/airflow/gcs/dags/helpers/helpers.py", line 71, in __enter__
self.proxy_runner.start_proxy()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_sql.py", line 524, in start_proxy
self._download_sql_proxy_if_needed()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_sql.py", line 474, in _download_sql_proxy_if_needed
raise AirflowException(
airflow.exceptions.AirflowException: The cloud-sql-proxy could not be downloaded. Status code = 404. Reason = Not Found
Checking manually, https://dl.google.com/cloudsql/cloud_sql_proxy.linux.x86_64 indeed returns a 404.
Looking at the function that raises the exception, _download_sql_proxy_if_needed, it has this code:
system = platform.system().lower()
processor = os.uname().machine
if not self.sql_proxy_version:
    download_url = CLOUD_SQL_PROXY_DOWNLOAD_URL.format(system, processor)
else:
    download_url = CLOUD_SQL_PROXY_VERSION_DOWNLOAD_URL.format(
        self.sql_proxy_version, system, processor
    )
So, for whatever reason, in both of these latest images of Composer, processor = os.uname().machine returns x86_64. Previously, it returned amd64, and https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 is in fact a valid link to the binary we need.
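To make the difference concrete, here is a small illustration; the template value below is an assumption reconstructed from the URLs above, not copied from the provider source:
# hypothetical reconstruction of the template, for illustration only
CLOUD_SQL_PROXY_DOWNLOAD_URL = "https://dl.google.com/cloudsql/cloud_sql_proxy.{}.{}"

print(CLOUD_SQL_PROXY_DOWNLOAD_URL.format("linux", "x86_64"))  # broken link (404)
print(CLOUD_SQL_PROXY_DOWNLOAD_URL.format("linux", "amd64"))   # valid link to the binary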
I replicated this error in Composer 2.0.10 as well.
I am still investigating possible workarounds, but am posting this here in case someone else encounters this issue and has figured out a workaround, and to raise it with Google engineers (who, according to Composer's docs, monitor this tag).
My current workaround is patching the CloudSqlProxyRunner to hardcode the correct URL:
import os
import shutil
from inspect import signature

import httpx
from airflow.exceptions import AirflowException
from airflow.providers.google.cloud.hooks.cloud_sql import CloudSqlProxyRunner


class PatchedCloudSqlProxyRunner(CloudSqlProxyRunner):
    """
    This is a patched version of CloudSqlProxyRunner to provide a workaround for an incorrectly
    generated URL to the Cloud SQL proxy binary.
    """

    def _download_sql_proxy_if_needed(self) -> None:
        # hardcode the known-good amd64 download URL
        download_url = "https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64"
        # the rest of the code is taken from the original method
        proxy_path_tmp = self.sql_proxy_path + ".tmp"
        self.log.info(
            "Downloading cloud_sql_proxy from %s to %s", download_url, proxy_path_tmp
        )
        # httpx has a breaking API change (follow_redirects vs allow_redirects)
        # and this should work with both versions (cf. issue #20088)
        if "follow_redirects" in signature(httpx.get).parameters.keys():
            response = httpx.get(download_url, follow_redirects=True)
        else:
            response = httpx.get(download_url, allow_redirects=True)  # type: ignore[call-arg]
        # Downloading to .tmp file first to avoid case where partially downloaded
        # binary is used by parallel operator which uses the same fixed binary path
        with open(proxy_path_tmp, "wb") as file:
            file.write(response.content)
        if response.status_code != 200:
            raise AirflowException(
                "The cloud-sql-proxy could not be downloaded. "
                f"Status code = {response.status_code}. Reason = {response.reason_phrase}"
            )
        self.log.info(
            "Moving sql_proxy binary from %s to %s", proxy_path_tmp, self.sql_proxy_path
        )
        shutil.move(proxy_path_tmp, self.sql_proxy_path)
        os.chmod(self.sql_proxy_path, 0o744)  # Set executable bit
        self.sql_proxy_was_downloaded = True
And then instantiate it and use it as I would the original CloudSqlProxyRunner:
proxy_runner = PatchedCloudSqlProxyRunner(path_prefix, instance_spec)
proxy_runner.start_proxy()
But I am hoping that this is properly fixed by someone at Google soon, either by fixing the os.uname().machine value, or by uploading a Cloud SQL proxy binary at the URL currently generated in _download_sql_proxy_if_needed.
As mentioned by #enocom, this commit to support arm64 download links had the side effect of generating broken download links. I assume the author of the commit thought that the Cloud SQL Proxy had a binary for each machine type, although in fact there is no Linux x86_64 link.
I have created an Airflow PR to fix the broken links; hopefully it will get merged soon and resolve this. I will update the thread with any news.
Update (I've been working with Jack on this): I just merged that PR! When a new version of the providers is added to PyPI, you'll need to add it to your Composer environment. In the meantime, as a workaround, you could take the fix from Jack's PR and use it as a local dependency. (Similar to the other reply here!) If you do this, I highly recommend setting a calendar reminder (maybe a month from now?) to remove the workaround and go back to importing from the provider package, just to make sure you don't miss out on other updates to it! :)

update http_default connection airflow

In the Airflow admin site, when I update the http_default connection, the HTTP sensor gives the following error:
ERROR - Could not create Fernet object: Incorrect padding
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 173, in get_fernet
_fernet = Fernet(fernet_key.encode('utf-8'))
File "/usr/local/lib/python3.6/site-packages/cryptography/fernet.py", line 35, in init
key = base64.urlsafe_b64decode(key)
File "/usr/local/lib/python3.6/base64.py", line 133, in urlsafe_b64decode
return b64decode(s)
File "/usr/local/lib/python3.6/base64.py", line 87, in b64decode
return binascii.a2b_base64(s)
binascii.Error: Incorrect padding
It seems your $FERNET_KEY is not set.
Can you check the output of echo $FERNET_KEY?
Can you also check the fernet_key = entry in your airflow.cfg?
If those are empty, you can generate a new one with some Python code:
from cryptography.fernet import Fernet
print(Fernet.generate_key().decode())
Then set this value in your airflow.cfg under fernet_key =.
Alternatively you can also set it via export AIRFLOW__CORE__FERNET_KEY=your_fernet_key (this gives you more flexibility if you are building your environment dynamically).
Important to keep in mind
The Fernet key is used to encrypt your connections' credentials, so you need to keep it safe if you want to be able to decrypt them later. If you created some connections with another Fernet key before and then generated a new one as described above, your old connections won't work and will have to be recreated once the new key is in place.
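To see why a changed key breaks existing connections, here is a small sketch using the cryptography package directly (the password value is just an example):
from cryptography.fernet import Fernet

old_key = Fernet.generate_key()
new_key = Fernet.generate_key()

# Airflow encrypts connection credentials with the configured fernet_key
token = Fernet(old_key).encrypt(b"my-connection-password")
print(Fernet(old_key).decrypt(token))  # b'my-connection-password'
Fernet(new_key).decrypt(token)         # raises cryptography.fernet.InvalidToken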

SQLite3 Disk IO Error

I'm experiencing a disk I/O error during a query on a SQLite database. I have several large (95 GB) databases with the same schema, and am trying to run the same query on all of them. Two run fine, returning ~28,000,000 results; one hits a 'disk I/O error', both when run via SQLAlchemy and from the command line.
If I limit the query to return ~10,000,000 results, I don't get the error, but I need the full output, as these are photons from a large physics simulation.
The full error message in SQLalchemy is:
Traceback (most recent call last):
File "/Users/swm1n12/anaconda/lib/python3.5/site-packages/sqlalchemy/engine/result.py", line 1120, in fetchall
l = self.process_rows(self._fetchall_impl())
File "/Users/swm1n12/anaconda/lib/python3.5/site-packages/sqlalchemy/engine/result.py", line 1071, in _fetchall_impl
return self.cursor.fetchall()
sqlite3.OperationalError: disk I/O error
And for sqlite3 on the command line:
Error: disk I/O error
Can anyone tell me how I'd be able to figure out what's going wrong?
(I'm aware that for databases of this size SQLite is not a great choice; they turned out to be bigger than I expected, and ~academic publishing pressures~ mean that retooling the SQL setup and rerunning the code is not practical.)
In the sqlite3 command-line shell, you can use .log stderr to see the OS errors.
But "disk I/O error" means that there is an error on your disk. Throw it away, and restore the database from the backup.

Error while running depswriter.py from google closure library

I am trying to build XTK following this link on Linux running in Oracle VirtualBox, to get a non-minified xtk.js. I get the following error when I try to generate xtk-deps.js by running the deps.py file:
Generating dependency file for XTK...
Traceback (most recent call last):
File "/root/Downloads/X-master/lib/google-closure-library/closure/bin/build/depswriter.py", line 212, in <module>
main()
File "/root/Downloads/X-master/lib/google-closure-library/closure/bin/build/depswriter.py", line 196, in main
path_to_source[depspath] = source.Source(source.GetFileContents(srcpath))
File "/root/Downloads/X-master/lib/google-closure-library/closure/bin/build/source.py", line 126, in GetFileContents
return fileobj.read()
File "/usr/lib/python2.7/codecs.py", line 668, in read
return self.reader.read(size)
File "/usr/lib/python2.7/codecs.py", line 474, in read
newchars, decodedbytes = self.decode(data, self.errors)
File "/usr/lib/python2.7/encodings/utf_8_sig.py", line 104, in decode
return codecs.utf_8_decode(input, errors)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x9a in position 4584: invalid start byte
Could not generate dependency file.
Could anybody please explain why this error occurs?
There are probably some non-UTF-8 characters in your code (most likely in X.js).
In my case, for example, I found a non-English word (maybe a German or French name) on line #210 of XTK's X.js. I deleted the character and ran build.py again, and the encoding error did not appear again.
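One way to locate the offending character without hunting by eye is to decode the file yourself and report where decoding fails; a small sketch (the 'X.js' path is just an example):
# report the first non-UTF-8 byte in a file and the line it is on
with open('X.js', 'rb') as f:
    data = f.read()
try:
    data.decode('utf-8')
    print('File is valid UTF-8')
except UnicodeDecodeError as err:
    line = data[:err.start].count(b'\n') + 1
    print('Invalid byte %r at offset %d (line %d)' % (data[err.start], err.start, line))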
What worked for me was using an earlier commit of the Google Closure Library to build XTK; it worked perfectly.
I had to search XTK's commit history extensively to find out which version of the Closure Library they were using to build it.
PS: I posted a similar solution here earlier, but that post was deleted by a moderator, so I am sharing it here again.

Wapiti crashes my ASP.NET project. Why? How do I fix it?

Here's one scan from Wapiti. I noticed that when I had images uploaded (users can upload) I got a crash before "Launching module crlf". So, using a fresh instance of my site, I ran the scan and got the result below.
My questions are:
1. How do I fix the crashes?
2. How might I find out what is causing the crash? I used -v 2 to figure out the URLs and log them in my app. In both cases I don't see any issues, and the project crashes outside of my code.
3. How do I solve the unicode warning below?
Wapiti-2.2.1 (wapiti.sourceforge.net)
..............................
Notice
========
This scan has been saved in the file C:\unzipped\wapiti-2.2.1\wapiti-2.2.1\src/scans/localhost:17357.xml
You can use it to perform attacks without scanning again the web site with the "-k" parameter
[*] Loading modules :
mod_crlf, mod_exec, mod_file, mod_sql, mod_xss, mod_backup, mod_htaccess, mod_blindsql, mod_permanentxss, mod_nikto
[+] Launching module crlf
[+] Launching module exec
[+] Launching module file
[+] Launching module sql
C:\unzipped\wapiti-2.2.1\wapiti-2.2.1\src\attack\mod_sql.py:185: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
if (page, tmp) not in self.attackedPOST:
[+] Launching module xss
Traceback (most recent call last):
File "wapiti.py", line 449, in <module>
wap.attack()
File "wapiti.py", line 266, in attack
x.attack(self.urls, self.forms)
File "C:\unzipped\wapiti-2.2.1\wapiti-2.2.1\src\attack\attack.py", line 121, in attack
self.attackGET(page, dictio, headers)
File "C:\unzipped\wapiti-2.2.1\wapiti-2.2.1\src\attack\mod_xss.py", line 71, in attackGET
self.findXSS(page, {}, "", code, "", payloads, headers["link_encoding"])
File "C:\unzipped\wapiti-2.2.1\wapiti-2.2.1\src\attack\mod_xss.py", line 306, in findXSS
dat = self.HTTP.send(url).getPage()
File "C:\unzipped\wapiti-2.2.1\wapiti-2.2.1\src\net\HTTP.py", line 94, in send
info, data = self.h.request(target, headers = _headers)
File "C:\unzipped\wapiti-2.2.1\wapiti-2.2.1\src\net\httplib2\__init__.py", line 1084, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "C:\unzipped\wapiti-2.2.1\wapiti-2.2.1\src\net\httplib2\__init__.py", line 888, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "C:\unzipped\wapiti-2.2.1\wapiti-2.2.1\src\net\httplib2\__init__.py", line 853, in _conn_request
response = conn.getresponse()
File "C:\dev\bin\Python26\lib\httplib.py", line 974, in getresponse
response.begin()
File "C:\dev\bin\Python26\lib\httplib.py", line 391, in begin
version, status, reason = self._read_status()
File "C:\dev\bin\Python26\lib\httplib.py", line 349, in _read_status
line = self.fp.readline()
File "C:\dev\bin\Python26\lib\socket.py", line 397, in readline
data = recv(1)
socket.error: [Errno 10054] An existing connection was forcibly closed by the remote host
Wapiti can crash applications because it exercises a lot of your application. Wapiti stack-traced during an XSS test, and I don't think an XSS test alone can crash an application; however, submitting a lot of one type of request could cause a DoS condition. You need to track down the last request that Wapiti made. Wapiti has a verbose mode (-v) that prints out every request it makes. Once you have the request that is crashing the site, you should review it manually.
Wapiti's blind SQL injection attack module uses MySQL's benchmark() function, which WILL DoS your MySQL server; I recommend turning this module off if you are having trouble scanning your entire site.
