py2app: Compiles app but app has error on opening - python-3.6

I am working on a Python 3.6 application that I would like to convert into a standalone app that can be ported easily to other machines. I tried using py2app as in this tutorial.
Everything works well until I get to the point of actually creating the app. The build does not throw any errors and produces the .app file, but when I try to run it, a window pops up saying there is an error and gives me the options of terminating or opening the console. I tried opening the console but cannot find any substantive information in the error messages.
These are the import statements that I have:
from urllib.request import urlopen, build_opener
from bs4 import BeautifulSoup, SoupStrainer
import ssl
import urllib
import sys
import subprocess
from tkinter import *
from tkinter.ttk import *
import webbrowser
from unidecode import unidecode
As far as I know, the only two packages that aren't part of the standard library are bs4 and unidecode. My setup.py file looks like this:
from setuptools import setup
APP = ['GUImain.py']
DATA_FILES = ['logo.png']
OPTIONS = {'argv_emulation': True,
           'iconfile': 'logo.png',
           'includes': ['unidecode', 'bs4']}
setup(
    app=APP,
    data_files=DATA_FILES,
    options={'py2app': OPTIONS},
    setup_requires=['py2app'],
)
I haven't found any other reports of errors like this in my searching. I have seen some suggestions that py2app doesn't fully support Python 3.6. Does anyone know how I can figure out what error is being thrown? Any suggestions on different tools to use, and tutorials on how to use them?
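One way to surface the hidden traceback (a minimal sketch; the path assumes py2app's default dist/ layout and the GUImain name from APP in setup.py) is to run the bundled executable directly from a terminal so its stderr is visible:
import subprocess

# Debugging sketch (assumed path): launch the py2app bundle's main executable
# directly so the real Python traceback is printed to the terminal instead of
# being hidden behind the generic error dialog.
result = subprocess.run(
    ["./dist/GUImain.app/Contents/MacOS/GUImain"],  # name comes from APP in setup.py
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
print(result.stdout.decode())
print(result.stderr.decode())  # py2app startup errors normally appear here
Building in alias mode during development (python setup.py py2app -A) also tends to give more direct error output, since the bundle links back to the source files instead of copying everything in.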

Related

Not able to import simpletransformers.ner in python as it says ImportError: cannot import name 'BertweetTokenizer'

I am trying to do BERT-based NER and am trying to import:
from simpletransformers.ner import NERModel, NERArgs
I have installed simpletransformers with pip, and when I try to import
import simpletransformers.ner
it says ImportError: cannot import name 'BertweetTokenizer'. When I try to install BertweetTokenizer, it throws:
ERROR: No matching distribution found for BertweetTokenizer
Not sure what I am missing. Kindly help.
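One hedged guess: BertweetTokenizer is a class inside the transformers package, not a separate pip package, so this ImportError usually points to an installed transformers version that is too old for the installed simpletransformers. A quick check (a sketch only; exact version requirements vary by release):
# Sketch: verify that the installed transformers release actually exposes
# BertweetTokenizer. If the import below fails, upgrading transformers and
# simpletransformers together is the usual fix, e.g.
#   pip install -U transformers simpletransformers
import transformers
print(transformers.__version__)

from transformers import BertweetTokenizer  # fails on old transformers releases
from simpletransformers.ner import NERModel, NERArgs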

push_notebook does not work in Google Colab Jupyter Notebook

I am using Bokeh on Google Colab. I wonder if anybody has used push_notebook in a Google Colab Jupyter notebook. I am trying to run the following code in a Jupyter notebook on Google Colab, but it gets stuck on the push_notebook() command:
from ipywidgets import interact
import numpy as np
from bokeh.io import push_notebook, show, output_notebook
from bokeh.plotting import figure

output_notebook()

x = np.linspace(0, 2 * np.pi, 2000)
y = np.sin(x)
p = figure(title="ff", plot_height=300, plot_width=600, y_range=(-5, 5))
r = p.line(x, y, color="red", line_width=2)

def update(f, w=1, A=1, phi=0):
    print("fff")
    if f == "sin":
        func = np.sin
    elif f == "cos":
        func = np.cos
    elif f == "tan":
        func = np.tan
    r.data_source.data['y'] = A * func(w * x + phi)
    push_notebook()

show(p, notebook_handle=True)
interact(update, f=["sin", "cos", "tan"], w=(0, 100), A=(1, 5), phi=(0, 20, 0.1))
Can anybody suggest what's wrong in the code and how it can be run on Google Colab?
push_notebook does not and cannot work on Google Colab, because Google's notebook implementation will not allow the necessary websocket connections to be opened. There is nothing that can be done about this until/unless Google makes changes on their end.
ref: https://github.com/bokeh/bokeh/issues/9302
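As a rough workaround, interactivity can be approximated by rebuilding and re-showing the figure inside the ipywidgets callback instead of pushing updates to an existing plot. This is only a sketch under the assumption that basic Bokeh output and ipywidgets work in your Colab runtime; it re-renders the whole plot on each change rather than updating it in place:
from ipywidgets import interact
import numpy as np
from bokeh.io import show, output_notebook
from bokeh.plotting import figure

output_notebook()
x = np.linspace(0, 2 * np.pi, 2000)

def update(f="sin", w=1, A=1, phi=0):
    # Re-create and re-display the figure on every widget change instead of
    # calling push_notebook(); no websocket connection is required this way.
    func = {"sin": np.sin, "cos": np.cos, "tan": np.tan}[f]
    p = figure(title=f, plot_height=300, plot_width=600, y_range=(-5, 5))
    p.line(x, A * func(w * x + phi), color="red", line_width=2)
    show(p)

interact(update, f=["sin", "cos", "tan"], w=(0, 100), A=(1, 5), phi=(0, 20, 0.1))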

how to port python urllib2 app (a web scraper) that uses Beautiful Soup 4 to use requests package instead

I am trying to update a web scraper app that uses Beautiful Soup 4 in Python 3 in Anaconda to use the Requests package instead of urllib, urllib2, and urllib3.
urllib and urllib2 don't exist in the Anaconda channels, and from what I have read, the requests package has made urllib and urllib2 obsolete. I am still rather new to Python programming for web scraping and don't yet fully understand all the concepts and internal subtleties of these four packages.
When I replace "urllib2.urlopen()" with "requests.get()", I get the following error:
import requests
from bs4 import BeautifulSoup
# replace the following line with "page = requests.get(url)"
# page = urllib2.urlopen(url)
page = requests.get(url)
soup_page = BeautifulSoup(page,"lxml")
I get the following error message with no explanation in the bs4 module:
File "C:\ProgramData\Anaconda3\lib\site-packages\bs4__init__.py", line 246, in init
elif len(markup) <= 256 and (
TypeError: object of type 'Response' has no len()
This error message puts me deep into the bowels of init.py in bs4.
I cannot find an explanation of how to port urllib or urllib2 code to requests with Beautiful Soup 4.
Can anyone provide an explicit guide on how to port urllib / urllib2 apps to use requests with beautiful soup in Python 3?
Anaconda / conda does not import urllib or urllib2 into Python 3 environments.
Thank you.
Rich
The error occurs because you're passing the Response object itself to BeautifulSoup rather than the HTML it contains. Pass page.text instead of the response object:
# page = urllib2.urlopen(url)
page = requests.get(url)
soup_page = BeautifulSoup(page.text, "lxml")
You may also want to read the requests documentation.
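For completeness, a minimal self-contained version of the port might look like this (the URL is a placeholder; raise_for_status() is optional but catches HTTP errors before you try to parse an error page):
import requests
from bs4 import BeautifulSoup

url = "https://example.com"          # placeholder URL
page = requests.get(url)
page.raise_for_status()              # raise on 4xx/5xx instead of parsing an error page
soup_page = BeautifulSoup(page.text, "lxml")   # page.content also works if you want raw bytes
print(soup_page.title)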

Qt Example: com.qmlqb.qmlcomponents is not installed

I am working through an example in the book, "Getting Started with Qt Quick", and the example code has a MainForm.ui.qml with the following import:
import com.qmlqb.qmlcomponents 1.0
When I try to run, I receive this error:
qrc:/MainForm.ui.qml:4 module "com.qmlqb.qmlcomponents" is not installed
What is this import? How can I install it?
So I learned what was going on. I was attempting to access a C++ class within Qt, and it was registered with the following statement in main.cpp:
qmlRegisterType ("Com.qmlqb.qmlcomponents",1,0,"MyClass");
The documentation on qmlRegisterType specifies that the first argument is the name for the library in which the type will be imported.
So, I then tried to import that library in MainForm.ui.qml, with the following import statement, which threw an error:
import com.qmlqb.qmlcomponents 1.0
As you can see, the capitalization is different between the two. After fixing that, everything was fine :)

Using python mechanize to log in websites

I am using Python mechanize to log in to the social website https://www.pinterest.com/login/. I run into an error:
HTTP Error 403: request disallowed by robots.txt
Here is my simple code.
import urllib
import re
import mechanize
browser=mechanize.Browser()
browser.open("https://www.pinterest.com/login/")
browser.select_form(nr=0)
browser.form['username_or_email']='xxx@example.com'
browser.form['password']='xxx'
browser.submit()
print browser
What is wrong? Thank you!
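The 403 comes from mechanize honouring robots.txt by default. A hedged sketch of the usual workaround (disable that check and send a browser-like User-Agent; note that Pinterest's login flow is JavaScript-heavy, so the form submission may still not succeed, and the site's terms of use should be respected):
import mechanize

browser = mechanize.Browser()
browser.set_handle_robots(False)                      # skip the robots.txt check
browser.addheaders = [("User-agent", "Mozilla/5.0")]  # look like a regular browser
browser.open("https://www.pinterest.com/login/")
browser.select_form(nr=0)
browser.form['username_or_email'] = 'xxx@example.com'
browser.form['password'] = 'xxx'
response = browser.submit()
print(response.code)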
