AttributeError: 'module' object has no attribute 'GoogleCredentials'
I have an App Engine app which is running on localhost.
I have some tests which I run, and I want to use the remote_api to check the database values.
When I try to access the remote_api by visiting:
'http://127.0.0.1:8080/_ah/remote_api'
I get a:
"This request did not contain a necessary header"
but it works in the browser.
When I now try to call the remote_api from my tests by calling
remote_api_stub.ConfigureRemoteApiForOAuth('localhost:35887','/_ah/remote_api')
I get the error:
Error
Traceback (most recent call last):
File "/home/dan/src/gtup/test/test_users.py", line 38, in test_crud
remote_api_stub.ConfigureRemoteApiForOAuth('localhost:35887','/_ah/remote_api')
File "/home/dan/Programs/google-cloud-sdk/platform/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 747, in ConfigureRemoteApiForOAuth
credentials = client.GoogleCredentials.get_application_default()
AttributeError: 'module' object has no attribute 'GoogleCredentials'
I tried reinstalling the whole Google Cloud SDK, but this didn't work.
When I open the client.py at
google-cloud-sdk/platform/google_appengine/lib/google-api-python-client/oauth2client/client.py
which is used by remote_api_stub.py, I can see that there is no GoogleCredentials class inside it.
The GoogleCredentials class does exist, but in other client.py files, which are located at:
google-cloud-sdk/platform/google_appengine/lib/oauth2client/oauth2client/client.py
google-cloud-sdk/platform/gsutil/third_party/oauth2client/oauth2client/client.py
google-cloud-sdk/platform/bq/third_party/oauth2client/client.py
google-cloud-sdk/lib/third_party/oauth2client/client.py
My app.yaml looks like this:
application: myapp
version: 1
runtime: python27
api_version: 1
threadsafe: true
libraries:
- name: webapp2
  version: latest
builtins:
- remote_api: on
handlers:
- url: /.*
  script: main.app
Is this just a wrong import/bug inside App Engine?
Or am I doing something wrong in using the remote_api inside my unit tests?
I solved this problem by replacing the folder:
../google-cloud-sdk/platform/google_appengine/lib/google-api-python-client/oauth2client
with:
../google-cloud-sdk/platform/google_appengine/lib/oauth2client/oauth2client
The copy that now gets included from the google-api-python-client folder has the needed GoogleCredentials class in its client.py file.
Then I had a second problem with the connection, and now I have to call:
remote_api_stub.ConfigureRemoteApiForOAuth('localhost:51805','/_ah/remote_api', False)
Note that the port changes every time the server is restarted.
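For context, here is a minimal sketch of how that call might be wired into a unittest setUp. The port number is hypothetical: read it from the dev server's startup output, since it changes on every restart. The third positional argument is secure, passed as False because the local dev server serves plain HTTP:

import unittest

from google.appengine.ext.remote_api import remote_api_stub


class RemoteDatastoreTest(unittest.TestCase):

    def setUp(self):
        # secure=False (third positional argument): the local
        # dev_appserver speaks plain HTTP, not HTTPS.
        remote_api_stub.ConfigureRemoteApiForOAuth(
            'localhost:51805', '/_ah/remote_api', False)

    def test_crud(self):
        # Datastore calls made here now go through the remote API
        # to the locally running server.
        pass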
Answering instead of commenting as I cannot post a comment with my reputation -
Similar things have happened to me when running these types of scripts on a Mac. Sometimes your PATH variable gets confused about which files to actually check for functions, especially when you have gcloud installed alongside the App Engine launcher. If on a Mac, I would suggest editing/opening your ~/.bash_profile file to fix this (or possibly ~/.bashrc, if on Linux). For example, on my Mac I have the following lines to fix my PATH variable:
export PATH="/usr/local/bin:$PATH"
export PYTHONPATH="/usr/local/google_appengine:$PYTHONPATH"
These basically make sure the shell will look in /usr/local/bin (or /usr/local/google_appengine, in the case of the PYTHONPATH line) BEFORE anything else in the PATH (or PYTHONPATH).
The PATH variable is where the shell looks for executables when you type a command at the prompt. PYTHONPATH is where Python looks for the modules it loads at runtime.
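As a quick sanity check, here is a small sketch (assuming some copy of oauth2client is importable at all) that asks Python which client.py it actually resolves and whether that copy defines the class:

import oauth2client.client

# Show which client.py was picked up from the module search path,
# and whether that copy actually defines GoogleCredentials.
print(oauth2client.client.__file__)
print(hasattr(oauth2client.client, 'GoogleCredentials'))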
I have a Jar (we'll call it a.jar) with a resource in it at path foo/bar.txt and a function as follows:
object FooBarLoader {
    fun loadFooBarText() = javaClass.getResourceAsStream("foo//bar.txt")
        ?.bufferedReader()
        ?.readLines()
        ?.joinToString("\n")
}
When I test the function in a unit test (JUnit 4, running with Gradle 6), it loads the text from the resource file despite the obvious typo (the // in the middle of the resource path).
I also have a CLI application (in b.jar) that has a dependency on a.jar. When the CLI application called loadFooBarText(), it got a null result due to the resource not being found. This was fixed by correcting the typo (// -> /) in the function in a.jar. No other changes were needed to fix it.
So, my question is: why did the wrong path work in one situation (unit tests of a.jar) and not the other (the call from b.jar)?
How do you run the unit test for a.jar? Do you run it from your IDE, or with the command java -jar a.jar?
If you ran it from the IDE, I think the difference is in how paths are resolved for local files versus zip files.
Your unit test finds the resource among the local files in your build output directory, while the second application looks for it inside the jar, which is a compressed file.
When a path points at the local filesystem, the operating system normalizes it for you. The two commands below are equivalent on both Windows and Linux:
cd work//abc/ddd
cd work/abc/ddd
But when looking a file up inside a jar, which is really a compressed zip file, the path must be written exactly, or the program will find nothing.
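To make that concrete, here is a small sketch (in Python, purely for illustration; 'a.jar' stands in for your actual artifact) showing that zip entries are matched as literal strings, with no path normalization:

import zipfile

# A jar is just a zip archive; probe both spellings of the path.
# Only the literal entry name stored in the archive matches.
with zipfile.ZipFile('a.jar') as jar:
    names = jar.namelist()
    print('foo/bar.txt' in names)   # True if the resource is present
    print('foo//bar.txt' in names)  # False: entry names are not normalized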
I'm using hydra to log hyperparameters of experiments.
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(config_name="config", config_path="../conf")
def evaluate_experiment(cfg: DictConfig) -> None:
    print(OmegaConf.to_yaml(cfg))
    ...
Sometimes I want to do a dry run to check something. For this I don't need any saved parameters, so I'm wondering how I can disable writing to the filesystem completely in this case?
The answer from Omry Yadan works well if you want to solve this using the CLI. However, you can also add these flags to your config file so that you don't have to type them every time you run your script. If you want to go this route, make sure you add the following items to your root config file:
defaults:
  - _self_
  - override hydra/hydra_logging: disabled
  - override hydra/job_logging: disabled

hydra:
  output_subdir: null
  run:
    dir: .
There is an enhancement request aimed at Hydra 1.1 to support disabling working directory management.
Working directory management does several things:
Creating a working directory for the run
Changing the working directory to the created dir.
There are other related features:
Saving log files
Saving files like config.yaml and hydra.yaml into .hydra in the working directory.
Different features have different ways to be disabled:
To prevent the creation of a working directory, you can override hydra.run.dir to . (the current directory).
To prevent saving the files into .hydra, override hydra.output_subdir to null.
To prevent the creation of log files, you can disable the logging output of hydra/hydra_logging and hydra/job_logging, see this.
A complete example might look like:
$ python foo.py hydra.run.dir=. hydra.output_subdir=null hydra/job_logging=disabled hydra/hydra_logging=disabled
Note that as always you can also override those config values through your config file.
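If the dry run happens inside a script or test, another option is Hydra's Compose API, which composes the config without creating an output directory or configuring any logging. A minimal sketch, assuming Hydra 1.1+ (in 1.0 these functions live under hydra.experimental) and the same config layout as the question:

from hydra import compose, initialize
from omegaconf import OmegaConf

# Compose the config directly; no run directory, no .hydra folder,
# and no log files are created by this path.
with initialize(config_path="../conf"):
    cfg = compose(config_name="config")
    print(OmegaConf.to_yaml(cfg))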
I'm trying to create a snap package of a Qt/QML application. The application is packaged well, but when I try to run it I get a /snap/swipe-app/x2/bin/qt5-launch: 74: exec: application: not found error.
Here's my snapcraft.yaml file:
name: swipe-app # you probably want to 'snapcraft register <name>'
version: '0.1' # just for humans, typically '1.2+git' or '1.3.2'
summary: Single-line elevator pitch for your amazing snap # 79 char long summary
description: description
grade: devel # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  swipe-app:
    command: qt5-launch application
    plugs:
      - unity7
      - home

parts:
  application:
    # See 'snapcraft plugins'
    plugin: qmake
    project-files: ["snap.pro"]
    source: .
    build-packages:
      - qtbase5-dev
    stage-packages:
      # Here for the plugins -- they're not linked in automatically.
      - libqt5gui5
    after: [qt5conf] # A wiki part
As you have told the launch script that your program is called application, it will try to execute application from the current working directory when you run your snap. There are two things to note here:
The working directory is preserved from the terminal outside the snap context. For example, if you are in your home directory /home/your-user, then the working directory for swipe-app will also be /home/your-user.
As the working directory above is your home directory, commands without any anchor, such as application, will try to execute in your home directory. So in your example the launch script will attempt to run the equivalent of /home/your-user/application.
You can fix this by either ensuring that the launch script executes a cd to change the working directory, e.g. cd $SNAP, or by anchoring your command, e.g. command: qt5-launch $SNAP/application.
Another thing you might need to check is that your qmake build actually outputs a binary called application. If you have not set TARGET in your snap.pro project file, then the binary will default to being called snap (after the project file name), not application. The line should read TARGET=application to make a binary called application (ref: https://doc.qt.io/qt-5/qmake-variable-reference.html#target).
I know you can open files from the Symfony profiler or exception file links using this in project/app/config.yml:
framework:
    ide: "phpstorm://open?file=%%f&line=%%l"
More info: http://developer.happyr.com/open-files-in-phpstorm-from-you-symfony-application
However, as I'm using Vagrant, the file paths on the server don't match those on my host.
I have created a PHP web application server in PhpStorm with the proper path mappings, but it still doesn't work.
Any ideas?
Thanks
When running your app in a container or in a virtual machine, you can tell Symfony to map files from the guest to the host by changing their prefix. This map should be specified at the end of the URL template, using & and > as guest-to-host separators:
// /path/to/guest/.../file will be opened
// as /path/to/host/.../file on the host
'phpstorm://%f:%l&/path/to/guest/>/path/to/host/&/foo/>/bar/&...'
Symfony FrameworkBundle Configuration - IDE
The answer given by Jeffry no longer works, unfortunately. When I configure that with my paths, the profiler throws:
ParameterNotFoundException
You have requested a non-existent parameter "f:".
I have configured the path according to this line in the Symfony docs: "This map should be specified at the end of the URL template", which results in this:
phpstorm://open?url=file://%%f&line=%%l&/path/to/guest/>/path/to/host/
However, while it does open PhpStorm, PhpStorm does not open the file, so I'm a bit stuck here now.
This solves the issue with the file not opening in PhpStorm from a Vagrant box:
phpstorm://open?file=%%f&line=%%l&/path/to/guest/>/path/to/host/
Source: https://youtrack.jetbrains.com/issue/IDEA-65879
I'm using Symfony 2 and have this row in my parameters.ini:
database_driver = pdo_pgsql
When I was creating the database structure with Doctrine, everything was good. But when I want to add a Doctrine object to my database (insert a row), I catch an exception:
What do I have to do about this?
Are you sure you're using pdo_pgsql? Are you running on localhost? It may well be that you are using the pdo_mysql driver instead.
In any case, you have to check the following:
php.ini
extension=pdo.so
extension=pdo_mysql.so
or in your case
extension=pdo.so
extension=pdo_pgsql.so
You can check phpinfo() to find out the configured database driver.
In your Symfony project you have to check the parameters.ini file in the config folder, e.g.:
[parameters]
database_driver="pdo_mysql"
database_host="localhost"
Also, try to avoid this error:
'stty' is not recognized as an internal or external command,
operable program or batch file.
https://github.com/symfony/symfony/issues/4974
First of all, verify your php.ini file: the extensions php_pdo_pgsql and php_pdo must be enabled. Make sure you apply these changes to the php.ini file that your Symfony project is actually using; you can check this at localhost/path_to_your_project/web/config.php. You can tell whether these extensions are enabled by executing the function phpinfo().
The command php -m is also helpful: it lists on the console all the PHP modules that are loaded.
Tip: check your Apache error log; there could be something wrong with the loading of your extensions. The location of this file depends on your server configuration.