Sorry for asking the nth permutation of this question, but I'm stymied.
I'm running GAE for Python 2.5 on OS X, and I'm losing all data between reboots. From what I understand from related SO posts, the default location for the local datastore file is wiped with each reboot. I have tried changing the location to a central datastores directory with:
dev_appserver.py --datastore_path=/Users/Me/gae_apps/datastores /Users/Me/gae_apps/app_1
which doesn't generate an error, but when I fire up dev_appserver.py after rebooting, I see this output, and the data is again wiped:
WARNING 2011-07-14 17:50:56,297 urlfetch_stub.py:108] No ssl package found. urlfetch will not be able to validate SSL certificates.
INFO 2011-07-14 17:50:57,653 appengine_rpc.py:159] Server: appengine.google.com
INFO 2011-07-14 17:50:57,722 appcfg.py:453] Checking for updates to the SDK.
INFO 2011-07-14 17:50:58,448 appcfg.py:470] The SDK is up to date.
WARNING 2011-07-14 17:50:58,448 datastore_file_stub.py:511] Could not read datastore data from /var/folders/ps/psEgjl3fF+C5hecCKN2AW++++TI/-Tmp-/dev_appserver.datastore
INFO 2011-07-14 17:50:58,486 rdbms_sqlite.py:58] Connecting to SQLite database '' with file '/var/folders/ps/psEgjl3fF+C5hecCKN2AW++++TI/-Tmp-/dev_appserver.rdbms'
WARNING 2011-07-14 17:50:58,521 dev_appserver.py:4700] Could not initialize images API; you are likely missing the Python "PIL" module. ImportError: No module named _imaging
INFO 2011-07-14 17:50:58,689 dev_appserver_multiprocess.py:637] Running application portfolio on port 8080: http://localhost:8080
I should mention that I have several apps, all of which sit in separate directories under /Users/Me/gae_apps/.
I'm not sure whether this is related to the failure to read dev_appserver.datastore and the subsequent switch to SQLite.
Any help would be greatly appreciated. Thanks!
Using "--blobstore_path=/Users/me/Documents/workspace/app-name/ --datastore_path=/Users/me/Documents/workspace/app-name/datastore.rbm" is working for me on OS X.
I had this problem on Linux with one of the versions of GAE. What I did then was run dev_appserver.py without specifying --datastore_path. I then located dev_appserver.datastore and/or dev_appserver.rdbms (I forget which now), which were in /tmp on Linux, and copied both files to my ~/gae/datastore/.
After that, when I ran dev_appserver.py with --datastore_path, it worked without any issues.
I'm not sure whether it will work on OS X, but it's worth a shot.
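Roughly, the steps look like this (a sketch; the paths are from my Linux box, and on OS X the temp files will be under the /var/folders location shown in your log, with app_1 standing in for your app directory). Start the server once with the defaults, stop it, then:

cp /tmp/dev_appserver.datastore /tmp/dev_appserver.rdbms ~/gae/datastore/
dev_appserver.py --datastore_path=$HOME/gae/datastore/dev_appserver.datastore app_1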
Possibly a sledgehammer to crack a nut here, but if the patches are no help, you can script App Engine and force it to use a different path on startup.
Have you tried inserting any data into the datastore after starting up the server with the new datastore location? When I don't insert any new data, I get the error you mention: Could not read datastore data from ....
However, when I start up my app, register, and then restart the app, I get no errors, and the new datastore location is used.
Maybe I'm misreading your question and you are inserting data after the restart. In that case, I can't reproduce your issue.
I have been struggling with this issue for a very long time. This finally worked, thanks to the answer by said-omar.
Simply add this as your flag:
--datastore_path=/Users/me/Documents/workspace/app-name/datastore.rbm
...obviously changing everything before "/datastore.rbm" to point to the directory where you want the database stored.
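For example, assuming the app lives in /Users/me/Documents/workspace/app-name, the full command would be:

dev_appserver.py --datastore_path=/Users/me/Documents/workspace/app-name/datastore.rbm /Users/me/Documents/workspace/app-name

Note that the flag points at a file (datastore.rbm), not at a directory.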
I have an experiment in AzureML which has an R module at its core. Additionally, I have some .RData files stored in Azure blob storage. The blob container is set as private (no anonymous access).
Now, I am trying to make an HTTPS call from inside the R script to the Azure blob storage container in order to download some files. I am using the httr package's GET() function and have properly set up the URL, authentication, etc. The code works in R on my local machine, but the same code gives me the following error when called from inside the R module in the experiment:
error:1411809D:SSL routines:SSL_CHECK_SERVERHELLO_TLSEXT:tls invalid ecpointformat list
Apparently this is an error from the underlying OpenSSL library (which got fixed a while ago). Some suggested workarounds I found here were to set sslversion = 3 and ssl_verifypeer = 1, or to turn off verification with ssl_verifypeer = 0. Both approaches returned the same error.
I am guessing that this has something to do with the internal Azure certificate / validation...? Or maybe I am missing or overlooking something?
Any help or ideas would be greatly appreciated. Thanks in advance.
Regards
After a while, an answer came back from the support team, so I am posting the relevant part as an answer for anyone who lands here with the same problem.
"This is a known issue. The container (a sandbox technology known as "drawbridge" running on top of Azure PaaS VM) executing the Execute R module doesn't support outbound HTTPS traffic. Please try to switch to HTTP and that should work."
They also said that a fix is on the way:
"We are actively looking at how to fix this bug. "
Here is the original link as a reference.
hth
I have the following setup: Riak 1.4.12, Riak CS 1.5.3, Stanchion 1.5.0.
I am able to list bucket contents, and authentication works (I get a response when listing buckets, trying to remove a bucket, or PUTting a file), but I get an AccessDenied error when trying to create a bucket.
I found this thread http://riak-users.197444.n3.nabble.com/RIAK-CS-Unable-to-create-bucket-using-s3cmd-AccessDenied-td4032375.html and tried adding signature_v2 = True to .s3cfg with no success, and I've also tried three versions of s3cmd (1.5.0, 1.5.0alpha, 1.0.1). I also tried creating a bucket using the Python library boto (a sketch of the call is below), which also gives an access denied error.
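For reference, the boto attempt was roughly along these lines (a sketch; the host, port, and keys are placeholders for my actual Riak CS endpoint and credentials):

from boto.s3.connection import S3Connection, OrdinaryCallingFormat

# Placeholder endpoint and credentials for the local Riak CS node
conn = S3Connection(
    aws_access_key_id='MY-ACCESS-KEY',
    aws_secret_access_key='MY-SECRET-KEY',
    host='127.0.0.1',
    port=8080,
    is_secure=False,
    calling_format=OrdinaryCallingFormat(),
)
conn.create_bucket('test-bucket')  # fails with S3ResponseError: 403 AccessDenied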
I'm stumped :( Any suggestions on where I should look next would be greatly appreciated! I'm not sure where the logs for individual operations against Riak CS are - I've set the lager log level to debug and wasn't able to see anything in the logs.
Thanks!
Ambert
I posted the same question to the riak-users mailing list, and got an answer!
In my case, I had to set the admin.key and admin.secret in /etc/stanchion/stanchion.conf.
After setting them, s3cmd mb succeeded.
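For anyone else who lands here, the relevant lines in /etc/stanchion/stanchion.conf look like this (placeholder values - use the same admin key pair as in your Riak CS config), followed by a Stanchion restart:

admin.key = <your admin key>
admin.secret = <your admin secret>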
My original problem is that I want to increase my DynamoDB write throughput before I run the pipeline, and then decrease it when I'm done uploading (doing it at most once a day, so I'm fine with the limitations on decreases).
The only way I found to do it is through a shell script that will issue the API commands to alter the throughput. How does it work with my AMI access_key and secret_key when it's a resource that the pipeline creates for me? (I can't log in to set the ~/.aws/config file and don't really want to create an AMI just for this.)
Should I write the script in bash? Can I use the Ruby/Python AWS SDK packages, for example? (I prefer the latter.)
How do I pass my credentials to the script? Do I have runtime variables (like #startedDate) that I can pass as arguments to the activity with my key and secret? Do I have any other way to authenticate with either the command-line tools or the SDK packages?
If there is another way to solve my original problem, please let me know. I only arrived at the ShellActivity solution because I couldn't find anything else in the documentation/forums.
Thanks!
OK, found it: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-concepts-roles.html
The resourceRole in the default object of your pipeline will be the one assigned to resources (Ec2Resource) that are created as part of pipeline activation.
The default one is configured to have all your permissions, and the AWS command-line tools and SDK packages automatically look for those credentials, so there's no need to update ~/.aws/config or pass credentials manually.
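So a script run from a ShellCommandActivity can just call the SDK directly. As an illustration, here is a minimal boto sketch (the region and table name are placeholders) that raises the write throughput before an upload:

# Sketch only: the Ec2Resource's resourceRole credentials are picked up
# automatically by boto, so no keys are passed here.
import boto.dynamodb2
from boto.dynamodb2.table import Table

conn = boto.dynamodb2.connect_to_region('us-east-1')  # placeholder region
table = Table('my-table', connection=conn)            # placeholder table name
table.update(throughput={'read': 5, 'write': 1000})   # bump writes for the upload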
On a RedHat 6 server, a third-party application requires root to run and needs access to sqlplus. I have a running database, and I can run sqlplus as user 'oracle'. When logged in as root, 'sqlplus usr/pwd@dbname' works as expected. The trouble is that this agent needs to run sqlplus with no parameters, and it always returns ORA-12546: TNS:permission denied.
I've read a dozen times that enabling root to launch Oracle is a security issue, but I really have no other choice.
Running Oracle 11.2.0.1.0.
Any help will be much appreciated, as I've googled for 2 days with no success.
From the documentation, ORA-12546 is:
ORA-12546: TNS:permission denied
Cause: User has insufficient privileges to perform the requested operation.
Action: Acquire necessary privileges and try again.
Which isn't entirely helpful, but various forum and blog posts (way too many to link to; googling for the error shows a lot of similar advice) mention permissions on a particular part of the installation, $ORACLE_HOME/bin/oracle, which is a crucial and central part of most of the services.
Normally the permissions on that file would be -rwsr-s--x, with the file owned by oracle:dba, and this error can occur when the world-executable flag - the final x in that pattern - is not set. Anyone in the dba group will still be able to execute it, but those outside will not.
Your listener seems to be fine, as you can connect remotely by specifying @dbname in the connect string. The listener runs as oracle (usually; it could be grid with HA, RAC or ASM), so it is in the dba group and can happily hand off connections to an instance of the oracle executable.
When you connect without going via the listener, you have to be able to execute that file yourself. It appears that root cannot execute it (or possibly some other file, but this is usually the culprit, apparently), which implies the world-executable bit is indeed not set.
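You can check with something like:

ls -l $ORACLE_HOME/bin/oracle

If the mode shows as -rwsr-s--- rather than -rwsr-s--x, the world-execute bit is the missing piece.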
As far as I can see you have three options:
set the world-executable bit, with chmod o+x $ORACLE_HOME/bin/oracle; but that opens up the permissions for everyone, and presumably they've been restricted for a reason;
add root to the dba group, via usermod or by editing /etc/group; this potentially weakens security as well;
use SQL*Net even when you don't specify @dbname in the connect string, by adding export TWO_TASK=dbname to root's environment (example below).
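With the third option, for example:

export TWO_TASK=dbname
sqlplus usr/pwd

...which then behaves as if you had typed sqlplus usr/pwd@dbname.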
You said you don't have this problem on another server, and that the file permissions are the same; in which case root might be in the dba group on that box. But I think the third option seems the simplest and safest. There is a fourth option I suppose, to install a separate instant client, but you'd have to set TWO_TASK anyway and go over SQL*Net, and you've already ruled that out.
I won't dwell on whether it's a good idea to run sqlplus (or indeed the application that needs it) as root, but will just mention that you could potentially have a script or function called sqlplus that switches to a less privileged account via su to run the real executable, and that might be transparent to the application. Unless you switch to the oracle account, though, which is also not a good idea, you'd have the same permission issue and options.
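For illustration only, such a wrapper might look like this - a sketch assuming a hypothetical low-privileged account appuser, not a vetted solution:

#!/usr/bin/env python
# Hypothetical wrapper saved as e.g. /usr/local/bin/sqlplus, ahead of
# $ORACLE_HOME/bin on root's PATH. It re-executes the real sqlplus as a
# less-privileged account via su, forwarding any arguments.
# (Assumes appuser's own PATH resolves to the real $ORACLE_HOME/bin/sqlplus.)
import os
import sys

args = ' '.join(sys.argv[1:])
os.execvp('su', ['su', '-', 'appuser', '-c', ('sqlplus ' + args).strip()])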
I am a newbie to LDAP, and I am facing an issue while accessing an OpenLDAP server with JXplorer. I am running OpenLDAP on my Windows 7 laptop and have configured it with the BDB database. When I try accessing the LDAP server from JXplorer, I get an error that says:
unable to list dc=maxcrc,dc=com.
unable to perform read entry operation.
In the slapd.conf file, the following entries are present:
database bdb
suffix "dc=maxcrc,dc=com"
rootdn "cn=Manager,dc=maxcrc,dc=com"
Also, when I try adding entries to LDAP, it fails saying:
"Unable to perform modify operation".
I went through the OpenLDAP README PDF file, but it was of no help.
Any suggestions?
Quick help would be greatly appreciated :) Thanks in advance.
Below are the error details I saw:
javax.naming.NameNotFoundException: [LDAP: error code 32 - No Such Object]; remaining name 'dc=maxcrc,dc=com'.
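For context, LDAP error code 32 (noSuchObject) means the server has no entry named dc=maxcrc,dc=com: the suffix is declared in slapd.conf, but the base entry itself may never have been added. A minimal LDIF to create it would look something like this (a sketch based on the suffix above; the organization value is a placeholder):

dn: dc=maxcrc,dc=com
objectClass: dcObject
objectClass: organization
dc: maxcrc
o: maxcrc

...loaded with something like ldapadd -x -D "cn=Manager,dc=maxcrc,dc=com" -W -f base.ldif.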