How to deploy an application on dotcloud (web deployment)

I want to deploy a web.py application on dotcloud.
I followed the tutorial at http://docs.dotcloud.com and did this:
$ dotcloud create myapp
$ dotcloud push myapp .
After waiting a long time, I got this message:
2011-06-28 18:59:12 [api] Waiting for the build. (It may take a few minutes)
........................................................................Cannot reach DotCloud service ("[Errno -2] Name or service not known").
Please check the connectivity and try again.
Any ideas?

Please check the connectivity and try again.
There isn't really much else to do. You don't get any errors, the build system isn't broken (works for me), dotcloud isn't down.

There is a problem with DNS resolution.
If you are behind a proxy server, you'll have to configure your system so that the dotcloud CLI makes its connections through your proxy.
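A minimal sketch of pointing the CLI at a proxy via the standard environment variables (the proxy host and port here are placeholders; substitute your own):

```shell
# Hypothetical proxy address -- replace with your actual proxy.
PROXY_URL="http://proxy.example.com:3128"
export http_proxy="$PROXY_URL"
export https_proxy="$PROXY_URL"
# With these set in the same shell, retry the push; the CLI should
# pick the proxy up from the environment:
#   dotcloud push myapp .
```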


Stripe CLI on DEBIAN server, how do I make it "listen" to new requests in the background and continue using the console?

I want to use the Stripe CLI and webhook events on my Debian (10.1) server. I've managed to get everything working, but my problem is that when I run:
stripe listen --forward-to localhost:8000/foo/webhooks/stripe/
I can't use the console anymore, because it's listening for incoming events, and I still need the console. The only option shown is ^C to quit, but I need the CLI listener to keep running at all times while I do other things at the same time.
On my local machine I can open multiple terminal sessions, run the listen command in one, and use another to keep interacting with the system. But I don't know how to do that on Debian yet. It would be great if the listen function could just run in the background so I could continue with what I need to do without stopping the listener. My next idea was to tunnel via SSH to the server, but I'm unsure whether that would solve my problem. Wouldn't that mean that my computer at home running that session would need to be running at all times? I'm lost here...
By the way, the server is a droplet on DigitalOcean, if that matters... which I don't think it does.
Please let me know if anything is unclear.
UPDATE/SOLVED:
I misunderstood my problem: the Stripe CLI is just for local testing. Once a Stripe integration is in production, Stripe's servers send requests to my server/endpoints.
If you are wondering about this or want to read more about how it works in production, start here: https://stripe.com/docs/webhooks/go-live
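That said, if you do need to keep `stripe listen` running on a dev box while using the console for other things, one common approach is to background it with nohup (a tmux or screen session works too). A sketch, using the forward URL from the question:

```shell
# Start the listener in the background, detached from the terminal,
# with its output going to a log file instead of the console.
nohup stripe listen --forward-to localhost:8000/foo/webhooks/stripe/ \
    > stripe-listen.log 2>&1 &
echo $! > stripe-listen.pid   # save the PID so it can be stopped later
# The console is free again; to stop the listener later:
#   kill "$(cat stripe-listen.pid)"
```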

puppet client reporting to wrong host in Foreman

This is my first post!
I have hundreds of nodes managed by Puppet/Foreman. Everything is fine.
I did something I already did without problem in the past:
Change the hostname of a server.
This time I changed 2 hostnames:
Initially I had 'gate02' and 'gate03'.
I moved gate02 to 'gate02old' (with a dummy IP, and switched the server off),
then I moved gate03 to gate02...
Now (the new) gate02 reports are updating the host called gate02old in foreman.
I cleaned the certs on the puppetserver. I removed the ssl dir on the (new) gate02 and ran the puppet agent. I did not find any reference to 'gate' in /var/lib/puppet. I changed the certname in puppet.conf, the hostname, and sysconfig/network-scripts/ifcfg-xxxx.
The puppet agent runs smoothly and sends its report to the puppetserver. But it updates the wrong host!
Does anyone have a clue how to fix this?
Thanks!
Foreman 2.0.3
Puppet 6
I do not accept that the sequence of events described led to the behavior described. If reports for the former gate03, now named gate02, are being logged on the server for name gate02old, then that is because that machine is presenting a cert to the server that identifies it as gate02old (and the server is accepting that cert). The sequence of events presented does not explain how that might be, but my first guess would be that it is actually (new) gate02old that is running and requesting catalogs from the server, not (new) gate02.
Fix it by
Ensuring that the machine you want running is in fact the one that is running, and that its hostname is in fact what you intend for it to be.
Shutting down the agent on (new) gate02. If it is running in daemon mode then shut down the daemon and disable it. If it is being scheduled by an external scheduler then stop and disable it there. Consider also using puppet agent --disable.
Deactivating the node on the server and cleaning out its data, including certs:
puppet node deactivate gate02
puppet node deactivate gate02old
puppet node deactivate gate03
You may want to wait a bit at this point, then ...
puppet node clean gate02
puppet node clean gate02old
puppet node clean gate03
Cleaning out the nodes' certs. For safety, I would do this on both nodes. Removing /etc/puppetlabs/puppet/ssl (on the nodes, not the server!) should work for that, or you could remove the puppet-agent package altogether, remove any files left behind, and then reinstall.
Updating the puppet configuration on the new gate02 as appropriate.
Re-enabling the agent on gate02, and starting it or running it in --test mode.
Signing the new CSR (unless you have autosigning enabled), which should have been issued for gate02 or whatever certname is explicitly specified in that node's puppet configuration.
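The server-side deactivate/clean sequence above can be sketched as follows, using the node names from the question. This is a dry run that only prints the commands; remove the "echo" to execute them for real on the puppetserver:

```shell
# Nodes involved in the rename, per the question.
nodes="gate02 gate02old gate03"
for node in $nodes; do
    echo puppet node deactivate "$node"
done
# ...wait a bit for PuppetDB to process the deactivations, then:
for node in $nodes; do
    echo puppet node clean "$node"
done
```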
Thanks for the answer, though it was not the right one.
I did get to the right point by changing the hostname of the old gate02old again, to another existing-for-testing name, starting the server, and getting it back into Foreman. Once that was done, removing (again!) the certs of the new gate02 put it right, and its reports now update the right entry in Foreman.
I still believe there is something (a db?) that was not updated correctly, so Foreman was sure that the host called 'gate02' was 'gate02old' in the GUI.
I am very sorry if you don't believe me.
Not to say rather disappointed.
Cheers.

The Littlest JupyterHub: the server is shutting down often

I have installed The Littlest JupyterHub following the tutorial, on a Google Cloud compute engine instance. However, when I work on it for more than 1 or 2 hours, the server shuts down and I have to reconnect and relaunch it. Is there an option I should set to avoid this? I don't see anything very clear in the logs. Thanks for your help.

Unit cinder-api.service could not be found - Openstack

I installed OpenStack devstack on Ubuntu 18.04.
When I log in to Horizon, it works.
But when I try to check the status of the services, they are not found.
When I execute service cinder-api status, it gives me "Unit cinder-api.service could not be found." (the same for all services). The installation completed, although it was interrupted by a network problem at one point (yet the Horizon dashboard opens). Could that be the cause?
What is wrong here? Because of this I am not able to create volumes (I guess).
Reason for the message:
The services of the projects are down, for example n-api, c-api, c-sch, and so on. To rectify it, manually start the services from the command prompt, or run unstack.sh, then clean.sh, then stack.sh, which will create a new OpenStack deployment.
Note: while using devstack, one important thing to take care of is to never shut down the machine. If it is shut down, all the services go down.
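The rebuild sequence above can be sketched as a dry run (DEVSTACK_DIR is an assumed location; adjust it to wherever you cloned devstack, and remove the "echo" to actually run the scripts — note this wipes the existing cloud):

```shell
# Assumed devstack checkout location -- override if yours differs.
DEVSTACK_DIR="${DEVSTACK_DIR:-$HOME/devstack}"
# unstack.sh stops all services, clean.sh removes leftover state,
# and stack.sh stands up a fresh deployment.
for script in unstack.sh clean.sh stack.sh; do
    echo "$DEVSTACK_DIR/$script"
done
```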

Is it bad to always run nginx-debug?

I'm reading the debugging section of the NGINX docs, and it says that to turn on debugging you have to compile or start nginx a certain way and then change a config option. I don't understand why this is a two-step process, and I'm inferring that it means: "you don't want to run nginx in debug mode for long, even if you're not logging debug messages, because it's bad".
Since the config option (error_log) already sets the logging level, couldn't I just always compile/run in debug mode and change the config when I want to see the debug level logs? What are the downsides to this? Will nginx run slower if I compile/start it in debug mode even if I'm not logging debug messages?
First off, to run nginx in debug mode you need to run the nginx-debug binary, not the normal nginx, as described in the nginx docs. If you don't do that, it won't matter if you set error_log to debug, as it won't work.
As for WHY it is a two-step process, I can't tell you exactly why that decision was made.
Debug mode spits out a lot of logs, fd info and much more, so yes, it can slow your system down, as it has to write all those logs. On a dev server that is fine; on a production server with hundreds or thousands of requests, you can see how the disk I/O generated by that logging can cause the server to slow down and other services to get stuck waiting on free disk I/O. Disk space can run out quickly, too.
Also, what would be the reason to always run in debug mode? Is there anything special you are looking for in those logs? I guess I'm trying to figure out why you would want it.
And it's maybe worth mentioning that if you do want to run debug in production, at least use the debug_connection directive and log only certain IPs.
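For reference, a minimal sketch of that setup (the client IP is a placeholder; debug_connection goes in the events block and only takes effect on a --with-debug build such as the nginx-debug binary):

```nginx
error_log /var/log/nginx/error.log;   # normal log level for everyone else

events {
    # Emit debug-level logging only for connections from this address.
    debug_connection 192.0.2.10;
}
```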
