I want a proxy in Apigee to call itself every 1 minute

I have a proxy in Apigee. I want that proxy to call itself every 1 minute.
Can anyone help me with a simple policy/JavaScript policy so that it calls itself every 1 minute?
Why does Google have no auto-call function?

It is not good practice to call an API proxy continuously, whether from the same proxy or from a different proxy in Apigee. Doing so may bring down the server if you are dealing with large amounts of data.
However, you can do this outside of Apigee by running a Jenkins job that calls your proxy every minute, or by creating a Unix or batch script and running it as a cron job.
Shell script: on Unix, create a test.sh file and add a curl command plus any other instructions you want. The article below has more details on curl.
https://www.thegeekstuff.com/2012/04/curl-examples/?utm_source=feedburner
Now schedule it with a cron job; the post below describes how to do it.
https://askubuntu.com/questions/852070/automatically-run-a-command-every-5-minutes/852074
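For illustration, a minimal sketch of that approach; the proxy URL and file paths below are placeholders, not something Apigee provides.

test.sh:

#!/bin/sh
# Call the proxy and log only the HTTP status code
curl -s -o /dev/null -w "%{http_code}\n" "https://your-org-test.apigee.net/your-proxy"

crontab entry (added with crontab -e) to run it every minute and append the output to a log:

* * * * * /path/to/test.sh >> /tmp/proxy-ping.log 2>&1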
Thanks

Related

Cron scheduling a HTTP request on GCP that takes ~1 hour to complete

I know that for GCP Cloud Scheduler the max timeout is around 20 minutes for an HTTP request source.
Is it somehow possible on GCP (perhaps using a different service) to invoke an HTTP endpoint that takes around 65 minutes to respond, every ~6 hours?
Agreeing with the comments, it would be better if you restructure your application so that it doesn’t have to rely on such a long timeout period. This is due to the drawbacks that John Hanley commented on. As for your actual question, you could combine multiple services. For example, Cloud Run has a maximum timeout of 60 minutes, which you can set up when you deploy your service.
Now, in order to run this service every 6 hours, you can make use of Cloud Workflows. Workflows is an automation tool which can be used to combine multiple GCP services in a single automated process. It can execute Cloud Run services, and you can in turn schedule this Cloud Workflow to run every 6 hours with Cloud Scheduler.
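For what it's worth, a rough sketch of wiring those pieces together with gcloud; the service names, project, region, service account, and workflow.yaml below are placeholders, not part of the original answer:

# Cloud Run service with the 60-minute request timeout
gcloud run deploy long-task --image gcr.io/PROJECT_ID/long-task --timeout 3600 --region us-central1

# Workflow (defined in workflow.yaml) that calls the Cloud Run service
gcloud workflows deploy long-task-workflow --source workflow.yaml --location us-central1

# Scheduler job that starts a workflow execution every 6 hours; Scheduler only
# triggers the execution, so its own request timeout is not an issue here
gcloud scheduler jobs create http long-task-trigger \
  --schedule "0 */6 * * *" \
  --http-method POST \
  --uri "https://workflowexecutions.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/workflows/long-task-workflow/executions" \
  --oauth-service-account-email scheduler@PROJECT_ID.iam.gserviceaccount.com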
In the end, what I did was create a micro VM instance on GCP and follow this guide to manually set up a cron job in Ubuntu:
https://www.geeksforgeeks.org/how-to-setup-cron-jobs-in-ubuntu/
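For reference, the cron entry on such a VM could look roughly like this (the URL and log path are placeholders; --max-time keeps curl waiting long enough for the ~65-minute response):

# crontab -e: call the endpoint every 6 hours, allow up to 70 minutes for the response
0 */6 * * * curl -s --max-time 4200 "https://example.com/long-endpoint" >> $HOME/long-endpoint.log 2>&1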

HTTP response times GUI

I'm looking for an application, available on CentOS, that lets me periodically check connectivity and response times between that server and a specific port on a remote server (in this case serving a SOAP API).
Ideally something that lets me send periodic API calls, or, failing that, just telnets to that remote port, and shows the results in a graph.
Does anyone know of an application that allows this, without me having to create a script that writes results to a log file, which is harder to read from a time perspective?
After digging and testing a bit more, I ended up using netdata:
https://www.netdata.cloud/
Awesome tool, extremely simple to use and install.

Is there a very simple graphite tutorial available somewhere?

Given that I have Graphite installed within Docker, does anyone know of a very simple Graphite tutorial somewhere that shows how to feed in data and then plot the data on a graph in the Graphite webapp? I mean the very basic things, not the endless configuration and pages and pages of setting various components up.
I know there is the actual Graphite documentation, but it is setup after setup after setup of the various components. It is enough to drive anyone away from using Graphite.
Given that Graphite is running within Docker, as a start I just need to know the steps to feed in data as plain text, display the data in the Graphite web app, and query the data back.
I assume that you have containerized and configured all the Graphite components.
First, make sure you have published the plaintext and pickle ports (default: 2003-2004) if you plan to feed Graphite from the local host or an external host.
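For example, a minimal run command, assuming the graphiteapp/graphite-statsd image (the host ports here are arbitrary choices):

# 8080 -> graphite-web UI, 2003 -> carbon plaintext receiver, 2004 -> carbon pickle receiver
docker run -d --name graphite \
  -p 8080:80 \
  -p 2003-2004:2003-2004 \
  graphiteapp/graphite-statsd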
After that, according to the documentation, you can use a simple netcat command to send metrics to carbon over TCP/UDP, using the format <metric path> <metric value> <metric timestamp>:
SERVER=localhost   # carbon host (adjust to where the container is published)
PORT=2003          # carbon plaintext receiver port
while true; do
  echo "local.random.diceroll $RANDOM `date +%s`" | nc -q 1 ${SERVER} ${PORT}
  sleep 10         # one point per interval is enough; carbon keeps one value per retention step
done
You should then see the metric path local/random/diceroll in the graphite-web GUI, with a graph of random integers.
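To query the data back outside the GUI, you can hit the render API directly; a minimal example, assuming graphite-web is published on port 8080 of the local host:

curl "http://localhost:8080/render?target=local.random.diceroll&from=-10min&format=json"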
Ref: https://graphite.readthedocs.io/en/latest/feeding-carbon.html

Openstack RDO ceilometer alarm action can execute script?

Is there a possibility, using the --alarm-action 'log://' option, to run a script or create a VM instance on OpenStack? For example:
Can I do something like this:
$ ceilometer alarm-threshold-create --name cpu_high \
    --description 'instance running hot' --meter-name cpu_util \
    --threshold 70.0 --comparison-operator gt --statistic avg \
    --period 600 --evaluation-periods 3 \
    --alarm-action './script.sh' --query resource_id=INSTANCE_ID
where --alarm-action './script.sh' launches script.sh
It's not possible for a Ceilometer action to run a script.
The OpenStack APIs have generally been designed under the assumption that the person running the client commands (a) is running them remotely, rather than on the servers themselves, and (b) is not an administrator of the system. In particular (b) means that permitting you to run arbitrary scripts on a server would be a terrible security problem, because you would first need a way to get a script installed on the server and then there would need to be a way to prevent you from trying to run, say, /sbin/reboot.
For this reason, the ceilometer action needs to be a URL. You could set up a simple web server that receives the signal from ceilometer and executes a script in response.
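As a crude illustration only (assuming GNU/traditional netcat and a local ./script.sh; a real deployment would use a proper HTTP handler), something like the loop below answers every incoming request on port 8080 with an empty 200 and then runs the script:

while true; do
  # reply 200 to whatever arrives on port 8080 and discard the request itself
  printf 'HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n' | nc -l -p 8080 -q 1 > /dev/null
  ./script.sh
done

The alarm action would then point at http://<host>:8080/ instead of a script path.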
If you deploy resources using Heat, you can set up autoscaling groups and have ceilometer alarms trigger an autoscaling action (creating new servers or removing servers, for example).

Best way to collect logs from a remote server

I need to run some commands on some remote Solaris/Linux servers and collect their output in a log file on my local server.
Currently, I'm using a simple Expect script residing on the local server to fire the commands on the target systems. I then redirect the output of the Expect script to a log file, like this:
/usr/local/bin/expect script.exp >> logfile.txt
However, this is proving to be very unreliable as the connection to the server fluctuates a lot, leading to incomplete logs and hung scripts.
Is there a better and more reliable way to go about this task?
I have implemented fedorqui's answer:
Created a (shell) script that runs the required commands on the target servers.
Deployed this script to all servers.
Executed this script via expect, from my local (central) server.
Finally collected logs individually from each server after successful completion, and processed them.
The solution has been working fine without a glitch till now.
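For the collection step, a sketch of what that can look like, assuming key-based SSH/scp is available (the original setup drove the logins with Expect instead); host names and paths are placeholders:

for host in server1 server2 server3; do
    scp "${host}:/var/tmp/healthcheck.log" "logs/${host}.log" \
        && echo "collected log from ${host}" \
        || echo "failed to collect log from ${host}" >&2
done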
