I'm trying to publish my data to a ThingsBoard server, and I'm using these AT commands:
AT+QIACT=1
OK
AT+QMTOPEN=1,"demo.thingsboard.io",1883
OK
AT+QMTCONN=1,"demo.thingsboard.io","MY_ACCESS_TOKEN",""
OK
AT+QMTPUB=1,0,0,0,"v1/devices/me/telemetry"
>{"temperature":35.00,"humidity":80.00} // MY_POST_DATA (at this point the module hangs)
Every AT command responds with OK, but when I finally enter MY_POST_DATA the module gives no response and hangs on that command. I checked ThingsBoard and the telemetry is never posted.
Can anyone help me fix this problem so I can publish to the MQTT broker?
Step 1: Get hold of the official AT command documentation for the modem (a Quectel BG96, I assume?). It should document how the AT+QMTPUB command behaves and what it expects; everything else is just guessing. The manufacturer should provide this, and if they don't, you should demand it.
...
Step 873, when you have exhausted absolutely all possible ways of getting hold of the official AT command documentation for the modem: you can try my guess that the command behaves similarly to other commands that read arbitrary-length user data, most notably AT+CMGS (which sends SMS messages) and expects a Ctrl-Z (ASCII 26) as the end-of-data indicator.
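If you want to script that guess rather than type it into a terminal, here is a minimal sketch in Python using pyserial. The port name, baud rate, and the assumption that QMTPUB data is terminated with Ctrl-Z are mine, not taken from Quectel documentation.

import serial

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=10)  # port and baud rate are assumptions

def send(cmd):
    # Send one AT command and return whatever the module replies.
    ser.write(cmd.encode() + b"\r\n")
    return ser.read(256).decode(errors="replace")

print(send('AT+QMTPUB=1,0,0,0,"v1/devices/me/telemetry"'))   # wait for the '>' prompt
ser.write(b'{"temperature":35.00,"humidity":80.00}')          # the payload itself, no quotes around it
ser.write(bytes([26]))                                        # Ctrl-Z (ASCII 26) as end-of-data, per the guess above
print(ser.read(256).decode(errors="replace"))                 # hope for +QMTPUB: 1,0,0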
+QMTPUB: 1,0,0 simply means that the BG96 has successfully published and your broker (ThingsBoard) has also acknowledged publication of the message.
If you can't see data on the broker, please check whether the topic you are publishing to is correct.
It may be that you are publishing to a different topic (or to a different path).
Ask ThingsBoard for help regarding the proper topic.
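One quick way to rule out a token or topic problem is to publish the same payload from a PC, bypassing the modem entirely. Here is a minimal sketch using the paho-mqtt Python client; the host, token and topic are the ones from the question, the library choice is just mine:

import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.username_pw_set("MY_ACCESS_TOKEN")   # ThingsBoard expects the device access token as the MQTT username
client.connect("demo.thingsboard.io", 1883)
client.loop_start()
info = client.publish("v1/devices/me/telemetry",
                      json.dumps({"temperature": 35.0, "humidity": 80.0}), qos=1)
info.wait_for_publish()   # block until the message has actually gone out
client.loop_stop()
client.disconnect()

If the telemetry shows up this way, the token and topic are fine and the problem is in the AT command sequence.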
I am using spring-boot-1.4.0. In my project I am using Sentry for logging, and sometimes log events are not reflected in Sentry. While searching the web about this issue I saw something called "Raven-Sentry", but it was written in Python. Is there a Raven-Sentry equivalent available for Spring Boot? I am using the following Raven callback, but I am still unsure how to curl or create a REST endpoint that would tell me whether Sentry is up or down. Please let me know if you need any more details; I am ready to provide code samples if needed.
Your help would be appreciated.
As per Brett's comments I have updated my question by providing a Python Sentry connection test link:
Python-sentry-test
In the above link they run a test to find out whether the connection to Sentry is successful. Similarly, I want to check whether the connection to Sentry is successful from Spring Boot. I would also like to add Sentry's status to the health check, so that whenever my logging events are not reflected in Sentry I can immediately flip the health of Sentry to down.
I have read the example of scrapy-redis but still don't quite understand how to use it.
I have run the spider named dmoz and it works well. But when I start another spider named mycrawler_redis, it gets nothing.
Besides, I'm quite confused about how the request queue is set up. I didn't find any piece of code in the example project which illustrates the request queue setting.
And if the spiders on different machines want to share the same request queue, how can I get that done? It seems that I should first make the slave machines connect to the master machine's redis, but I'm not sure where to put the relevant code: in spider.py, or do I just type it on the command line?
I'm quite new to scrapy-redis and any help would be appreciated!
If the example spider is working and your custom one isn't, there must be something that you have done wrong. Update your question with the code, including all relevant parts, so we can see what went wrong.
Besides, I'm quite confused about how the request queue is set up. I didn't find any piece of code in the example project which illustrates the request queue setting.
As far as your spider is concerned, this is done by appropriate project settings, for example if you want FIFO:
# Enables scheduling storing requests queue in redis.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# Don't cleanup redis queues, allows to pause/resume crawls.
SCHEDULER_PERSIST = True
# Schedule requests using a queue (FIFO).
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderQueue'
As far as the implementation goes, queuing is done via RedisSpider, which your spider must inherit from. You can find the code for enqueuing requests here: https://github.com/darkrho/scrapy-redis/blob/a295b1854e3c3d1fddcd02ffd89ff30a6bea776f/scrapy_redis/scheduler.py#L73
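For reference, a spider that pulls its work from redis looks roughly like this; the class name, key name and parse logic here are illustrative, check the example project for the real ones:

from scrapy_redis.spiders import RedisSpider

class MyCrawlerRedis(RedisSpider):
    name = 'mycrawler_redis'
    redis_key = 'mycrawler:start_urls'   # the redis list this spider pops start URLs from

    def parse(self, response):
        # whatever extraction you need; the page title is just an example
        yield {'url': response.url, 'title': response.css('title::text').extract_first()}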
As for the connection, you don't need to manually connect to the redis machine, you just specify the host and port information in the settings:
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
And the connection is configured in connection.py: https://github.com/darkrho/scrapy-redis/blob/a295b1854e3c3d1fddcd02ffd89ff30a6bea776f/scrapy_redis/connection.py
Examples of usage can be found in several places: https://github.com/darkrho/scrapy-redis/blob/a295b1854e3c3d1fddcd02ffd89ff30a6bea776f/scrapy_redis/pipelines.py#L17
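To answer the multi-machine part of the question: every machine runs the same spider and points REDIS_HOST/REDIS_PORT at the same redis server (typically the master's), and you feed start URLs into the list named by the spider's redis_key. A small sketch using the redis Python client; the host and key are placeholders:

import redis

r = redis.Redis(host='master-redis-ip', port=6379)        # the same redis every crawling box points at
r.lpush('mycrawler:start_urls', 'http://www.dmoz.org/')    # must match the spider's redis_key

You can do the same thing from the command line with redis-cli lpush.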
Is there anything out there to monitor SaltStack installations besides Halite? I have it installed but it's not really what we are looking for.
It would be nice if we could have a web GUI, or even a daily email, that showed the status of all the minions. I'm pretty handy with scripting, but I don't know what to script.
Anybody have any ideas?
In case by monitoring you mean operating salt, you can try one of the following:
SaltStack Enterprise GUI
Foreman
SaltPad
Molten
Halite (DEPRECATED by SaltStack)
These GUIs will allow you to do more than just know whether minions are alive. They will allow you to operate on them in the same manner you could with the salt client.
And in case by monitoring you mean just checking whether the salt master and salt minions are up and running, you can use a general-purpose monitoring solution like:
Icinga
Naemon
Nagios
Shinken
Sensu
In fact, these tools can monitor different services on the hosts they know about. A host can be any machine that has an IP address, and a service can be any resource that can be queried via the underlying OS. Examples of hosts: a server, a router, a printer. Examples of services: memory, disk, a process.
Not an absolute answer, but we're developing SaltPad, which is a replacement for and improvement of Halite. One of its features is displaying the status of all your minions. You can give it a try: SaltPad project page on GitHub
You might look into Consul. While it isn't specifically for SaltStack, I use it to monitor that salt-master and salt-minion are running on the hosts they should be.
Another simple test would be to run something like:
salt --output=json '*' test.ping
and compare the output between different runs. It's not amazing monitoring, but it at least shows that your minions are up and communicating with your master.
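If you want to script that comparison instead of eyeballing it, here is a rough sketch that diffs the accepted keys against the minions that answered a ping (it assumes salt and salt-key are on the PATH of the box you run it on):

import json
import subprocess

# Everything the master has accepted keys for
keys = subprocess.check_output(['salt-key', '--out=json', '-l', 'accepted'])
accepted = json.loads(keys.decode()).get('minions', [])

# Everyone that actually answered a ping (--static gives a single JSON document)
ping = subprocess.run(['salt', '--output=json', '--static', '*', 'test.ping'],
                      stdout=subprocess.PIPE)
answered = json.loads(ping.stdout.decode() or '{}')

for minion in sorted(m for m in accepted if answered.get(m) is not True):
    print('%s did not respond' % minion)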
Another option might be to use the salt.runners.manage functions, which come with a status function.
In order to print the status of all known salt minions you can run this on your salt master:
salt-run manage.status
salt-run manage.status tgt="webservers" expr_form="nodegroup"
I had to write my own. To my knowledge, there is nothing out there which will do this, and halite didn't work for what I needed.
If you know Python, it's fairly easy to write an application to monitor Salt. For example, my app had a thread which refreshed the list of hosts from the salt keys from time to time, and a few threads that ran various commands against that list to verify they were up. The monitor threads updated a dictionary with a timestamp and success/fail for each host after they ran. It had a hacked-together HTML display, color-coded to reflect the status of each node. Took me about half a day to write it.
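I can't share that exact app, but a bare-bones version of the same idea (poll the accepted keys, ping each one, keep a timestamped status dictionary) looks something like this; the interval, timeout and output are all things you would adjust to taste:

import json
import subprocess
import time

status = {}   # minion id -> (last check time, True if it answered)

def accepted_minions():
    out = subprocess.check_output(['salt-key', '--out=json', '-l', 'accepted'])
    return json.loads(out.decode()).get('minions', [])

def responds(minion):
    result = subprocess.run(
        ['salt', '--output=json', '--static', '--timeout=15', minion, 'test.ping'],
        stdout=subprocess.PIPE)
    try:
        return json.loads(result.stdout.decode()).get(minion) is True
    except ValueError:
        return False

while True:
    for minion in accepted_minions():
        status[minion] = (time.time(), responds(minion))
    for minion, (when, ok) in sorted(status.items()):
        print('%s %s (checked %s)' % (minion, 'up' if ok else 'DOWN', time.ctime(when)))
    time.sleep(300)   # re-check every five minutes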
If you don't want to use Python, you could, painfully, do something similar to this inefficient, quick, untested hack using command line tools in bash.
minion_list=$(salt-key --out=txt | grep '^minions:' | cut -d: -f2 | tr -d "[]',") # You'
for minion in ${minion_list}; do
salt "${minion}" test.ping > /dev/null 2>&1
if [ $? -ne 0 ]; then
echo "${minion} is down."
fi
done
It would be easy enough to modify it to write to a file or send an alert.
Halite was deprecated in favour of the paid UI version; sad, but true. Still, SaltStack does the job. I'd guess your best monitoring will be the one you write yourself. Happily, there's the salt-api project (which I believe was part of Halite, though I'm not sure about this); I'd recommend using it with Tornado, as it's better than the CherryPy version.
So if you want a nice interface, you might want to work with the API once you set it up. When setting up Tornado, make sure you're OK with authentication (I had some trouble here); here's how you can check it (a scripted version of the same flow is sketched at the end of this answer):
Using Postman/Curl/whatever:
Check if the API is alive:
- no POST data (just see if the API is alive)
- GET request to http://masterip:8000/
Log in (you'll need the token returned from here to do most operations):
- POST to http://masterip:8000/login
- x-www-form-urlencoded data (raw, in Postman):
username:yourUsername
password:yourPassword
eauth:pam
I'm using pam, so I have a user with yourUsername and yourPassword added on my master server (as a regular user; that's how pam works).
Get minions: http://masterip:8000/minions (you'll need to send the token from the login operation).
Get all jobs: http://masterip:8000/jobs (you'll need to send the token from the login operation).
So basically, if you want to do anything with SaltStack monitoring, just play with that salt-api and get what you want. SaltStack has output formatters, so you can get all the data even as JSON (if your frontend is JavaScript-based). It lets you run commands or whatever you want, and the monitoring is left to you (unless you switch from the community to the pro version), or unless you want to use the aforementioned SaltPad (which, sorry guys, was last updated a year ago according to the repo).
By the way, you might need to change that 8000 port to something else depending on your version of SaltStack/Tornado and your config.
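For reference, here is the same login-then-query flow scripted with Python requests; the host, port and credentials are the placeholders from above, and the token is sent in the X-Auth-Token header:

import requests

base = 'http://masterip:8000'

# log in with the same form fields as above and pull the token out of the response
login = requests.post(base + '/login', data={
    'username': 'yourUsername',
    'password': 'yourPassword',
    'eauth': 'pam',
})
token = login.json()['return'][0]['token']

# the token goes in the X-Auth-Token header for the follow-up calls
headers = {'X-Auth-Token': token}
print(requests.get(base + '/minions', headers=headers).json())
print(requests.get(base + '/jobs', headers=headers).json())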
Basically, if you want output where you can check the status of all the minions, you can run commands like:
salt '*' test.ping
salt --output=json '*' test.ping #To get output in Json Format
salt-run manage.up # Returns the list of minions that are up
Or, if you want to visualize the same with a dashboard, you can look at some of the available options like Foreman, SaltPad, etc.
I am on an OPDK installation of Apigee Edge. I have a zombie API proxy, meaning I can't delete the API proxy in the UI (and usually not via MS API, either). I get the following error:
What is the best way to ensure Apigee Edge is cleared of this zombie API proxy so that I can redeploy this API proxy again?
To clean this up, you will need to execute some manual steps:
1) Check /o/{orgname}/e/{envname}/apiproxies with an MS API call ("curl http(s)://{mgmt-host}:{port}/v1/o/{orgname}/e/{envname}/apiproxies"). This will give you the actual response info that the UI is trying to parse.
2) Delete /o/{orgname}/e/{envname}/apiproxies/{apiproxy_name} with an MS API call ("curl -X DELETE http(s)://{mgmt-host}:{port}/v1/o/{orgname}/e/{envname}/apiproxies/{apiproxy_name}"), then re-check step 1 to see if it is cleaned up. (A scripted version of steps 1 and 2 is sketched at the end of this answer.)
3) if it is clean, try your deployment again. If it succeeds, you are good.
4) if it does not, then
5) go to zookeeper (/opt/apigee//share/zookeeper) and run the CLI (./zkCli.sh)
6) find /organizations/{orgname}/environments/{envname}/apiproxies/ and see if the {apiproxy_name} is there.
7) if so, execute "rmr /organizations/{orgname}/environments/{envname}/apiproxies/{apiproxy_name}" at the zkCli prompt
8) repeat your checks above, the proxy should be all clean
Note: There are a few circumstances that may require some additional steps, such as actually incorrect server configurations or conflicting config data.
Hope that helps.
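If you end up running steps 1 and 2 repeatedly, they are easy to script. Here is a sketch with Python requests; the host, port, credentials, org, env and proxy name are all placeholders you would replace with your own:

import requests

base = 'http://mgmt-host:8080/v1'                      # management server host/port (placeholder)
auth = ('sysadmin@example.com', 'secret')              # management API credentials (placeholder)
org, env, proxy = 'myorg', 'test', 'zombie-proxy'      # placeholders

# Step 1: what the environment thinks is deployed (the response the UI is trying to parse)
r = requests.get('{0}/o/{1}/e/{2}/apiproxies'.format(base, org, env), auth=auth)
print(r.status_code, r.text)

# Step 2: delete the stale entry, then re-run step 1 to confirm it is gone
r = requests.delete('{0}/o/{1}/e/{2}/apiproxies/{3}'.format(base, org, env, proxy), auth=auth)
print(r.status_code, r.text)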