Our web service developer is getting a "DeliveryPoint=Unknown Failed: error on connection with 17.172.233.147:2195: Use of closed network connection. Will retry in 20s" error on my push notification service using Uniqush, and I cannot pin down the cause. Is this a problem with the pem/p12 files, or can the server not reach the Apple gateway? I am not very experienced with push notifications. If the problem is the pem/p12 files, what are the common causes? He says the pem/p12 files are causing the error; I suspect it is the server where the push notification service is being run. I tested my pem/p12 files on Pushbots, a popular push notification service, and I can receive push notifications with it, so my strong guess is that the problem is on the push notification server side.
I also cannot use the tag "Uniqush" here on Stack Overflow; it doesn't seem popular enough. I hope I get help here.
I see this is a bit old, but I just had that very same problem.
First, check your certificate and key to be sure you have them both in .pem format, and remove the key password/encryption before using them with uniqush.
For detailed instructions, see this blog post and do steps 4 and 5 only: http://blog.boxedice.com/2010/06/05/how-to-renew-your-apple-push-notification-push-ssl-certificate/
(steps 1-3 are for renewing your certificate; step 6 is for merging the certificate and key, which isn't needed for uniqush)
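In practice those two steps boil down to a few openssl calls. A minimal sketch, assuming you exported cert.p12 and key.p12 from Keychain Access (the file names are placeholders):

# extract the certificate from the .p12 export
openssl pkcs12 -clcerts -nokeys -in cert.p12 -out cert.pem
# extract the private key from its .p12 export
openssl pkcs12 -nocerts -in key.p12 -out key.pem
# strip the passphrase so uniqush can read the key unattended
openssl rsa -in key.pem -out key-unencrypted.pem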
If you've already done this, maybe the problem isn't the certificates but the setup of uniqush.
If this is happening only in the ad-hoc/production environment, after checking that you are using the correct certificates (ad-hoc/production uses different APNS certs than development), be sure not to include -sandbox=true when adding the PSP (Push Service Provider) to uniqush.
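For reference, adding the PSP goes through uniqush's HTTP API, roughly like this (the port 9898, service name, and paths are assumptions; check the uniqush docs for the exact parameters):

curl http://127.0.0.1:9898/addpsp \
  -d service=myservice \
  -d pushservicetype=apns \
  -d cert=/path/to/cert.pem \
  -d key=/path/to/key-unencrypted.pem \
  -d sandbox=false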
If the problem persists, maybe yours is the one in this issue https://github.com/uniqush/uniqush-push/issues/47 (my case), so you should try the following:
Download the latest version of uniqush (1.5.2a5 right now) from http://uniqush.org/downloads/uniqush-push_1.5.2a5_x86_64.tar.gz
Extract the downloaded file with tar zxvf <downloaded_file>
Kill the uniqush-push process, either with kill -9 <PID> or killall uniqush-push (you will probably need sudo for both)
Copy/move uniqush-push from the extracted folder to /usr/bin/uniqush-push (assuming you have it installed there in the first place)
Run uniqush again with uniqush-push &, or with a script if you have one
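Put together, the upgrade looks roughly like this (the extracted folder name is an assumption; adjust it to whatever the tarball actually unpacks to):

wget http://uniqush.org/downloads/uniqush-push_1.5.2a5_x86_64.tar.gz
tar zxvf uniqush-push_1.5.2a5_x86_64.tar.gz
sudo killall uniqush-push
sudo cp uniqush-push_1.5.2a5_x86_64/uniqush-push /usr/bin/uniqush-push
uniqush-push &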
Maybe you'll have to re-register the devices that weren't receiving push notifications.
This worked for me, so I hope it helps you.
We have a multitrack web conference implementation using AMS version 2.4.1. It works great for our use case, except in one scenario: when there are N (< 3) users and they turn on their cameras simultaneously, a few remote users are not rendered because we don't receive the video tracks for those users in newStreamAvailable; we only receive the audio track for those users. We are able to reproduce this quite frequently.
As a backup, I am trying to poll AMS using getTrackList with the main track id to get all available streams, but I am not getting any trackList message back:
var jsCmd = {
    command  : "getTrackList",
    streamId : streamId,   // this is the roomId or main track id
    token    : token
};
Any insight would be helpful.
Thanks,
We were able to resolve the issue, posting here to help anyone who might be facing a similar issue.
With push notifications from the server, we might encounter issues when for some reason the push operation doesn't succeed. In that case, it's better to have a backup plan: pull from the server and sync.
Ant Media Server suggests polling the server periodically for the room info. The server will respond with the active streams, and the application should synchronize.
For reference, please see the following link: https://resources.antmedia.io/docs/webrtc-websocket-messaging-reference
Sometimes when we open a folder, Alfresco shows a spinning wheel and never opens the folder. The log has the exception below.
2016-03-08 11:45:40,652 INFO [webscripts.connector.RemoteClient] [http-bio-8080-exec-494] Exception calling (GET) http://localhost:8080/alfresco/s/slingshot/doclib/treenode/site/test/documentLibrary/Books/science?children=true&max=-1&alf_ticket=TICKET_400a73c20348346eed011695af270f837f27a654
2016-03-08 11:45:40,652 INFO [webscripts.connector.RemoteClient] [http-bio-8080-exec-494] Error status 500 null
ClientAbortException: java.net.SocketException: Connection reset
at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:413)
If I curl the above URL or open it directly in a web browser, I get the JSON response successfully.
I am using only Alfresco Share and no other client. localhost:8080 works perfectly fine in most cases except this one.
Can anyone please tell me what the issue is and why the connection is closed or the ClientAbortException is occurring?
Mostly this is a timeout issue, and you'll need active monitoring on your Alfresco & Share environment to see how Alfresco is running.
An easy check is to install some Java monitoring or use JMeter to load test the system and see how it responds under different load.
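A quick, low-tech check is to time the backend call that Share makes (this just reuses the URL from the log above, with a placeholder ticket):

time curl -s -o /dev/null -w "%{http_code} %{time_total}s\n" \
  "http://localhost:8080/alfresco/s/slingshot/doclib/treenode/site/test/documentLibrary/Books/science?children=true&max=-1&alf_ticket=<your_ticket>"

If that call regularly takes longer than Share's connector timeout, that points to the same performance problem.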
Mostly the outcome is more CPU/RAM for Alfresco :).
As Tahir Malik mentioned above, the issue is related to performance.
The ClientAbort error itself occurs when the client (in this case, Share) times out or the user cancels a download. The message in the log is of type INFO. More details here: https://issues.alfresco.com/jira/browse/ALF-20349
If you are on SSO and using Alfresco Enterprise 5.2.3 or 5.2.4, there is a chance that you may hit a similar bug, which is discussed in the Alfresco Forum. However, this particular bug would not show the ClientAbortException.
Is there anything out there to monitor SaltStack installations besides halite? I have it installed, but it's not really what we are looking for.
It would be nice if we could have a web GUI or even a daily email that showed the status of all the minions. I'm pretty handy with scripting, but I don't know what to script.
Anybody have any ideas?
In case by monitoring you mean operating salt, you can try one of the following:
SaltStack Enterprise GUI
Foreman
SaltPad
Molten
Halite (DEPRECATED by SaltStack)
These GUIs will let you do more than just know whether or not minions are alive. They will let you operate on them in the same manner you could with the salt client.
And in case by monitoring you mean just whether the salt master and salt minions are up and running, you can use a general-purpose monitoring solution like:
Icinga
Naemon
Nagios
Shinken
Sensu
In fact, these tools can monitor different services on the hosts they know about. A host can be any machine that has an IP address, and a service can be any resource that can be queried via the underlying OS. Examples of hosts are a server, a router, a printer...; examples of services are memory, disk, a process, ...
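For example, with the standard Nagios/Icinga plugins you could point a service check at the minion process itself (the plugin path and thresholds vary by distro, so treat this as a sketch):

# critical if fewer than one salt-minion process is running
/usr/lib/nagios/plugins/check_procs -C salt-minion -c 1:

Here -C matches on the process name and -c 1: defines the critical range.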
Not an absolute answer, but we're developing saltpad, which is a replacement for and an improvement on halite. One of its features is to display the status of all your minions. You can give it a try: Saltpad Project page on Github
You might look into consul. While it isn't specifically for SaltStack, I use it to monitor that salt-master and salt-minion are running on the hosts they should be.
Another simple test would be to run something like:
salt --output=json '*' test.ping
and compare between different runs. It's not amazing monitoring, but at least it shows your minions are up and communicating with your master.
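A crude way to automate that comparison (file names are placeholders; sorted one-line-per-minion text output diffs more cleanly than the JSON form):

salt --output=txt '*' test.ping | sort > /tmp/ping_now.txt
diff /tmp/ping_last.txt /tmp/ping_now.txt
mv /tmp/ping_now.txt /tmp/ping_last.txt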
Another option might be to use the salt.runners.manage functions, which comes with a status function.
In order to print the status of all known salt minions you can run this on your salt master:
salt-run manage.status
salt-run manage.status tgt="webservers" expr_form="nodegroup"
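For completeness, the same runner module also exposes up and down functions if you only want one side of that list (not part of the original answer, just an illustration):

salt-run manage.up     # list the minions that responded
salt-run manage.down   # list the minions that did not respond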
I had to write my own. To my knowledge, there is nothing out there which will do this, and halite didn't work for what I needed.
If you know Python, it's fairly easy to write an application to monitor salt. For example, my app had a thread which refreshed the list of hosts from the salt keys from time to time, and a few threads that ran various commands against that list to verify they were up. The monitor threads updated a dictionary with a timestamp and success/fail for each host after they ran. It had a hacked-together HTML display, color-coded to reflect the status of each node. Took me about half a day to write it.
If you don't want to use Python, you could, painfully, do something similar with this inefficient, quick, untested hack using command line tools in bash:
minion_list=$(salt-key --out=txt | grep '^minions_pre:.*' | tr ',' ' ')
for minion in ${minion_list}; do
    salt "${minion}" test.ping
    if [ $? -ne 0 ]; then
        echo "${minion} is down."
    fi
done
It would be easy enough to modify it to write a file or send an alert.
halite was deprecated in favour of the paid UI version (sad, but true), but SaltStack itself still does the job. I'd guess your best monitoring will be the one you write yourself. Happily, there's the salt-api project (which I believe was part of halite, not sure about this); I'd recommend using it with the Tornado backend, as it's better than the CherryPy one.
So if you want a nice interface, you might want to work with the API once you set it up. When setting up Tornado, make sure you're OK with authentication (I had some trouble there). Here's how you can check it:
Using Postman/Curl/whatever:
check if api is alive:
- no post data (just see if api is alive)
- get request http://masterip:8000/
login (you'll need the token returned from here for most operations):
- post to http://masterip:8000/login
- (x-www-form-urlencoded data in postman), raw:
username:yourUsername
password:yourPassword
eauth:pam
I'm using PAM, so I have a user with yourUsername and yourPassword added on my master server (as a regular user; that's how PAM works)
get minions: http://masterip:8000/minions (you'll need to send the token from the login operation),
get all jobs: http://masterip:8000/jobs (you'll need to send the token from the login operation).
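Roughly what those checks look like with curl (assuming the default port 8000, PAM eauth, and that jq is installed for pulling the token out of the login response):

# is the API alive?
curl -sS http://masterip:8000/

# log in and keep the token
TOKEN=$(curl -sS http://masterip:8000/login \
  -H 'Accept: application/json' \
  -d username=yourUsername -d password=yourPassword -d eauth=pam \
  | jq -r '.return[0].token')

# list minions and jobs using the token
curl -sS http://masterip:8000/minions -H "X-Auth-Token: ${TOKEN}" -H 'Accept: application/json'
curl -sS http://masterip:8000/jobs    -H "X-Auth-Token: ${TOKEN}" -H 'Accept: application/json'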
So basically, if you want to do anything with SaltStack monitoring, just play with that salt-api and get what you want. SaltStack has output formatters, so you can get all the data even as JSON (if your frontend is JavaScript-like); it lets you run commands or whatever you want, and the monitoring is left to you (unless you switch from the community to the pro version), or unless you want to use the mentioned saltpad (which, sorry guys, was last updated a year ago according to the repo).
Btw, you might need to change that 8000 port to something else depending on your version of SaltStack/Tornado/config.
Basically, if you want an output where you can check the status of all the minions, you can run commands like:
salt '*' test.ping
salt --output=json '*' test.ping   # to get the output in JSON format
salt-run manage.up                 # returns the minions that are up
Or, if you want to visualize the same with a dashboard, you can look at some of the available options like Foreman, SaltPad, etc.
The server has only 64MB of memory. I'm trying to push a huge git repository to it. Initially the target directory contains an empty bare repository. The push fails:
$ git push server:/tmp/repo master
Counting objects: 3064514, done.
Compressing objects: 100% (470245/470245), done.
fatal: Out of memory, calloc failed
error: pack-objects died of signal 13
error: failed to push some refs to 'server:/tmp/repo'
$ ssh server cat /tmp/repo.git/config
[pack]
threads = 1
deltaCacheSize = 8m
windowMemory = 32m
[core]
repositoryformatversion = 0
filemode = true
bare = true
I get the same error message after changing git config pack.windowMemory 16m on the server.
The same push succeeds to localhost:
$ git push 127.0.0.1:/tmp/repo master
Password:
Counting objects: 3064514, done.
Compressing objects: 100% (470245/470245), done.
Writing objects: 100% (3064514/3064514), 703.02 MiB | 10.84 MiB/s, done.
Total 3064514 (delta 2569775), reused 3059081 (delta 2565342)
To 127.0.0.1:/tmp/repo
* [new branch] master -> master
Is there a remote git config setting which can make the push succeed? Or do I have to repack the repo locally before pushing (with what settings)?
Please note that using a different server with more memory is not an option. Adding memory to the existing server is an option, up to 96MB. It's OK for me to use more disk space than usual on the server if the memory limit is met.
Similar question without a working solution: https://serverfault.com/questions/372899/git-fails-to-push-with-error-out-of-memory
Repacking the repository locally didn't help, git push prints the same error. Repack settings in the local repo:
git config core.packedgitlimit 32m
git config core.packedgitwindowsize 32m
git config pack.threads 1
git config pack.deltacachesize 8m
git config pack.windowmemory 32m
git config pack.packsizelimit 500m
My idea is that the reason why it fails is that the total number of objects is too large: even the SHA-1 hashes won't fit (20 * 3064514 bytes is almost 64MB).
Possible other causes
As @torek pointed out in his comment, this may not be an indication of the server running out of memory, but an indication that something is going wrong locally. Perhaps something changed between when you were pushing to the server and to localhost that freed up memory on your local machine?
It's also plausible that git is figuring out that you're pushing to localhost, and bypassing the "Git aware" transport mechanism and/or using hardlinks, which might reduce the memory needed. I don't see any indication in the docs that it WOULD do this, and I'm not sure off the top of my head how you could test this, or force it not to do that, but it's a possibility.
Another possible issue is that the host.xz:path/to/repo.git/ url syntax is only recognized if there are no slashes before the first colon, so depending on what server is, that could be causing problems.
If none of these are the case, and the problem is in fact that it's running out of memory on the server, you might have a few options here, depending on the circumstances. I don't know if any of these will work, but they're worth a try.
Solution 1: don't push all the commits at once
I'm assuming you've got many commits in the commit history of master. Try pushing them in stages. E.g.
git push server:/tmp/repo master~500:refs/heads/master
git push server:/tmp/repo master~400:refs/heads/master
git push server:/tmp/repo master~300:refs/heads/master
git push server:/tmp/repo master~200:refs/heads/master
git push server:/tmp/repo master~100:refs/heads/master
git push server:/tmp/repo master
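If there are many more commits than that, a small loop saves some typing (just a sketch; the step size is arbitrary):

for n in 500 400 300 200 100 0; do
    git push server:/tmp/repo "master~${n}:refs/heads/master" || break
done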
Solution 2: Push individual objects one at a time
This is going to be incredibly tedious and DEFINITELY needs to be automated/scripted on your local machine. However, you don't actually need to push whole commits all at once.
Instead, you can push individual objects one at a time as long as you push them to a tag ref instead of a branch ref. E.g. if we were working with https://github.com/llvm/llvm-project and wanted to push the tree object 0082ee0b3ad78ff55b2a3a65ef5bfdb8cd9713a1 from it (this is the tree object pointed to by commit faf5e0ec737a676088649d7c13cb50f3f91a703a), we could do git push server:/tmp/repo 0082ee0b3ad78ff55b2a3a65ef5bfdb8cd9713a1:refs/tags/test. Using this we can push individual objects one at a time, starting with blobs, then the tree objects, then finally commit objects. We'd end up with a TON of tags to clean up later, but I'll leave that to you to figure out.
For the rest of these solutions, I'm working under a couple assumptions:
Given the limitations you described, and the way you specified the url as server:/tmp/repo instead of something ending with .git, I'm assuming this remote repository isn't going to be managed with any service like github or gitlab, which should give you a little more room to use some unconventional techniques.
I'm also assuming you probably have the ability to log on to/run commands on the server.
If either of these are not the case, and the above didn't work, I'm out of ideas at the moment.
Solution 3: backwards push using fetch or clone
There's actually nothing special about a server; it's just another git repository that you can trade commits with. The only difference is that a server usually hosts what's called a bare repository: it doesn't typically keep a working tree of its own (in other words, it only keeps the contents of the .git folder).
So, try performing the push in reverse using fetch/clone from server:
Push to a third, intermediate server (let's call it server2). Ideally, one with a lot more performance, like a github hosted repo.
Log onto/ssh into server, and from there, clone the repo into /tmp/repo: git clone --bare git@github.com:path/to/your/repo.git.
I would be surprised if this solved anything on its own, but it's worth trying, and step 1 will still set us up for solutions 4 and 5. If by chance it does work, you can tidy up by removing server2 as a remote on server (git remote remove origin), then set up the remotes on your local machine to point towards server instead of server2.
Solution 4: backwards push, but without fetching all the commits at once
Like solution 3, push to an intermediate server, but this time, instead of using clone and fetching everything all at once, fetch the commits in stages:
Log onto/ssh into server, and from there, initialize /tmp/repo as a bare repo:
cd /tmp/repo
git init --bare
git remote add origin git@github.com:path/to/your/repo.git
Still on server, fetch commits one at a time:
git fetch origin 569d84fe99e63e830ea036598f7fa7a5f9899d7c
git fetch origin 9aaba9d9bb4fc3648a9417820858086b14b6b73e
git fetch origin faf5e0ec737a676088649d7c13cb50f3f91a703a
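If you need more than a handful of these, you can generate a list of evenly spaced commit IDs on your local machine and fetch them on server one at a time (a sketch; the spacing of 100 is arbitrary):

# run locally: print every 100th commit on master, oldest first
git rev-list --reverse master | awk 'NR % 100 == 0'

# then, on server, fetch each printed ID in order:
# git fetch origin <commit-id>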
Solution 5: backwards push, but using partial and/or shallow clones
Instead of fetching individual commits, we can use partial and/or shallow clones to restrict how much we fetch at once. There is a good write-up explaining what those are on the GitHub blog: https://github.blog/2020-12-21-get-up-to-speed-with-partial-clone-and-shallow-clone/. This time, we won't use a bare repository; we want to be able to check out commits to fill in the missing objects later. You can follow the instructions here to convert it to a bare repository when you're done. Alternatively, instead of using a regular (non-bare) repository, explicitly fetching the objects might also work, but I don't know for sure off the top of my head.
I think everything I've already written, combined with that write-up should give you all the pieces you need to figure out how to do this. I've already spent hours writing this up, it's late, this solution's kind of complicated, and it's an esoteric question that hasn't been touched in years. If somebody comes across this and needs a more complete answer for this, leave a comment and I'll fill it in, but this is as far as I'm willing to go right now for some potential internet points if nobody actually needs this answer XD.
I am getting the following exception when trying to run the MakeCall example code:
com.skype.NotAttachedException
at com.skype.Utils.convertToSkypeException(Utils.java:36)
at com.skype.Skype.setDebug(Skype.java:116)
at com.skype.sample.MakeCall.main(MakeCall.java:26)
Caused by: com.skype.connector.NotAttachedException
at com.skype.connector.Connector.assureAttached(Connector.java:580)
at com.skype.connector.Connector.addConnectorListener(Connector.java:604)
at com.skype.connector.Connector.addConnectorListener(Connector.java:591)
at com.skype.connector.Connector.setDebug(Connector.java:209)
at com.skype.Skype.setDebug(Skype.java:114)
... 1 more
Now, I have not provided any sort of API credentials, so I kind of expect it to fail. My question then, is how do I provide whatever credentials necessary to attach my connector? The documentation on Skype4Java seems pretty slim.
After not getting any tips here, I have cross-posted this question on the Skype community forum as well.
Had the same thing. Went to Skype -> Options -> Advanced -> Manage other programs' access to Skype, selected Java.exe from the list, and set the option to allow. Working perfectly now.
If you have the dbus dependency installed, then make sure to start Skype on the command line with the "--use-system-dbus" option.
https://developer.skype.com/Docs/ApiDoc/src#Linux
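For example, nothing more than the flag mentioned above:

skype --use-system-dbus &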
Once Skype is running and you then start the Java program, you'll be prompted to allow your Java program to access Skype.