Use Socket.IO and Redis on multiple servers - nginx

I'm trying to use Socket.IO across multiple servers. Over the past week I've tried different solutions on different platforms, and none of them work.
For my project I'm using nginx load balancing with ip_hash to keep each client on the same server, and the socket.io-redis adapter to broadcast to all sockets.
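For reference, the nginx side looks roughly like this (a simplified sketch; backend addresses and ports are placeholders):

upstream socketio_nodes {
    ip_hash;                        # keep each client on the same backend
    server 127.0.0.1:3001;          # placeholder backend addresses
    server 127.0.0.1:3002;
}

server {
    listen 80;
    location / {
        proxy_pass http://socketio_nodes;
        proxy_http_version 1.1;                     # needed for WebSocket upgrade
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}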
I also tried these projects, to see whether they work and to understand how they work, but even these supposedly ready-to-use projects don't work:
https://github.com/h4t0n/socket.io-redis-appsample
https://github.com/evilstudios/chat-example-cluster
I tried Redis on both Windows and Linux, but it made no difference.
I've read in several questions that you need to use "transports: ['websocket']"; some people say it only needs to be set on the clients, others say on both clients and servers. I tried both solutions and neither works.
I'm not getting any errors or warnings; the events are simply not delivered to the sockets on the other servers.
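For context, here's a simplified sketch of the server-side wiring I'm describing (placeholder host/ports, socket.io 1.x-era API):

var io = require('socket.io')(3001);   // placeholder port; one instance per server
var redis = require('socket.io-redis');

// Attach the Redis adapter so broadcasts are relayed to sockets on every node.
io.adapter(redis({ host: 'localhost', port: 6379 }));

io.on('connection', function (socket) {
  socket.on('chat message', function (msg) {
    io.emit('chat message', msg);   // should reach clients connected to any server
  });
});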

It turned out to be a version incompatibility or something like that.
I updated socket.io and it's working.
I started developing my app around 6 months ago; at that time the latest version was 1.4.8, so that's what I was using. I've now updated it to 1.7.2 and it's working.
As for socket.io-redis, I installed it just this week, so I was already using the latest version, 2.0.1.
It looks like something important changed between 1.4.8 and 1.7.2.
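For anyone hitting the same thing, updating both packages is just (versions as above):

npm install socket.io@1.7.2 --save
npm install socket.io-redis@2.0.1 --save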

Related

ACORE API, assistance with errors and deployment

I'm having trouble setting up the ACORE APIs and getting them to work on a website.
Background:
AzerothCore 3.3.5 running on a standalone Debian server; this holds the database and core files and runs both the world and auth servers. Basically the standard setup shown in the how-to wiki.
I also have a standalone web server on the same subnet; it's a separate Linux server running the usual web server stack, with a WordPress installation and the AzerothCore plugin for user signup, etc.
I'm trying to add the player map (https://github.com/azerothcore/playermap) and the ACORE-API set of functions (server status, arena stats, BG queue and WoW statistics) (https://github.com/azerothcore/acore-api).
Problem:
I understand the acore-api must be run in a container (Docker or similar) on the server, which I have done; it binds to port 3000. I can then go to local-IP:3000, which brings up this error (all DBs etc. are connecting and SOAP is working):
error 404 when navigating to IP:3000
I do get a few errors when running npm install, seen here; I'm not sure whether they're causing any issues:
screenshot of NPM errors on install
Beyond that, when I put, say, 'serverstatus' on the web server (the separate server) and configure the config.ts file, I can't get anything to display.
I'm not sure what I'm doing wrong, but it's the same scenario for each of the acore-api functions.
How are these meant to be installed and function? I feel I'm missing a vital step.
Likewise, with PLAYERMAP I have edited comm_conf.php and set the realmd_id, but when the page loads I do get the map; the uptime is missing, though, and no players are shown.
Could someone assist if possible?
Seems like an issue with the Node.js version. Update your Node.js to the latest LTS version, 16.13.0 (https://nodejs.org).
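If you manage Node through nvm (an assumption about your setup), that update could look like:

nvm install 16.13.0
nvm use 16.13.0
node --version    # should now print v16.13.0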

What to use instead of Azure Web Apps to allow installation of Google Chrome in the app environment?

I've just created a feature for our application which generates a PowerPoint report from the data a given user has in our system.
In short, the server spawns an instance of Google Chrome using Selenium's ChromeDriver and scrapes the charts out of our application running in Chrome. It was done this way to ensure the charts in the report look exactly the same as they appear in clients' browsers.
We use Azure Web Apps to host our development and production environments, and while my reporting feature works fine locally, it doesn't work once deployed to any other environment, because it depends on Chrome being installed, and I can't get Chrome installed in the Azure Web App sandboxed environment.
(you can see this other question of mine for a bit of a reference to where things are going wrong: PowerShell StartProcess: invalid handle )
So:
What I essentially want to know is: if an Azure Web App environment isn't going to allow me to install Google Chrome, where should I look next?
It looks like using Service Fabric may allow me to install what I need (https://learn.microsoft.com/en-us/azure/app-service/choose-web-site-cloud-service-vm), but it seems like a big change to make just to facilitate this one small part of the feature.
Another option is to re-architect the feature so it doesn't depend on the server spawning an instance of Google Chrome, but I'd prefer to avoid that if there's a straightforward way to get what I have working.
Ideally there'd just be a way to get Google Chrome installed in the given environment, but I've spent a good 10 hours trying to make that happen, and it's not looking promising.
There are a couple of solutions that would work, depending on your code and framework dependencies.
IMO the simplest way would be to build your code into a Docker container (one that runs the Selenium ChromeDriver) and deploy it either through the container features of Web Apps, or run it on demand through ACI (Azure Container Instances) and have it create the report and drop it in Azure Storage. In a container you have a lot more options, and a great number of ways to run it: spinning up an ACI instance on demand to do the job can be done in multiple ways (e.g. from code, through Logic Apps, or via PowerShell/Azure Automation).
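As a sketch, an on-demand run through the Azure CLI could look like this (resource group, name and image are placeholders):

az container create \
    --resource-group my-rg \
    --name report-job \
    --image myregistry.azurecr.io/report-generator:latest \
    --restart-policy Never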
Here are some links on running containers in your App Service:
https://learn.microsoft.com/en-us/azure/app-service/containers/
https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-custom-docker-image
You could start off by building and adding your code from this image: https://github.com/SeleniumHQ/docker-selenium
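For instance, a hypothetical Dockerfile starting from one of those images (paths and the startup command are placeholders for your own app):

FROM selenium/standalone-chrome

# Add your report-generation code on top of the Chrome + ChromeDriver base image.
COPY ./report-generator /app
WORKDIR /app
CMD ["./run-report.sh"]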
There are other alternatives, of course; you could have a VM that you install whatever you want on and run on demand, but that adds more management overhead and other implications to think about.
Many options, but in the regular Web App sandbox you're limited.
I have run into this problem myself, with chromedriver.exe needing a real Chrome. As I cannot install Chrome in Azure App Service, I am trying a portable version of Chrome. When creating the Chrome WebDriver, I tell it where to find the Chrome binary:
// Configure Chrome to run headless and point Selenium at the portable binary.
var options = new ChromeOptions();
options.AddArguments("headless"); // plus any other options you need
options.BinaryLocation = "YOUR CHROME BINARY PATH HERE"; // path to the portable chrome.exe
// The first argument is the directory containing chromedriver.exe.
var driver = new ChromeDriver("YOUR CHROME DRIVER PATH HERE", options);
You should be able to just copy the Chrome Portable files, since no installation is required. It is heavy, though, about 250 MB, because it includes the non-portable version inside.
Be sure to use a Chrome version compatible with your ChromeDriver, as pointed out in the documentation.

prevent meteor from downloading package updates

I have a meteor app on my laptop (where I do development work on the app), and I would like to be able to work with it and/or give demos of it in situations where I do not have an internet connection.
How can I prevent meteor from automatically trying to download updates to packages when I run it, so that I can run my app without issues in an "offline" situation?
Note that this is different from the client (browser) being "offline" in the sense that it can't connect to the server. In this situation, the client and server are on the same machine and the client does have access to the server. But the machine is disconnected from the internet, so that attempts to automatically download package updates will incur at least a delay, if not errors, and I'd like to prevent that.
Use the METEOR_OFFLINE_CATALOG environment variable for that. I would suggest not setting it permanently, though, but rather using it per run.
So if you run Meteor like this: METEOR_OFFLINE_CATALOG=1 meteor, it shouldn't update any packages or Meteor releases.
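That is (exporting it is just an alternative if you want it for the whole shell session):

# one-off run with the offline catalog
METEOR_OFFLINE_CATALOG=1 meteor

# or set it for the current shell session while you're disconnected
export METEOR_OFFLINE_CATALOG=1
meteor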

Saltstack: network.ip_addrs is not available

I've run into an issue with Saltstack version 2014.7.0, where I cannot get network information from Salt.
If I run:
salt-call network.ip_addrs
I get:
Function network.ip_addrs is not available
This only seems to happen on some of my hosts. It seems to affect almost all of the functions in salt.modules.network, but everything else works as expected.
I suspect there's something in my environment to blame. I am running Salt within a CentOS 7 Docker container. I followed these instructions to get systemd running under Docker, and it seems to be functioning just fine, so I don't think that's the issue, but I wouldn't be surprised if it's related. I'm using Docker as a development environment, but I will be using these formulas to orchestrate virtual machines in production.
Has anyone encountered the network module not being loaded properly? Is there something that needs to be available for that module to be accessible?
I have other mechanisms to get the IP address, but none that are as easy to work with in other salt formulas.
It turns out my problem was that I had my own custom module called "network", which was shadowing the upstream network module.
I'm pretty sure this was working at some point in the past, so I'm wondering whether a more recent version of Salt changed things so that same-named modules conflict at the module level instead of merging methods from different modules of the same name, but I suppose it's possible that it never worked.
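If you suspect the same kind of shadowing, this is roughly how to check and fix it (the _modules path below is the conventional location; adjust for your file_roots):

# see which functions are actually loaded under the 'network' name
salt-call sys.list_functions network

# if a custom _modules/network.py is shadowing the builtin, rename it and resync
mv /srv/salt/_modules/network.py /srv/salt/_modules/mynetwork.py
salt-call saltutil.sync_modules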

What Mongo GUI tools can I use to connect to my deployed Meteor app?

Tools like MongoLab's remote connection and RockMongo require a permanent URL, so the URLs generated by "meteor mongo --url", which are only valid for one minute, don't work for long.
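For reference, this is the command in question (the app name is a placeholder); the credentials it prints expire about a minute after they're issued:

meteor mongo --url myapp.meteor.com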
If you're on a Mac, I would recommend the Fotonauts build of MongoHub that you put up; the ordinary MongoHub is quite buggy. On Windows, use MongoVUE, which is perhaps the best one I've used of all.
