Graphite Docker not publishing dashboard

Following this link I installed graphite with docker.
Now I should be able to find Graphite at localhost/dashboard,
but there's nothing there. What am I missing? It looked quite straightforward.

When running Docker on macOS you need to use docker-machine, which runs the Docker daemon in a VM rather than on localhost.
docker-machine points the client at that VM with export DOCKER_HOST="tcp://192.168.99.100:2376",
so to access the dashboard you need to browse to http://192.168.99.100/dashboard
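To avoid hard-coding the address, the dashboard URL can be derived from DOCKER_HOST itself. A small shell sketch; the DOCKER_HOST value is the one quoted above and will differ per machine:

```shell
# Build the Graphite dashboard URL from docker-machine's DOCKER_HOST.
DOCKER_HOST="tcp://192.168.99.100:2376"   # normally set by `docker-machine env`
HOST_IP=${DOCKER_HOST#tcp://}             # strip the scheme
HOST_IP=${HOST_IP%:*}                     # strip the daemon port
echo "http://${HOST_IP}/dashboard"        # open this URL in a browser
```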


ACORE API, assistance with errors and deployment

I'm having trouble setting up the ACORE APIs and getting them to work on a website.
Background:
AzerothCore 3.3.5 running on a standalone Debian server; this holds the database and core files and runs both the world and auth servers, basically the standard setup shown in the how-to wiki.
I also have a standalone web server on the same subnet, but it's a separate Linux server running the usual web server stack; it hosts a WordPress installation with the AzerothCore plugin for user signup etc.
I'm trying to add the player map (https://github.com/azerothcore/playermap) and the ACORE-API set of functions (server status, arena stats, BG queue and WoW statistics) (https://github.com/azerothcore/acore-api).
Problem:
I understand the acore-api must be run in a container (Docker or similar) on the server, which I have done; it binds to port 3000. When I browse to the local IP on port 3000, it brings up the error below (all DBs are connecting and SOAP is working).
error 404 when navigating to IP:3000
I do get a few errors when running npm install, seen here; I'm not sure whether they would cause any issues or not.
screenshot of NPM errors on install
Beyond that, when I put, say, 'serverstatus' on the web server (the separate server) and configure the config.ts file, I can't seem to get anything to display.
I'm not sure what I'm doing wrong, but it's the same scenario for all of the different acore-api functions.
How are these meant to be installed and function? I feel I'm missing a vital step.
Likewise, with PLAYERMAP I have edited comm_conf.php and set the realmd_id; when the page loads I do get the map, but the uptime is missing and no players are shown.
Could someone assist if possible?
Seems like an issue with the Node.js version. Update your Node.js to the latest LTS version, 16.13.0 (https://nodejs.org).
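As a quick check before reinstalling, you can compare the local Node.js major version against the recommended LTS line. A minimal shell sketch; the parsing assumes the usual vX.Y.Z output of node --version:

```shell
# Warn if the installed Node.js is older than the recommended LTS major (16).
REQUIRED_MAJOR=16
CURRENT=$(node --version 2>/dev/null || echo "v0.0.0")  # "v0.0.0" if node is absent
CURRENT_MAJOR=${CURRENT#v}            # drop the leading "v"
CURRENT_MAJOR=${CURRENT_MAJOR%%.*}    # keep only the major version
if [ "$CURRENT_MAJOR" -lt "$REQUIRED_MAJOR" ]; then
  echo "Node $CURRENT is too old; install 16.13.0 (e.g. 'nvm install 16.13.0')"
fi
```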

Curl error 7 when installing apps on Nextcloud

Hey this is my first question on here, so go easy.
I set up a Nextcloud server on my homelab in an Ubuntu Server 20.04 VM using the snap install. I have a separate VM running nginx as a reverse proxy to my Nextcloud instance. Everything works flawlessly as intended, except that when I try to install apps on Nextcloud, I get curl error #7. I've tried using my LAN IP through the web UI, my public domain name through the web UI, and the command line using the nextcloud.occ app:install command; I always get the same error.
I tried to find the appropriate log file to get more information, but looking in /var/snap/nextcloud/current/log/ I couldn't find any relevant info in any of the logs. Running php -m says PHP is not installed, I guess because PHP comes bundled with the Nextcloud snap? Obviously PHP is installed somewhere because Nextcloud is running, but I don't know how to see which modules are enabled, or how to install new ones under the snap. Any help on what to do is much appreciated!
Update: I fixed it. I think I had improperly configured my firewall; turning it off (in Proxmox) and making some changes to my /etc/netplan/*.yaml file to properly configure the static IP fixed it. Good luck!
Another cause can be a wrongly configured network. I forgot to set the IPv4 gateway, so github.com was unreachable. Most other services I use seem to resolve IPv6 first, so I didn't have any other problems besides updating Nextcloud apps.
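Curl error 7 means "failed to connect to host", which fits a missing route or gateway in both reports above. For reference, a minimal netplan sketch for a static IPv4 setup on Ubuntu 20.04; the interface name and every address are assumptions to adapt, and the file is applied with sudo netplan apply:

```yaml
# Hypothetical /etc/netplan/01-static.yaml on the Nextcloud VM.
network:
  version: 2
  ethernets:
    ens18:                        # interface name is an assumption
      dhcp4: false
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1       # the piece both fixes above were missing
      nameservers:
        addresses: [192.168.1.1, 1.1.1.1]
```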

How to use dask jupyterlab extensions behind nginx proxy

I'm running a small scientific cluster in our lab. JupyterHub is installed to run Jupyter notebooks with Python/Julia/R for multiple users. We are new to Dask.
Dask and the JupyterLab extension work fine if I run them locally on a node and access them through 127.0.0.1.
However, I can't get Dask to play nicely with the nginx proxy we normally use to connect to JupyterHub: the status pages still point to 127.0.0.1 instead of the access node's IP.
Any hints are appreciated.
Our setup:
Nginx <-> JupyterHub on the access node
Slurm scheduler
8 compute nodes
All on the same subnet
Somehow, I'm not the only one; see this thread:
https://github.com/dask/dask-labextension/issues/41
However, it is totally unclear to me how to tackle this, since...
If someone could outline the steps, it would be really helpful.
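One approach discussed around that issue is to change the dashboard link template so links route through JupyterHub's own proxy instead of 127.0.0.1; this assumes jupyter-server-proxy is installed in the single-user environments. A sketch of the Dask config (e.g. in ~/.config/dask/distributed.yaml):

```yaml
# Make distributed / dask-labextension generate proxied dashboard links.
# {JUPYTERHUB_SERVICE_PREFIX} and {port} are substituted at runtime.
distributed:
  dashboard:
    link: "{JUPYTERHUB_SERVICE_PREFIX}proxy/{port}/status"
```

With this, the status pages are served under the hub's URL, so the existing nginx -> JupyterHub proxying keeps working unchanged.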

I am unable to vagrant up

I am using a project from GitHub.
It comes with a Vagrantfile. When I run the vagrant up command in my terminal, I get an error.
The terminal should show a READ ABOVE message after a successful download.
I want to type the site's address into my browser to start a local development server.
It's a pretty old file, and the repo was using PuPHPet, but that project seems to have been dead for two years; its website is down.
In your case, Vagrant is trying to download the box from the internet, but the owner hosted this box under the puphpet domain, which is no longer available.
I am not sure what the best way to help is now:
find another, more recent example and start from there
if you want to fix this, you will need to edit https://github.com/LearnWebCode/vagrant-lamp/blob/master/puphpet/config.yaml#L6 and use a different box available on the Vagrant site; Ubuntu 16.04 is pretty old now, but you can search for one on the Vagrant box search
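For the second option, the box swap can be scripted against the PuPHPet config. A hedged sketch; bento/ubuntu-16.04 is just one publicly available Xenial box, pick any from the Vagrant box search:

```shell
# Swap the dead PuPHPet box for one still hosted on Vagrant Cloud,
# then re-run `vagrant up`. Path matches the repo linked above.
CONFIG=puphpet/config.yaml
if [ -f "$CONFIG" ]; then
  sed -i.bak \
    -e 's|^\( *box: \).*|\1bento/ubuntu-16.04|' \
    -e 's|^\( *box_url: \).*|\1bento/ubuntu-16.04|' \
    "$CONFIG"
fi
```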

OpenStack volume won't attach

I am using OpenStack to create a CentOS 7 VM.
I can get the VM to run, but the installer hits a snag on the first page:
it needs a disk to install to (Installation Destination).
I thought this was the volume I attached using the OpenStack app. I used the volume's edit-attachments dialog, and it pops up saying it will attach; however, the volume is never listed as attached to ANY instance I attach it to.
It also needs an Installation Source, for which I was using the URL from the mirror site. Here is the URL:
ISO URL
I used the net install ISO. I tried the same URL for the installation source, and I also tried the URL with isos changed to os, i.e. this:
OS URL
Thanks for any help.
When you create VMs in OpenStack, you are not supposed to go through the installation process. In the cloud you use cloud images that are ready to boot.
You should use a CentOS cloud image.
Try loading this CentOS 7 image into your OpenStack Glance:
http://ubuntu.mirror.cloud.switch.ch/engines/images/2016-04-15/centos7.raw
You should then be able to boot your VM and log in with the username centos and the public key you provide via cloud-init.
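The upload-and-boot flow described above can be done with the OpenStack CLI. A hedged outline only; the image name, flavor, network, and key name are placeholders for your environment:

```shell
# Upload the CentOS 7 cloud image into Glance (raw format, as downloaded above).
openstack image create centos7-cloud \
  --file centos7.raw --disk-format raw --container-format bare

# Boot an instance from it; flavor/network/key names are assumptions.
openstack server create centos7-vm \
  --image centos7-cloud --flavor m1.small \
  --network private --key-name mykey
```

Once the instance is active, ssh centos@<instance-ip> should work with the injected key; no installer is involved.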
