Ghost config.js file - ghost-blog

I'm trying to create a simple blog using Ghost, and I'm running into a problem when starting it in the production environment.
I'm on v0.7.1, and here's the production part of my config file:
production: {
    url: 'http://<my-public-ip>',
    mail: {},
    database: {
        client: 'sqlite3',
        connection: {
            filename: path.join(__dirname, '/content/data/ghost.db')
        },
        debug: false
    },
    server: {
        host: '127.0.0.1',
        port: '2368'
    }
}
The problem is that when I try to access my public IP in a browser, I don't get anything at all on the screen (404 Not Found), even if I try on port 2368.
My firewall rules are set correctly.
What am I doing wrong?

In the server object, the host should be 0.0.0.0:
server: {
    host: '0.0.0.0',
    port: '2368'
}

In the server object, change the host:
host: '127.0.0.1', --> host: '0.0.0.0'
Now start the Ghost server with:
npm start --production
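With host: '0.0.0.0' the blog answers on http://<my-public-ip>:2368. If you also want plain http://<my-public-ip> to work, as the url setting suggests, one option (a sketch, not part of the answers above) is to keep Ghost on 2368 and put nginx in front of it as a reverse proxy:

server {
    listen 80;
    server_name <my-public-ip>;

    location / {
        # forward port-80 traffic to the local Ghost instance
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://127.0.0.1:2368;
    }
}

With a proxy like this in place, Ghost itself can even stay bound to 127.0.0.1, since only nginx needs to be reachable from outside.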

Related

How to use nginx module on Filebeat in k8s

I am trying to use Filebeat with the nginx module to collect logs from nginx-ingress-controller and send them directly to Elasticsearch, but I keep getting an error:
Provided Grok expressions do not match field value: [172.17.0.1 - - [03/Dec/2022:00:05:01 +0000] \"GET /healthz HTTP/1.1\" 200 0 \"-\" \"kube-probe/1.24\" \"-\"]
This appears in Kibana under the error message.
Note that I am running the latest Filebeat Helm chart (8.5) and the nginx controller is nginx-ingress-controller-9.2.15 (app version 1.2.1).
My Filebeat configuration:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: false
      templates:
        - condition:
            contains:
              kubernetes.pod.name: redis
          config:
            - module: redis
              log:
                input:
                  type: container
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  paths:
                    - /var/log/containers/*${data.kubernetes.container.id}.log
        - condition:
            contains:
              kubernetes.pod.name: nginx
          config:
            - module: nginx
              access:
                enabled: true
                input:
                  type: container
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  paths:
                    - /var/lib/docker/containers/${data.kubernetes.container.id}/*.log
output.elasticsearch:
  host: '${NODE_NAME}'
  hosts: '["https://${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}"]'
  username: '${ELASTICSEARCH_USERNAME}'
  password: '${ELASTICSEARCH_PASSWORD}'
  protocol: https
  ssl.certificate_authorities: ["/usr/share/filebeat/certs/ca.crt"]
setup.ilm:
  enabled: true
  overwrite: true
  policy_file: /usr/share/filebeat/ilm.json
setup.dashboards.enabled: true
setup.kibana.host: "http://kibana:5601"
ilm.json: |
  {
    "policy": {
      "phases": {
        "hot": {
          "actions": {
            "rollover": {
              "max_age": "1d"
            }
          }
        },
        "delete": {
          "min_age": "7d",
          "actions": {
            "delete": {}
          }
        }
      }
    }
  }
And the logs from the controller are:
172.17.0.1 - - [02/Dec/2022:23:43:49 +0000] "GET /healthz HTTP/1.1" 200 0 "-" "kube-probe/1.24" "-"
Can someone help me understand what I am doing wrong?
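The "Provided Grok expressions do not match field value" message comes from the grok processor in the ingest pipeline that the nginx module installs, so one way to narrow it down is to replay a raw controller log line through that pipeline with Elasticsearch's simulate API. This is only a diagnostic sketch; the exact pipeline id depends on the Filebeat version, so list the installed pipelines first:

# list the ingest pipelines Filebeat has set up
GET _ingest/pipeline/filebeat-*

# replay one raw controller log line through the nginx access pipeline
# (the pipeline id below is an assumption based on the usual naming scheme)
POST _ingest/pipeline/filebeat-8.5.0-nginx-access-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "172.17.0.1 - - [02/Dec/2022:23:43:49 +0000] \"GET /healthz HTTP/1.1\" 200 0 \"-\" \"kube-probe/1.24\" \"-\""
      }
    }
  ]
}

If the simulate call reproduces the same grok failure, the mismatch is between the controller's log format and the access-log format the module expects, rather than anything in the autodiscover wiring.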

Meteor app on DigitalOcean with https://app and https://www. Sometimes fails to serve https://app

Problem:
My app runs on a DigitalOcean droplet with multiple domains:
proxy: {
    domains: 'example.com,www.example.com',
    ssl: {
        letsEncryptEmail: '#'
    }
}
Sometimes, for about half an hour, https://example.com fails to load completely, while deeper links like https://example.com/about work fine.
Tried:
Fiddling with the nginx option:
nginxServerConfig: './nginx.conf',
Any attempt with it failed to load the page completely.
Mup.js file:
module.exports = {
    servers: {
        one: {}
    },
    app: {
        deployCheckWaitTime: 300,
        name: 'example',
        path: '../',
        buildOptions: {
            serverOnly: true,
        },
        env: {
            ROOT_URL: 'https://example.com',
            MONGO_URL: 'mongodb://mongodb:27017/example',
        },
        docker: {
            image: 'abernix/meteord:node-8.4.0-base',
            args: ['--link=mongodb:mongodb'],
        },
        enableUploadProgressBar: true
    },
    proxy: {
        domains: 'example.com,www.example.com',
        ssl: {
            letsEncryptEmail: '#'
        }
    }
};
It turned out that the problem was Mailgun.
The Mailgun DNS records didn't match v=spf1 include:eu.mailgun.org ~all, so those mails weren't authorized, and whenever a mail was sent through the system it caused the domain provider to refresh its DNS.
I solved the issue by setting up a permanent redirect for www in my domain settings.
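For reference, the SPF value mentioned above lives in a TXT record on the sending domain; in zone-file notation it would look roughly like this (domain and TTL are placeholders):

example.com.    3600    IN    TXT    "v=spf1 include:eu.mailgun.org ~all"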

Meteor Up Setup Error on DigitalOcean

I am trying to deploy a Meteor app onto a DigitalOcean droplet, via its IP address (I have no domain name). I am doing this kind of thing for the first time, and am having a lot of issues with it.
This is my droplet on Digital Ocean:
I created a MUP (Meteor Up) directory outside my Meteor app’s repo using mup init, and this is the mup.js file that I have:
module.exports = {
    servers: {
        one: {
            host: 'http://162.243.57.207',
            username: 'cs673f16',
            pem: '/Users/gautambhat/.ssh/id_rsa'
            // password:
            // or leave blank for authenticate from ssh-agent
        }
    },
    meteor: {
        name: 'meetcute',
        path: '/Users/gautambhat/Repos/CS673_team2',
        servers: {
            one: {}
        },
        buildOptions: {
            serverOnly: true,
        },
        env: {
            ROOT_URL: 'http://162.243.57.207',
            PORT: 3000
            //MONGO_URL: 'mongodb://localhost/meteor'
        },
        //dockerImage: 'kadirahq/meteord'
        deployCheckWaitTime: 60
    },
    mongo: {
        oplog: true,
        port: 27017,
        servers: {
            one: {},
        },
    },
};
Also, I don't know my MONGO_URL, or where to find it, so I just commented it out. On running mup setup, I get the following error:
Started TaskList: Setup Docker
[http://162.243.57.207] - setup docker
Error getaddrinfo ENOTFOUND http://162.243.57.207 http://162.243.57.207:22
Can anyone point me in the right direction?
The error, as described in your original post, is as follows:
Started TaskList: Setup Docker
[http://162.243.57.207] - setup docker
Error getaddrinfo ENOTFOUND http://162.243.57.207 http://162.243.57.207:22
The error basically means it can't find the host http://162.243.57.207. So let's look at the servers part of your configuration:
servers: {
    one: {
        host: 'http://162.243.57.207',
        username: 'cs673f16',
        pem: '/Users/gautambhat/.ssh/id_rsa'
        // password:
        // or leave blank for authenticate from ssh-agent
    }
},
Your host setting is a URL when it should be a hostname or IP address, meaning host: 'http://162.243.57.207' should just be host: '162.243.57.207'. So change that and try again:
servers: {
    one: {
        host: '162.243.57.207',
        username: 'cs673f16',
        pem: '/Users/gautambhat/.ssh/id_rsa'
        // password:
        // or leave blank for authenticate from ssh-agent
    }
},
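Before re-running mup setup, it's also worth sanity-checking that the host, user, and key actually work by SSHing in directly with the same values from the config:

ssh -i /Users/gautambhat/.ssh/id_rsa cs673f16@162.243.57.207

If that logs in cleanly, mup setup should be able to reach the droplet on port 22 as well; if it doesn't, fix the SSH side first.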

Livereload a local url using Grunt watch

I'm developing a small HTML/JS/SASS project without a backend/server, and I'm trying to get livereloading to work.
In my grunt.initConfig I have:
options: {
    livereload: true,
},
But in Chrome I can't activate the LiveReload plugin on a local HTML file.
file:///C:/Users/alucardu/Documents/Visual%20Studio%202015/Projects/JS-demo/JS-demo/index.html
Is this possible to do or do I need to run a server?
Apparently Grunt has something for this called connect: https://github.com/gruntjs/grunt-contrib-connect
When I installed it, I added this to my Gruntfile:
connect: {
    server: {
        options: {
            open: true,
            keepalive: true,
            hostname: 'localhost',
            port: 8080,
            base: ''
        }
    }
},
And now I can run a local static server :)
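Since keepalive: true blocks, any task listed after connect would never run, so combining it with watch-based livereloading needs a slightly different setup. A possible combined Gruntfile is sketched below (the watched file paths and the serve task name are placeholders, not from the original post):

module.exports = function (grunt) {
    grunt.initConfig({
        connect: {
            server: {
                options: {
                    open: true,
                    hostname: 'localhost',
                    port: 8080,
                    base: '',
                    // inject the livereload snippet into every served page
                    livereload: 35729
                }
            }
        },
        watch: {
            all: {
                files: ['*.html', 'js/**/*.js', 'css/**/*.css'],
                options: {
                    // must match the livereload port used by connect above
                    livereload: 35729
                }
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-connect');
    grunt.loadNpmTasks('grunt-contrib-watch');

    // connect runs first (without keepalive), then watch keeps the process alive
    grunt.registerTask('serve', ['connect:server', 'watch']);
};

Running grunt serve then serves the project on http://localhost:8080 and reloads the browser whenever a watched file changes.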

Connect to a remote RabbitMQ server

I am using RabbitMQ to send messages in my Symfony2 application.
I have used OldSoundRabbitMqBundle for this.
After installing the RabbitMQ server on my application server, everything works fine.
But when I install the RabbitMQ server on a different machine and try to connect to it from my application server, it does not connect.
My connection config is as follows:
old_sound_rabbit_mq:
    connections:
        default:
            host: myrabbitserverIp
            port: 80
            user: 'test'
            password: 'test'
            vhost: '/'
            lazy: false
    producers:
        messages:
            connection: default
            exchange_options: {name: 'messages', type: direct}
    consumers:
        messages:
            connection: default
            exchange_options: {name: 'messages', type: direct}
            queue_options: {name: 'messages'}
            callback: message.amqp_consumer
Do I need to change any configuration on the RabbitMQ server?
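One thing worth double-checking (an assumption, since the post doesn't say why port 80 was chosen) is the port: a stock RabbitMQ broker listens for AMQP on 5672, not 80, so a connection block for a remote broker would usually look more like this:

old_sound_rabbit_mq:
    connections:
        default:
            host: myrabbitserverIp
            # 5672 is RabbitMQ's default AMQP port; 80 is normally plain HTTP
            port: 5672
            user: 'test'
            password: 'test'
            vhost: '/'
            lazy: false

Port 5672 also needs to be open in the remote machine's firewall, and the 'test' user must exist on that broker with permissions on the '/' vhost.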
