I had a GitLab environment that was working fine on CentOS 6, using port 8081 (Apache already uses 80). Then a yum update ran, and now I can't access my GitLab: 502 Bad Gateway.
The issue is... I don't understand much of what I'm doing. Configuring GitLab in the first place wasn't easy... but this time, I don't know what to do.
In the nginx logs (nginx was installed by GitLab), I have:
2016/01/28 18:37:37 [crit] 31765#0: *1 connect() to unix:localhost:8081 failed (2: No such file or directory) while connecting to upstream, client: 59.118.47.19, server: gitlab.mydomain.com, request: "GET / HTTP/1.1", upstream: "http://unix:localhost:8081:/", host: "gitlab.mydomain.com:8081"
I already tried checking the configuration and running gitlab-ctl reconfigure and gitlab-ctl restart, but that doesn't seem to change anything at all.
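For context, the error reads as if nginx is treating "localhost:8081" as a unix socket path rather than a TCP address. Here is roughly what can be checked, as a sketch assuming the standard Omnibus file layout (the paths are my assumption):
# Inspect the upstream block nginx actually generated
grep -n -A 3 "upstream" /var/opt/gitlab/nginx/conf/gitlab-http.conf
# Check whether gitlab-workhorse is up and what it listens on
sudo gitlab-ctl status gitlab-workhorse
ls -l /var/opt/gitlab/gitlab-workhorse/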
Here is the result of gitlab-ctl show-config:
gitlab-ctl show-config
[2016-01-28T18:42:27+01:00] WARN: Ohai::Config[:disabled_plugins] is set. Ohai::Config[:disabled_plugins] is deprecated and will be removed in future releases of ohai. Use ohai.disabled_plugins in your configuration file to configure :disabled_plugins for ohai.
Starting Chef Client, version 12.5.1
resolving cookbooks for run list: ["gitlab::show_config"]
Synchronizing Cookbooks:
- package (0.0.0)
- runit (0.14.2)
- gitlab (0.0.1)
Compiling Cookbooks...
{
  "gitlab": {
    "bootstrap": {},
    "omnibus-gitconfig": {},
    "manage-accounts": {},
    "user": {
      "git_user_email": "gitlab@gitlab.mydomain.com"
    },
    "redis": {},
    "ci-redis": {},
    "gitlab-rails": {
      "secret_token": "2e6c9197f8a81829348e8a29d0608976189abe04b0e28384428c49abcb8c8c9a5b95cfeac5b5f3e59",
      "gitlab_host": "gitlab.mydomain.com",
      "gitlab_email_from": "gitlab@gitlab.mydomain.com",
      "gitlab_https": false,
      "gitlab_port": 8081,
      "shared_path": "/var/opt/gitlab/gitlab-rails/shared",
      "artifacts_path": "/var/opt/gitlab/gitlab-rails/shared/artifacts",
      "lfs_storage_path": "/var/opt/gitlab/gitlab-rails/shared/lfs-objects",
      "pages_path": "/var/opt/gitlab/gitlab-rails/shared/pages",
      "db_username": "gitlab",
      "db_host": null,
      "db_port": 5432
    },
    "gitlab-ci": {
      "secret_token": null,
      "secret_key_base": "18f7e0e892c1cba94454a289abc285492afvd42909fbez364c9a4a45efc7c092b678e5b6bb986383",
      "db_key_base": "3fd05caaefd2ez3d5c5abc184fezi71962480936de4674609a6802b",
      "db_username": "gitlab_ci",
      "db_host": null,
      "db_port": 5432
    },
    "gitlab-shell": {
      "secret_token": "95f9af3zff567dcd70fbbc847dc67679d5e3717498294fhe97f978756f14bcca26f94b48db43f85008c1fab280d1e1ce3c21d4156369b"
    },
    "unicorn": {},
    "ci-unicorn": {},
    "sidekiq": {},
    "ci-sidekiq": {},
    "gitlab-workhorse": {
      "listen_addr": "localhost:8081",
      "auth_socket": "/var/opt/gitlab/gitlab-rails/sockets/gitlab.socket"
    },
    "mailroom": {},
    "nginx": {
      "listen_port": 8081
    },
    "ci-nginx": {
      "listen_port": 80
    },
    "mattermost-nginx": {
      "listen_port": null
    },
    "pages-nginx": {
      "listen_port": null
    },
    "logging": {},
    "remote-syslog": {},
    "logrotate": {},
    "high-availability": {},
    "postgresql": {},
    "web-server": {},
    "mattermost": {
      "email_invite_salt": "5723719e29c111gr11c34905c25e838da",
      "file_public_link_salt": "ede2ceb23d88reg83cc7060852525050251",
      "email_password_reset_salt": "9a79cff28e13fe248e53c3a13dc0d1f3aee",
      "sql_at_rest_encrypt_key": "f57d4bf0284dff8db5691ffc489b3f8312a1",
      "sql_data_source": "user=gitlab_mattermost host=/var/opt/gitlab/postgresql port=5432 dbname=mattermost_production",
      "sql_data_source_replicas": [
        "user=gitlab_mattermost host=/var/opt/gitlab/postgresql port=5432 dbname=mattermost_production"
      ]
    },
    "external-url": "http://gitlab.mydomain.com:8081",
    "ci-external-url": null,
    "mattermost-external-url": null,
    "pages-external-url": null
  }
}
Converging 0 resources
Running handlers:
Running handlers complete
Chef Client finished, 0/0 resources updated in 01 seconds
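One thing stands out in this dump: nginx's listen_port and gitlab-workhorse's listen_addr are both 8081, and the nginx error above shows it dialing unix:localhost:8081, i.e. interpreting that address as a socket path. A minimal sketch of /etc/gitlab/gitlab.rb settings that would put workhorse back on its default unix socket while keeping nginx on 8081 (the exact keys are my assumption for this Omnibus version):
external_url "http://gitlab.mydomain.com:8081"
# let workhorse listen on its default unix socket again,
# instead of colliding with nginx on localhost:8081
gitlab_workhorse['listen_network'] = "unix"
gitlab_workhorse['listen_addr'] = "/var/opt/gitlab/gitlab-workhorse/socket"
followed by gitlab-ctl reconfigure and gitlab-ctl restart.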
If you have any pointers to what I could do or check, that would be great, because I am kind of stuck (and since I don't understand much of what I'm doing, that doesn't help...).
Thank you!
Related
This is the first time I am deploying with Meteor Up, and I followed the docs to deploy a dummy application first. I am deploying on a shared Linux server. Everything goes fine, but I can't find my app at ROOT_URL. My domain points to the server, and that same domain is also my ROOT_URL. When I hit the domain, it goes to the index of the file explorer on the server instead of my web app. I tried to find logs, but the logs command and the --verbose flag returned nothing; the command simply ran as usual.
Mup version (`1.5.3`):
Mup config
{ "servers": {
"one": {
"host": "1.2.3.4",
"username": "totalti1",
"password": "password",
"opts": {
"port": 2083
}
} }, "proxy": {
"servers": {
"one": {}
},
"domains": "host.com,subdomain.host.com",
"shared": {
"httpPort": 80,
"httpsPort": 443
} }, "app": {
"name": "my-app",
"path": "../.",
"deployCheckWaitTime": 300,
"servers": {
"one": {}
},
"buildOptions": {
"serverOnly": true
},
"env": {
"ROOT_URL": "https://host.com",
"MONGO_URL": "mongodb://mongodb:27017/my-app",
"MONGO_OPLOG_URL": "mongodb://localhost/local",
"VIRTUAL_HOST": "host.com,subdomain.host.com",
"HTTPS_METHOD": "noredirect",
"VIRTUAL_PORT": 3000,
"HTTP_FORWARDED_COUNT": 1
},
"docker": {
"image": "abernix/meteord:node-12-base",
"prepareBundle": false,
"stopAppDuringPrepareBundle": true,
"imagePort": 3000,
"args": [
"--link=mongodb:mongodb"
]
},
"enableUploadProgressBar": true,
"type": "meteor" }, "mongo": {
"version": "3.4.1",
"servers": {
"one": {}
},
"dbName": "DemoApp" } }
The port of my host is 2083, and I am not sure if that is causing a problem. I am also not sure whether the deployment was unsuccessful or the URL has a mistake. I was able to get some logs after setting the DEBUG environment variable; here they are.
Output of the command:
$ DEBUG=mup* mup reconfig --verbose
mup:updates checking for updates +0ms
mup:updates Packages: [ { name: 'mup', path: '/usr/lib/node_modules/mup/package.json' } ] +2ms
mup:updates retrieving tags for mup +2ms
mup:api Running command default.reconfig +0ms
mup:module:default exec => mup reconfig +0ms
mup:api Running command meteor.envconfig +2ms
mup:module:meteor exec => mup meteor envconfig +0ms
Started TaskList: Configuring App [213.136.76.119] - Pushing the Startup Script
mup:updates finished update check for mup +1s
I am looking for help, as I have been stuck on this deployment for three days now. Thanks in advance.
EDIT
Is there a way to know whether the deployment was successful or not? Also, is there something wrong with my ROOT_URL? The root URL contains the IP of the server on which I have hosted the app. The domain also points to that IP. When I access it by IP, it says:
Sorry!
IP changed or server misconfig or site may have moved to different ip. Contact your hosting provider.
When I access it via the domain, it shows the empty directory that the default domain is set to.
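To separate "the deploy failed" from "the proxy/DNS is wrong", a rough check on the server itself may help; the container names below are the ones mup conventionally creates (my assumption):
# SSH in on the custom port and look at what mup started
ssh -p 2083 totalti1@1.2.3.4
docker ps                         # expect 'my-app' and 'mup-nginx-proxy'
docker logs --tail 50 my-app      # application-level errors
docker logs --tail 50 mup-nginx-proxy
# from anywhere: confirm the domain really points at this box
dig +short host.com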
Problem:
My app runs on a DigitalOcean droplet with multiple domains:
proxy: {
  domains: 'example.com,www.example.com',
  ssl: {
    letsEncryptEmail: '#'
  }
}
Sometimes, for about half an hour, https://example.com fails to load at all, while deeper links like https://example.com/about work fine.
Tried:
fiddling with the nginx option
nginxServerConfig: './nginx.conf',
but every attempt with it made the page fail to load completely.
Mup.js file:
module.exports = {
  servers: {
    one: {}
  },
  app: {
    deployCheckWaitTime: 300,
    name: 'example',
    path: '../',
    buildOptions: {
      serverOnly: true,
    },
    env: {
      ROOT_URL: 'https://example.com',
      MONGO_URL: 'mongodb://mongodb:27017/example',
    },
    docker: {
      image: 'abernix/meteord:node-8.4.0-base',
      args: ['--link=mongodb:mongodb'],
    },
    enableUploadProgressBar: true
  },
  proxy: {
    domains: 'example.com,www.example.com',
    ssl: {
      letsEncryptEmail: '#'
    }
  }
};
It turns out the problem lay in Mailgun.
The Mailgun DNS records didn't match v=spf1 include:eu.mailgun.org ~all, so those mails weren't authorized, and whenever mail was sent through the system, it tripped the domain provider into refreshing its DNS.
I solved this issue by setting up a permanent redirect for www through my domain settings.
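For anyone hitting something similar: the live SPF record can be compared against what Mailgun expects with a plain DNS query, e.g.:
# the TXT record on the sending domain should contain the Mailgun include
dig +short TXT example.com
# expected to contain: "v=spf1 include:eu.mailgun.org ~all"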
When I used the following config to deploy to DigitalOcean, I did a $ mup setup with no errors and then ran $ mup deploy, and everything went well until it tried to prepare the bundle... What am I missing here?
This is the error I'm getting with the config below when deploying...
module.exports = {
  servers: {
    one: {
      host: 'xxx.xxx.xxx.xxx',
      username: 'root',
      pem: "~/.ssh/id_rsa"
    }
  },
  app: {
    name: 'hotel',
    path: '../hotel',
    // this is my local path to the application root
    servers: {
      one: {},
    },
    buildOptions: {
      serverOnly: true,
    },
    env: {
      ROOT_URL: 'http://xxx.xxx.xxx.xxx',
      MONGO_URL: 'mongodb://localhost/meteor',
    },
    docker: {
      image: 'abernix/meteord:base',
    },
    // Show progress bar while uploading bundle to server
    // You might need to disable it on CI servers
    enableUploadProgressBar: true
  },
  mongo: {
    version: '3.4.1',
    servers: {
      one: {}
    }
  }
};
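Without the actual error text it is hard to diagnose, but a usual first step is to re-run the failing stage verbosely and then read the container state on the droplet; a sketch (the container being named after the app is my assumption about mup's convention):
mup deploy --verbose
# if it still dies in Prepare Bundle, inspect on the server:
ssh root@xxx.xxx.xxx.xxx
docker ps -a                # look for the 'hotel' container and its exit state
docker logs --tail 100 hotel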
When I want to use
"constraints": [["hostname", "CLUSTER", "192.168.18.6(1|2)"]]
or
"constraints": [["hostname", "CLUSTER", "DCOS-S-0(1|2)"]]
in Marathon, the app "/zaslepki/4maxpl" sits in Waiting status the whole time.
So I tried using an attribute instead - I executed:
[root@DCOS-S-00 etc]# systemctl stop dcos-mesos-slave-public.service
[root@DCOS-S-00 etc]# mesos-slave --work_dir=/var/lib/mesos/slave --attributes=DC:DL01 --master=zk://192.168.18.51:2181,192.168.18.51:2181,192.168.18.53:2181/mesos
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1229 13:16:19.800616 24537 main.cpp:243] Build: 2016-11-07 21:31:04 by
I1229 13:16:19.800720 24537 main.cpp:244] Version: 1.0.1
I1229 13:16:19.800726 24537 main.cpp:251] Git SHA: d5746045ac740d5f28f238dc55ec95c89d2b7cd9
I1229 13:16:19.807195 24537 systemd.cpp:237] systemd version `219` detected
I1229 13:16:19.807232 24537 main.cpp:342] Inializing systemd state
I1229 13:16:19.820071 24537 systemd.cpp:325] Started systemd slice `mesos_executors.slice`
I1229 13:16:19.821051 24537 containerizer.cpp:196] Using isolation: posix/cpu,posix/mem,filesystem/posix,network/cni
I1229 13:16:19.825422 24537 linux_launcher.cpp:101] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher
I1229 13:16:19.826690 24537 main.cpp:434] Starting Mesos agent
2016-12-29 13:16:19,827:24537(0x7f8ecae60700):ZOO_INFO@log_env@726: Client environment:zookeeper.version=zookeeper C client 3.4.8
2016-12-29 13:16:19,827:24537(0x7f8ecae60700):ZOO_INFO@log_env@730: Client environment:host.name=DCOS-S-00
2016-12-29 13:16:19,827:24537(0x7f8ecae60700):ZOO_INFO@log_env@737: Client environment:os.name=Linux
2016-12-29 13:16:19,827:24537(0x7f8ecae60700):ZOO_INFO@log_env@738: Client environment:os.arch=3.10.0-514.2.2.el7.x86_64
2016-12-29 13:16:19,827:24537(0x7f8ecae60700):ZOO_INFO@log_env@739: Client environment:os.version=#1 SMP Tue Dec 6 23:06:41 UTC 2016
2016-12-29 13:16:19,827:24537(0x7f8ecae60700):ZOO_INFO@log_env@747: Client environment:user.name=root
2016-12-29 13:16:19,827:24537(0x7f8ecae60700):ZOO_INFO@log_env@755: Client environment:user.home=/root
2016-12-29 13:16:19,827:24537(0x7f8ecae60700):ZOO_INFO@log_env@767: Client environment:user.dir=/opt/mesosphere/etc
2016-12-29 13:16:19,827:24537(0x7f8ecae60700):ZOO_INFO@zookeeper_init@800: Initiating client connection, host=192.168.18.51:2181,192.168.18.51:2181,192.168.18.53:2181 sessionTimeout=10000 watcher=0x7f8ed221a030 sessionId=0 sessionPasswd=<null> context=0x7f8ebc001ee0 flags=0
I1229 13:16:19.828233 24537 slave.cpp:198] Agent started on 1)@192.168.18.60:5051
2016-12-29 13:16:19,828:24537(0x7f8ec8c49700):ZOO_INFO@check_events@1728: initiated connection to server [192.168.18.51:2181]
I1229 13:16:19.828263 24537 slave.cpp:199] Flags at startup: --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/mesos/store/appc" --attributes="DC:DL01" --authenticate_http_readonly="false" --authenticate_http_readwrite="false" --authenticatee="crammd5" --authentication_backoff_factor="1secs" --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="mesos" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker" --docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/mesos/fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" --hadoop_home="" --help="false" --hostname_lookup="true" --http_authenticators="basic" --http_command_executor="false" --image_provisioner_backend="copy" --initialize_driver_logging="true" --ip_discovery_command="/opt/mesosphere/bin/detect_ip" --isolation="posix/cpu,posix/mem" --launcher_dir="/opt/mesosphere/packages/mesos--253f5cb0a96e2e3574293ddfecf5c63358527377/libexec/mesos" --logbufsecs="0" --logging_level="INFO" --master="zk://192.168.18.51:2181,192.168.18.51:2181,192.168.18.53:2181/mesos" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns" --quiet="false" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="1secs" --revocable_cpu_low_priority="true" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true" --systemd_enable_support="true" --systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/var/lib/mesos/slave"
I1229 13:16:19.829263 24537 slave.cpp:519] Agent resources: cpus(*):8; mem(*):6541; disk(*):36019; ports(*):[31000-32000]
I1229 13:16:19.829306 24537 slave.cpp:527] Agent attributes: [ DC=DL01 ]
I1229 13:16:19.829319 24537 slave.cpp:532] Agent hostname: DCOS-S-00
2016-12-29 13:16:19,832:24537(0x7f8ec8c49700):ZOO_INFO@check_events@1775: session establishment complete on server [192.168.18.51:2181], sessionId=0x1593f6a1ef20fce, negotiated timeout=10000
I1229 13:16:19.832623 24548 state.cpp:57] Recovering state from '/var/lib/mesos/slave/meta'
I1229 13:16:19.832695 24547 group.cpp:349] Group process (group(1)@192.168.18.60:5051) connected to ZooKeeper
I1229 13:16:19.832723 24547 group.cpp:837] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I1229 13:16:19.832736 24547 group.cpp:427] Trying to create path '/mesos' in ZooKeeper
I1229 13:16:19.834234 24547 detector.cpp:152] Detected a new leader: (id='70')
I1229 13:16:19.834319 24547 group.cpp:706] Trying to get '/mesos/json.info_0000000070' in ZooKeeper
I1229 13:16:19.835002 24547 zookeeper.cpp:259] A new leading master (UPID=master@192.168.18.53:5050) is detected
Failed to perform recovery: Incompatible agent info detected.
------------------------------------------------------------
Old agent info:
hostname: "192.168.18.60"
resources {
name: "ports"
type: RANGES
ranges {
range {
begin: 1
end: 21
}
range {
begin: 23
end: 5050
}
range {
begin: 5052
end: 32000
}
}
role: "slave_public"
}
resources {
name: "disk"
type: SCALAR
scalar {
value: 37284
}
role: "slave_public"
}
resources {
name: "cpus"
type: SCALAR
scalar {
value: 8
}
role: "slave_public"
}
resources {
name: "mem"
type: SCALAR
scalar {
value: 6541
}
role: "slave_public"
}
attributes {
name: "public_ip"
type: TEXT
text {
value: "true"
}
}
id {
value: "8bc3d621-ed8a-4641-88c1-7a7163668263-S9"
}
checkpoint: true
port: 5051
------------------------------------------------------------
New agent info:
hostname: "DCOS-S-00"
resources {
name: "cpus"
type: SCALAR
scalar {
value: 8
}
role: "*"
}
resources {
name: "mem"
type: SCALAR
scalar {
value: 6541
}
role: "*"
}
resources {
name: "disk"
type: SCALAR
scalar {
value: 36019
}
role: "*"
}
resources {
name: "ports"
type: RANGES
ranges {
range {
begin: 31000
end: 32000
}
}
role: "*"
}
attributes {
name: "DC"
type: TEXT
text {
value: "DL01"
}
}
id {
value: "8bc3d621-ed8a-4641-88c1-7a7163668263-S9"
}
checkpoint: true
port: 5051
------------------------------------------------------------
To remedy this do as follows:
Step 1: rm -f /var/lib/mesos/slave/meta/slaves/latest
This ensures agent doesn't recover old live executors.
Step 2: Restart the agent.
[root@DCOS-S-00 etc]# rm -f /var/lib/mesos/slave/meta/slaves/latest
[root@DCOS-S-00 etc]# systemctl start dcos-mesos-slave-public.service
and then I used the following in the application's .json configuration file:
"constraints": [["DC", "CLUSTER", "DL01"]]
but the application status is still Waiting.....
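For reference, Marathon's launch queue can usually say why an app is stuck in Waiting, i.e. which offers were declined; a sketch of querying it (endpoint from Marathon's REST API; the master address and port are my assumptions for this cluster):
# shows queued apps and why recent offers didn't match
curl -s http://192.168.18.53:8080/v2/queue | python -m json.tool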
This is my application's .json file for "/zaslepki/4maxpl":
{
  "id": "/zaslepki/4maxpl",
  "cmd": null,
  "cpus": 0.5,
  "mem": 256,
  "disk": 0,
  "instances": 2,
  "constraints": [["hostname", "CLUSTER", "DCOS-S-0(3|4)"]],
  "acceptedResourceRoles": [
    "slave_public"
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "arekmax/4maxpl",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 0,
          "servicePort": 10015,
          "protocol": "tcp",
          "labels": {}
        }
      ],
      "privileged": false,
      "parameters": [],
      "forcePullImage": false
    }
  },
  "healthChecks": [
    {
      "path": "/",
      "protocol": "HTTP",
      "portIndex": 0,
      "gracePeriodSeconds": 300,
      "intervalSeconds": 30,
      "timeoutSeconds": 10,
      "maxConsecutiveFailures": 2,
      "ignoreHttp1xx": false
    }
  ],
  "labels": {
    "HAPROXY_GROUP": "external"
  },
  "portDefinitions": [
    {
      "port": 10015,
      "protocol": "tcp",
      "labels": {}
    }
  ]
}
What am I doing wrong? I found the same problem (link), but there the problem was fixed by using
constraints: [["DC", "CLUSTER", "DL01"]]
You've got a clue in the log:
Invalid attribute key:value pair 'DL01'
Change your attribute to a key:value pair, e.g. DC:DL01, and it should work. You will probably need to clean the metadata directory, because you are changing the agent configuration.
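On a DC/OS node the attribute also has to survive service restarts, which the one-off mesos-slave invocation above will not; a sketch of making it permanent (the extra-config file path is my assumption about the DC/OS convention):
# persist the attribute for the public agent service
echo 'MESOS_ATTRIBUTES=DC:DL01' >> /var/lib/dcos/mesos-slave-common
# old agent info is checkpointed, so clear it before restarting
rm -f /var/lib/mesos/slave/meta/slaves/latest
systemctl restart dcos-mesos-slave-public.service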
The CLUSTER operator doesn't work with multiple values. You need to pass a regular expression with the LIKE operator, so it should look like this:
"constraints": [["hostname", "LIKE", "192.168.18.6(1|2)"]]
I have a Meteor application that I'm trying to deploy on a VPS. I am using Meteor Up to do this.
By following the instructions I have set up my mup.js file to look like this:
module.exports = {
  servers: {
    one: {
      host: '41.185.27.69',
      username: 'root',
      // pem:
      password: 'secret',
      // or leave blank for authenticate from ssh-agent
      opts: {
        port: 22
      }
    }
  },
  meteor: {
    name: 'HelderbergLink',
    path: '../../HelderbergLink',
    servers: {
      one: {}
    },
    buildOptions: {
      serverOnly: true,
    },
    env: {
      ROOT_URL: 'http://41.185.27.69',
      MONGO_URL: 'mongodb://127.0.0.1:27017/HelderbergLink'
    },
    dockerImage: 'abernix/meteord:base',
    deployCheckWaitTime: 60
  },
  mongo: {
    oplog: true,
    port: 27017,
    servers: {
      one: {},
    },
  },
};
After this is set up I run the following command in my .deploy directory:
mup.cmd setup
Everything sets up successfully here, but then when I need to run the next command:
mup.cmd deploy
It runs through most of the process but then gives me this error:
I'm not sure what to do to resolve this. Any help would be much appreciated!
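Since the error text itself isn't shown, one way to surface it is to re-run verbosely and read the container logs on the VPS (the container being named after the app is my assumption about mup's convention):
mup.cmd deploy --verbose
# then, on the server:
ssh root@41.185.27.69
docker ps -a
docker logs --tail 100 HelderbergLink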