I am able to access my JupyterHub domain but am unable to get a pod to spawn.
This is the output from kubectl logs <pod> -n <namespace>:
[W 2021-06-04 19:59:07.037 SingleUserNotebookApp configurable:190] Config option `open_browser` not recognized by `SingleUserNotebookApp`. Did you mean `browser`?
[I 2021-06-04 19:59:07.052 SingleUserNotebookApp notebookapp:1593] Authentication of /metrics is OFF, since other authentication is disabled.
[W 2021-06-04 19:59:08.649 LabApp] 'ip' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[W 2021-06-04 19:59:08.649 LabApp] 'port' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[W 2021-06-04 19:59:08.649 LabApp] 'port' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[W 2021-06-04 19:59:08.649 LabApp] 'port' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[I 2021-06-04 19:59:08.656 LabApp] JupyterLab extension loaded from /opt/conda/lib/python3.9/site-packages/jupyterlab
[I 2021-06-04 19:59:08.657 LabApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
Patching auth into jupyter_server.base.handlers.JupyterHandler(jupyter_server.base.handlers.AuthenticatedHandler) -> JupyterHandler(jupyterhub.singleuser.mixins.HubAuthenticatedHandler, jupyter_server.base.handlers.AuthenticatedHandler)
[I 2021-06-04 19:59:08.715 SingleUserNotebookApp mixins:576] Starting jupyterhub-singleuser server version 1.4.1
[E 2021-06-04 19:59:28.728 SingleUserNotebookApp mixins:449] Failed to connect to my Hub at http://hub:8081/phys101/hub/api (attempt 1/5). Is it running?
Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/jupyterhub/singleuser/mixins.py", line 447, in check_hub_version
    resp = await client.fetch(self.hub_api_url)
tornado.simple_httpclient.HTTPTimeoutError: Timeout while connecting
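A minimal sketch of how that connection can be checked by hand, assuming the phys101 namespace from the URL above and the jupyter-admin1 pod name from the event log below (not something from the original post):

# does the hub Service exist and have endpoints in the user namespace?
kubectl get svc hub -n phys101
kubectl get endpoints hub -n phys101

# hit the same hub API URL from inside the user pod while it is running;
# a quick response (even an HTTP error) means the network path works, a hang reproduces the timeout
kubectl exec -n phys101 jupyter-admin1 -- python3 -c "import urllib.request; print(urllib.request.urlopen('http://hub:8081/phys101/hub/api', timeout=5).read())"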
If I go to the web page, the loading bar is stuck at about 95% and the last message just says 2021-06-04T19:59:05Z [Normal] Started container notebook.
Here is the rest of the event log from that page.
Event log
Server requested
2021-06-04T19:58:47Z [Normal] Successfully assigned phys101/jupyter-admin1 to gke-tf-jh-cluster-user-pool-40bd378f-sjj4
2021-06-04T19:58:55Z [Normal] AttachVolume.Attach succeeded for volume "pvc-03c4e983-363e-401f-8616-e7df5c64fdac"
2021-06-04T19:59:04Z [Normal] Container image "jupyterhub/k8s-network-tools:0.11.1" already present on machine
2021-06-04T19:59:05Z [Normal] Created container block-cloud-metadata
2021-06-04T19:59:05Z [Normal] Started container block-cloud-metadata
2021-06-04T19:59:05Z [Normal] Pulling image "jupyter/datascience-notebook:latest"
2021-06-04T19:59:05Z [Normal] Successfully pulled image "jupyter/datascience-notebook:latest"
2021-06-04T19:59:05Z [Normal] Created container notebook
2021-06-04T19:59:05Z [Normal] Started container notebook
I see the pod come up and reach a Running 1/1 state, and then it terminates.
At one point I saw the error below; commenting out userPods.nodeAffinity seems to have removed that message.
0/3 nodes are available: 1 pod has unbound immediate persistentvolumeclaims, 2 node(s) didn't match node selector.
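As a sketch of how to tell which half of that message still applies (not from the original post): check whether the claim is bound and whether any nodes carry the label that matchNodePurpose: require selects on; hub.jupyter.org/node-purpose is the label the zero-to-jupyterhub chart normally uses.

# is the user's claim bound, and to which storage class?
kubectl get pvc -n phys101
kubectl describe pvc -n phys101

# which nodes carry the user-pod purpose label?
kubectl get nodes -L hub.jupyter.org/node-purpose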
Here's the rest of my config YAML.
proxy:
  secretToken:
  service:
    type: NodePort

ingress:
  enabled: true
  pathSuffix: "/*"
  hosts:
    - "<DOMAIN.COM>"
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "jh-lb-global-test"
    networking.gke.io/managed-certificates: "google-managed-ssl-certificate"

hub:
  baseUrl: /${class-section}
  config:
    Authenticator:
      admin_users:
        - admin1
        - admin2
      allowed_users:
        - user1
        - user2
    DummyAuthenticator:
      password: <PASSWORD>
    JupyterHub:
      authenticator_class: dummy

singleuser:
  startTimeout: 120
  profileList:
    - display_name: "Minimal environment"
      description: "To avoid too much bells and whistles: Python."
      default: true
    - display_name: "Datascience environment"
      description: "If you want the additional bells and whistles: Python, R, and Julia."
      kubespawner_override:
        image: jupyter/datascience-notebook:latest
    - display_name: "Spark environment"
      description: "The Jupyter Stacks spark image!"
      kubespawner_override:
        image: jupyter/all-spark-notebook:latest
  memory:
    limit: 1G
    guarantee: 1G
  cpu:
    limit: .5
    guarantee: .5
  image:
    # You should replace the "latest" tag with a fixed version from:
    # https://hub.docker.com/r/jupyter/datascience-notebook/tags/
    # Inspect the Dockerfile at:
    # https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook/Dockerfile
    name: jupyter/datascience-notebook
    pullPolicy: Always
    tag: latest
  defaultUrl: "/lab"

scheduling:
  userScheduler:
    enabled: false
  podPriority:
    enabled: false
  # userPlaceholder:
  #   enabled: true
  #   replicas: 2
  userPods:
    nodeAffinity:
      matchNodePurpose: require
  # corePods:
  #   nodeAffinity:
  #     matchNodePurpose: require

cull:
  enabled: true
  timeout: 3600
  every: 3600

# prePuller:
#   hook:
#     enabled: false
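One more thing that may be worth ruling out (a sketch only, not from the original post): the chart also creates NetworkPolicy resources for the hub and the single-user pods, and if they interact badly with the cluster's network plugin they can block pod-to-hub traffic and produce this kind of hub API timeout. They can be switched off for a test spawn with two extra flags; the release name jhub and the values file name config.yaml below are placeholders:

helm upgrade jhub jupyterhub/jupyterhub \
  --version 0.11.1 \
  --values config.yaml \
  --set hub.networkPolicy.enabled=false \
  --set singleuser.networkPolicy.enabled=false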
As the subject says, I am doing the local install by running bash <(curl -Ls https://get.eucalyptus.cloud), but I am getting the following errors:
[Ansible] Installing Eucalyptus ansible package
Failed to set locale, defaulting to C
http://builds.midonet.org/midonet-5.2/stable/el7/repodata/repomd.xml: [Errno 12] Timeout on http://builds.midonet.org/midonet-5.2/stable/el7/repodata/repomd.xml: (28, 'Connection timed out after 30000 milliseconds')
Trying other mirror.
http://builds.midonet.org/midonet-5.2/stable/el7/repodata/repomd.xml: [Errno 12] Timeout on http://builds.midonet.org/midonet-5.2/stable/el7/repodata/repomd.xml: (28, 'Connection timed out after 30002 milliseconds')
Trying other mirror.
http://builds.midonet.org/midonet-5.2/stable/el7/repodata/repomd.xml: [Errno 12] Timeout on http://builds.midonet.org/midonet-5.2/stable/el7/repodata/repomd.xml: (28, 'Connection timed out after 30001 milliseconds')
Trying other mirror.
It says it is trying another mirror, but it doesn't appear to actually do so.
I tried to ping the domain but got no response, and navigating to the root domain in a web browser shows nothing. Is this something on my side, or is the host really down?
This is my first time looking at Eucalyptus.cloud
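A sketch of how to confirm from the install host whether the repo really is unreachable, and how to keep yum from blocking on it while testing (the repo id midonet is a guess; check the actual id in the .repo file that grep finds):

# can this host fetch the repo metadata at all?
curl -sv --max-time 15 -o /dev/null http://builds.midonet.org/midonet-5.2/stable/el7/repodata/repomd.xml

# find the repo definition and disable it while testing (repo id is an assumption)
grep -rl builds.midonet.org /etc/yum.repos.d/
yum-config-manager --disable midonet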
If you have a web server lying around, you can host the packages yourself and install them locally with a playbook like this:
---
- name: make sure destination dir exists
  file:
    path: '/tmp/midonet/'
    state: directory
  tags:
    - localinstall

- name: download a copy of the packages that should have been in the midonet repo
  get_url:
    url: 'http://pxe.server.lan/packages/midonet/{{ item }}'
    dest: '/tmp/midonet/'
  with_items:
    - libzookeeper-3.4.8-4.x86_64.rpm
    - libzookeeper-devel-3.4.8-4.x86_64.rpm
    - lldpd-0.9.5-2.1.x86_64.rpm
    - lldpd-debuginfo-0.9.5-2.1.x86_64.rpm
    - lldpd-devel-0.9.5-2.1.x86_64.rpm
    - midolman-5.2.2-1.0.el7.noarch.rpm
    - midonet-cluster-5.2.2-1.0.el7.noarch.rpm
    - midonet-selinux-1.0-2.el7.centos.noarch.rpm
    - midonet-tools-5.2.2-1.0.el7.noarch.rpm
    - python-midonetclient-5.2.2-1.0.el7.noarch.rpm
    - python-zookeeper-3.4.8-4.x86_64.rpm
    - quagga-0.99.23-0.el7.midokura.x86_64.rpm
    - zkdump-1.05-1.noarch.rpm
    - zkpython-3.4.5-2.x86_64.rpm
    - zookeeper-3.4.8-4.x86_64.rpm
    - zookeeper-debuginfo-3.4.8-4.x86_64.rpm
    - zookeeper-lib-3.4.5-1.x86_64.rpm
  register: lidownlowd
  retries: 3
  delay: 3
  until: lidownlowd is not failed
  tags:
    - localinstall

- name: localinstall all packages from midonet repo
  shell: yum -y localinstall *.rpm
  args:
    chdir: '/tmp/midonet/'
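A possible way to run just those tasks, with placeholder inventory and playbook file names:

# run only the localinstall-tagged tasks against the affected host(s)
ansible-playbook -i inventory.ini midonet-localinstall.yml --tags localinstall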
Please refer to "unable to install eucalyptus in centos 7.9".
I'll give it a go and let you know the outcome.
Hello, I'm trying to install WordPress on Kubernetes. I installed the chart and ran:
helm install projectname-wordpress bitnami/wordpress --set allowOverrideNone=true
but even though the command produces output, the site isn't working and I can't log in.
When I run kubectl describe pods I get output like this:
Name: projectname-wordpress-785d4c4c84-xzt6m
Namespace: default
Priority: 0
Node: skalowalne-node-73a107/59.813.226.646
Start Time: Fri, 28 May 2021 02:00:35 +0200
Labels: app.kubernetes.io/instance=projectname-wordpress
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=wordpress
helm.sh/chart=wordpress-11.0.10
pod-template-hash=785d4c4c84
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/projectname-wordpress-785d4c4c84
Containers:
wordpress:
Container ID:
Image: docker.io/bitnami/wordpress:5.7.2-debian-10-r9
Image ID:
Ports: 8080/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 300m
memory: 512Mi
Liveness: http-get http://:http/wp-admin/install.php delay=120s timeout=5s period=10s #success=1 #failure=6
Readiness: http-get http://:http/wp-login.php delay=30s timeout=5s period=10s #success=1 #failure=6
Environment:
ALLOW_EMPTY_PASSWORD: yes
MARIADB_HOST: projectname-wordpress-mariadb
MARIADB_PORT_NUMBER: 3306
WORDPRESS_DATABASE_NAME: bitnami_wordpress
WORDPRESS_DATABASE_USER: bn_wordpress
WORDPRESS_DATABASE_PASSWORD: <set to the key 'mariadb-password' in secret 'projectname-wordpress-mariadb'> Optional: false
WORDPRESS_USERNAME: user
WORDPRESS_PASSWORD: <set to the key 'wordpress-password' in secret 'projectname-wordpress'> Optional: false
WORDPRESS_EMAIL: user@example.com
WORDPRESS_FIRST_NAME: FirstName
WORDPRESS_LAST_NAME: LastName
WORDPRESS_HTACCESS_OVERRIDE_NONE: no
WORDPRESS_ENABLE_HTACCESS_PERSISTENCE: no
WORDPRESS_BLOG_NAME: User's Blog!
WORDPRESS_SKIP_BOOTSTRAP: no
WORDPRESS_TABLE_PREFIX: wp_
WORDPRESS_SCHEME: http
WORDPRESS_EXTRA_WP_CONFIG_CONTENT:
WORDPRESS_AUTO_UPDATE_LEVEL: none
WORDPRESS_PLUGINS: none
Mounts:
/bitnami/wordpress from wordpress-data (rw,path="wordpress")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mxtw7 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
wordpress-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: projectname-wordpress
ReadOnly: false
default-token-mxtw7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mxtw7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 15m (x181 over 9h) kubelet Unable to attach or mount volumes: unmounted volumes=[wordpress-data], unattached volumes=[wordpress-data default-token-mxtw7]: timed out waiting for the condition
Warning FailedMount 3m49s (x58 over 8h) kubelet Unable to attach or mount volumes: unmounted volumes=[wordpress-data], unattached volumes=[default-token-mxtw7 wordpress-data]: timed out waiting for the condition
Warning FailedAttachVolume 2m40s (x139 over 9h) attachdetach-controller AttachVolume.Attach failed for volume "ovh-managed-kubernetes-do2ymc-pvc-80079ec2-e6f9-4210-852e-04fa286f714c" : attachdetachment timeout for volume 3b160677-40e8-4170-9cc3-cdd58e230942
Name: projectname-wordpress-mariadb-0
Namespace: default
Priority: 0
Node: skalowalne-node-f1da93/59.83.226.180
Start Time: Fri, 28 May 2021 02:00:27 +0200
Labels: app.kubernetes.io/component=primary
app.kubernetes.io/instance=projectname-wordpress
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=mariadb
controller-revision-hash=projectname-wordpress-mariadb-85d4cb8f7
helm.sh/chart=mariadb-9.3.11
statefulset.kubernetes.io/pod-name=projectname-wordpress-mariadb-0
Annotations: checksum/configuration: 878384c0d68b5abc46d5d5d719a9e83aa911941710552c3dfcebd48203ce5d9f
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/projectname-wordpress-mariadb
Containers:
mariadb:
Container ID:
Image: docker.io/bitnami/mariadb:10.5.10-debian-10-r0
Image ID:
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Liveness: exec [/bin/bash -ec password_aux="${MARIADB_ROOT_PASSWORD:-}"
if [[ -f "${MARIADB_ROOT_PASSWORD_FILE:-}" ]]; then
password_aux=$(cat "$MARIADB_ROOT_PASSWORD_FILE")
fi
mysqladmin status -uroot -p"${password_aux}"
] delay=120s timeout=1s period=10s #success=1 #failure=3
Readiness: exec [/bin/bash -ec password_aux="${MARIADB_ROOT_PASSWORD:-}"
if [[ -f "${MARIADB_ROOT_PASSWORD_FILE:-}" ]]; then
password_aux=$(cat "$MARIADB_ROOT_PASSWORD_FILE")
fi
mysqladmin status -uroot -p"${password_aux}"
] delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
BITNAMI_DEBUG: false
MARIADB_ROOT_PASSWORD: <set to the key 'mariadb-root-password' in secret 'projectname-wordpress-mariadb'> Optional: false
MARIADB_USER: bn_wordpress
MARIADB_PASSWORD: <set to the key 'mariadb-password' in secret 'projectname-wordpress-mariadb'> Optional: false
MARIADB_DATABASE: bitnami_wordpress
Mounts:
/bitnami/mariadb from data (rw)
/opt/bitnami/mariadb/conf/my.cnf from config (rw,path="my.cnf")
/var/run/secrets/kubernetes.io/serviceaccount from projectname-wordpress-mariadb-token-92mm2 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-projectname-wordpress-mariadb-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: projectname-wordpress-mariadb
Optional: false
projectname-wordpress-mariadb-token-92mm2:
Type: Secret (a volume populated by a Secret)
SecretName: projectname-wordpress-mariadb-token-92mm2
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 19m (x41 over 8h) kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[config projectname-wordpress-mariadb-token-92mm2 data]: timed out waiting for the condition
Warning FailedMount 9m51s (x36 over 8h) kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[projectname-wordpress-mariadb-token-92mm2 data config]: timed out waiting for the condition
Warning FailedMount 5m21s (x161 over 9h) kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data config projectname-wordpress-mariadb-token-92mm2]: timed out waiting for the condition
Warning FailedAttachVolume 2m48s (x139 over 9h) attachdetach-controller AttachVolume.Attach failed for volume "ovh-managed-kubernetes-do2ymc-pvc-fad9b535-f6d5-4e71-9e47-3a555936c546" : attachdetachment timeout for volume d96dbb2d-2200-48bd-940d-74dc0c3b5128
UPDATE: I don't have a firewall enabled on the cloud machine. I'm using OVH services.
What should I do to make it work?
The failed events look exactly like this right after trying to deploy WordPress:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 5m19s default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 5m19s default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 5m12s default-scheduler Successfully assigned default/projectname-wordpress-5466b7b45c-rzx9h to standard-node-fe7236
Warning FailedMount 3m10s kubelet Unable to attach or mount volumes: unmounted volumes=[wordpress-data], unattached volumes=[default-token-mxtw7 wordpress-data]: timed out waiting for the condition
Warning FailedAttachVolume 72s (x2 over 3m13s) attachdetach-controller AttachVolume.Attach failed for volume "ovh-managed-kubernetes-do2ymc-pvc-3e3686eb-6cf5-4697-99b0-0689bbd7d0a9" : attachdetachment timeout for volume f8b78a8d-f0d8-4dcb-bcae-ec84fb7d82e4
Warning FailedMount 56s kubelet Unable to attach or mount volumes: unmounted volumes=[wordpress-data], unattached volumes=[wordpress-data default-token-mxtw7]: timed out waiting for the condition
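A sketch of what can be checked on the cluster side when the attach times out like this (the pod name is taken from the events above):

# is the claim bound, and does the PV exist?
kubectl get pvc
kubectl get pv

# VolumeAttachment objects show where the storage controller thinks each volume is attached
kubectl get volumeattachments

# deleting the pod makes the controller retry the attach when the pod is recreated
kubectl delete pod projectname-wordpress-5466b7b45c-rzx9h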
Logs from first pod
mariadb 16:49:02.01
mariadb 16:49:02.01 Welcome to the Bitnami mariadb container
mariadb 16:49:02.01 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mariadb
mariadb 16:49:02.02 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mariadb/issues
mariadb 16:49:02.02 Send us your feedback at containers@bitnami.com
mariadb 16:49:02.02
mariadb 16:49:02.02 INFO  ==> ** Starting MariaDB setup **
mariadb 16:49:02.07 INFO  ==> Validating settings in MYSQL_*/MARIADB_* env vars
mariadb 16:49:02.07 INFO  ==> Initializing mariadb database
mariadb 16:49:02.09 INFO  ==> Using persisted data
mariadb 16:49:02.10 INFO  ==> Running mysql_upgrade
mariadb 16:49:02.10 INFO  ==> Starting mariadb in background
mariadb 16:49:03.14 INFO  ==> Stopping mariadb
Logs from second pod
Welcome to the Bitnami wordpress container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-wordpress
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-wordpress/issues
Send us your feedback at containers@bitnami.com

WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
nami    INFO  Initializing apache
nami    INFO  apache successfully initialized
nami    INFO  Initializing php
nami    INFO  php successfully initialized
nami    INFO  Initializing mysql-client
nami    INFO  mysql-client successfully initialized
nami    INFO  Initializing wordpress
wordpre INFO  ==> Preparing Varnish environment
wordpre INFO  ==> Preparing Apache environment
wordpre INFO  ==> Preparing PHP environment
mysql-c INFO  Trying to connect to MySQL server
Error executing 'postInstallation': Failed to connect to student-mariadb:3306 after 36 tries
I reset Kubernetes and the same code started to work. I did everything the same, so I don't know why. Thank you guys for the support.
I'm trying to follow along with this blog post about using Docker with R.
I followed the basic Docker setup steps and am able to run the hello-world image.
I'm on an old 2009 Mac and had to use Docker Toolbox.
I'm in a place with a weak internet connection and am using a personal hotspot.
Each time I try to run docker run --rm -p 8787:8787 rocker/verse I wait for a few minutes and see a downloading message, then I get a message "docker: unauthorized: authentication required."
I found this separate documentation which advised me to add a password:
docker run --rm -p 8787:8787 -e PASSWORD=blah rocker/rstudio
But I got the same result "docker: unauthorized: authentication required."
I did some Google searching and found some posts both here on SO and on Github but was unable to identify what is causing this error in my specific case.
I suspect my weak internet connection might have something to do with it since I seem to be able to download for about 10 or 15 minutes before seeing this message.
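One thing worth trying as a sketch: separate the pull from the run and clear any stale registry credentials, so the flaky connection only has to survive the pull step (layers that already downloaded completely are kept between retries):

# drop any stale Docker Hub credentials, then log in again if you have an account
docker logout
docker login

# pull on its own; re-run this until it completes
docker pull rocker/verse

# then run the already-downloaded image
docker run --rm -p 8787:8787 -e PASSWORD=blah rocker/verse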
Here is Docker info:
Macs-MacBook:~ macuser$ docker info
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 2
Server Version: 18.09.6
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.14.116-boot2docker
Operating System: Boot2Docker 18.09.6 (TCL 8.2.1)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.951GiB
Name: default
ID: XMCE:OBLV:CKEX:EGIB:PHQ7:MLHF:ZJSA:PGYN:OIMM:JI67:ETCI:JKBH
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Does anyone know where I can look next in order to pull and/or run the rocker image?
Since upgrading to 6.8.7 using the RPM on RHEL 7, systemctl start artifactory fails.
Looking in the log, it's failing at this point:
2019-03-16 09:50:28,952 [art-init] [INFO ] (o.a.s.a.ArtifactoryAccessClientConfigStore:593) - Using Access Server URL: http://localhost:8040/access (bundled) source: detected
2019-03-16 09:50:29,379 [art-init] [INFO ] (o.a.s.a.AccessServiceImpl:353) - Waiting for access server...
2019-03-16 09:50:30,625 [art-init] [WARN ] (o.j.a.c.AccessClientHttpException:41) - Unrecognized ErrorsModel by Access. Original message: Failed on executing /api/v1/system/ping, with response: Not Found
2019-03-16 09:50:30,634 [art-init] [ERROR] (o.a.s.a.AccessServiceImpl:364) - Could not ping access server: {}
org.jfrog.access.client.AccessClientHttpException: HTTP response status 404:Failed on executing /api/v1/system/ping, with response: Not Found
Previously we would get
2019-03-13 09:56:06,293 [art-init] [INFO ] (o.a.s.a.ArtifactoryAccessClientConfigStore:593) - Using Access Server URL: http://localhost:8040/access (bundled) source: detected
2019-03-13 09:56:06,787 [art-init] [INFO ] (o.a.s.a.AccessServiceImpl:353) - Waiting for access server...
2019-03-13 09:56:24,068 [art-init] [INFO ] (o.a.s.a.AccessServiceImpl:360) - Got response from Access server after 17280 ms, continuing.
Any suggestions on debugging whether this access server has started?
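A few checks that show whether the bundled Access service ever came up (a sketch using the default RPM paths; the log location varies between versions, hence the find):

# ping Access directly on the bundled Tomcat port (this is the URL from the log above)
curl -v http://localhost:8040/access/api/v1/system/ping

# did Tomcat actually explode access.war?
ls -l /opt/jfrog/artifactory/tomcat/webapps/access/

# locate the Access service's own log, wherever this version puts it
find /var/opt/jfrog/artifactory -name 'access*.log' 2>/dev/null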
Further to this, I found logs showing that when it worked, it used to start a jar file:
2019-03-08 09:19:11,609 [localhost-startStop-2] [INFO ] (o.j.a.AccessApplication:48) - Starting AccessApplication v4.1.48 on hostname.nexor.co.uk with PID 5913 (/opt/jfrog/artifactory/tomcat/webapps/access/WEB-INF/lib/access-application-4.1.48.jar started by artifactory in /)
Now when I look, I find /opt/jfrog/artifactory/tomcat/webapps/access/ is empty, so there is no jar file to run.
The RPM did deliver an access.war file, and that is there:
$ ls -l /opt/jfrog/artifactory/webapps
total 104692
-rwxrwxr-x. 1 root root 51099759 Mar 14 12:14 access.war
-rwxrwxr-x. 1 root root 56099348 Mar 14 12:14 artifactory.war
Is there some manual step I can run to expand this war file to get the jar? (As you can guess, I am not up on my Java apps.)
Eventually I got it working by deleting the empty /opt/jfrog/artifactory/tomcat/webapps/access directory; a new one containing the required jar files was created when the service started again.
Not sure why this happened, but that got it working for me.
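For anyone hitting the same thing, the fix above corresponds roughly to this sequence (a sketch using the default RPM paths from the question):

systemctl stop artifactory
# remove the empty exploded webapp so Tomcat re-extracts access.war on the next start
rm -rf /opt/jfrog/artifactory/tomcat/webapps/access
systemctl start artifactory

# confirm the jars are back
ls /opt/jfrog/artifactory/tomcat/webapps/access/WEB-INF/lib | head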
I had a similar problem on CentOS 7; the solution was to downgrade the newly updated Java packages by running this command:
yum downgrade java-1.8.0*
After that restart the artifactory:
systemctl restart artifactory
Try changing the port number in your tomcat/conf/server.xml from 8081 to a different, unused port. Then restart the Artifactory service so the change takes effect.
This Helm chart (https://github.com/helm/charts/tree/master/stable/wordpress) should install WordPress on Kubernetes with:
helm install stable/wordpress
However, I get:
kubectl get pods
NAME READY STATUS RESTARTS AGE
wp-1-mariadb-0 0/1 CreateContainerConfigError 0 30m
wp-1-wordpress-7bff96d46-4bss6 0/1 CrashLoopBackOff 8 30m
and
k logs wp-1-mariadb-0
Error from server (BadRequest): container "mariadb" in pod "wp-1-mariadb-0" is waiting to start: CreateContainerConfigError
and
kubectl describe pod wp-1-mariadb-0
Normal Pulled 5s (x5 over 48s) kubelet, minikube Container image "docker.io/bitnami/mariadb:10.1.36-debian-9" already present on machine
Warning Failed 5s (x5 over 48s) kubelet, minikube Error: failed to prepare subPath for volumeMount "config" of container "mariadb"
Any suggestions as to what the problem might be?
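Not an answer, but a sketch of where to look next: the full event history and the versions involved usually help narrow down "failed to prepare subPath" errors, since subPath handling changed across Kubernetes releases and some older kubelet/Docker combinations had trouble preparing subPath mounts:

# full event history for the failing pod and the cluster
kubectl describe pod wp-1-mariadb-0
kubectl get events --sort-by=.lastTimestamp | tail -n 20

# versions involved
minikube version
kubectl version --short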