I have installed X-Pack on Elasticsearch and started Elasticsearch using the command bin/elasticsearch.
I have also installed X-Pack on Kibana, but while starting Kibana using bin/kibana,
I get the output below and Kibana is not working:
log [08:40:56.376] [info][status][plugin:kibana#5.4.0] Status changed from uninitialized to green - Ready
log [08:40:56.526] [info][status][plugin:elasticsearch#5.4.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [08:40:56.541] [info][status][plugin:xpack_main#5.4.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [08:40:56.582] [warning] You're running Kibana 5.4.0 with some different versions of Elasticsearch. Update Kibana or Elasticsearch to the same version to prevent compatibility issues: v5.4.1 # 10.1.1.121:9200 (10.1.1.121)
log [08:40:56.609] [info][status][plugin:graph#5.4.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [08:40:56.629] [info][status][plugin:monitoring#5.4.0] Status changed from uninitialized to green - Ready
log [08:40:56.636] [warning][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml
log [08:40:56.645] [info][status][plugin:reporting#5.4.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [08:40:56.691] [info][status][plugin:elasticsearch#5.4.0] Status changed from yellow to green - Kibana index ready
log [08:40:56.702] [info][status][plugin:security#5.4.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [08:40:56.703] [warning][security] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml
log [08:40:56.725] [warning][security] Session cookies will be transmitted over insecure connections. This is not recommended.
log [08:40:56.814] [info][status][plugin:searchprofiler#5.4.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [08:40:56.818] [warning][license][xpack] License information could not be obtained from Elasticsearch. [illegal_argument_exception] No endpoint or operation is available at [_xpack] :: {"path":"/_xpack","statusCode":400,"response":"{\"error\":{\"root_cause\":[{\"type\":\"illegal_argument_exception\",\"reason\":\"No endpoint or operation is available at [_xpack]\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"No endpoint or operation is available at [_xpack]\"},\"status\":400}"}
log [08:40:56.821] [error][status][plugin:xpack_main#5.4.0] Status changed from yellow to red - X-Pack plugin is not installed on Elasticsearch cluster
log [08:40:56.823] [error][status][plugin:graph#5.4.0] Status changed from yellow to red - X-Pack plugin is not installed on Elasticsearch cluster
log [08:40:56.824] [error][status][plugin:reporting#5.4.0] Status changed from yellow to red - X-Pack plugin is not installed on Elasticsearch cluster
log [08:40:56.825] [error][status][plugin:security#5.4.0] Status changed from yellow to red - X-Pack plugin is not installed on Elasticsearch cluster
log [08:40:56.826] [error][status][plugin:searchprofiler#5.4.0] Status changed from yellow to red - X-Pack plugin is not installed on Elasticsearch cluster
log [08:40:56.833] [error][status][plugin:ml#5.4.0] Status changed from uninitialized to red - X-Pack plugin is not installed on Elasticsearch cluster
log [08:40:56.865] [info][status][plugin:ml#5.4.0] Status changed from red to yellow - Waiting for Elasticsearch
log [08:40:56.879] [error][status][plugin:tilemap#5.4.0] Status changed from uninitialized to red - X-Pack plugin is not installed on Elasticsearch cluster
log [08:40:56.884] [error][status][plugin:watcher#5.4.0] Status changed from uninitialized to red - X-Pack plugin is not installed on Elasticsearch cluster
log [08:40:56.926] [info][status][plugin:console#5.4.0] Status changed from uninitialized to green - Ready
log [08:40:56.936] [info][status][plugin:ml#5.4.0] Status changed from yellow to green - Ready
log [08:40:56.945] [info][status][plugin:metrics#5.4.0] Status changed from uninitialized to green - Ready
log [08:40:57.120] [info][status][plugin:timelion#5.4.0] Status changed from uninitialized to green - Ready
log [08:40:57.126] [fatal] Port 5601 is already in use. Another instance of Kibana may be running!
FATAL Port 5601 is already in use. Another instance of Kibana may be running!
As the error states, you already have something running on port 5601.
If you're on Linux, you can use lsof to identify the process using the port: lsof -i :5601.
You can then kill that process using the PID shown in the results.
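A minimal sketch of that sequence (the PID placeholder is whatever lsof reports on your machine):
```
# Find the process bound to Kibana's default port
lsof -i :5601          # note the PID column
# Stop it; add -9 only if it refuses to exit
kill <PID>
```
If the port is held by a stray Kibana from an earlier run, killing it and re-running bin/kibana is usually enough.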
Setup: Azure IoT Edge running on Raspberry Pi Linux (arm32v7), on a Raspberry Pi 4.
IoT Edge version: iotedge 1.4.3
I signed in to the Azure container registry from the edge device, built and pushed a custom module to the container registry, then pulled that module image from the registry and tried to run it using the docker run <image> command.
But it shows an error:
Unhandled exception. System.InvalidOperationException: Environment variable IOTEDGE_WORKLOADURI is required.
at Microsoft.Azure.Devices.Client.Edge.EdgeModuleClientFactory.CreateInternalClientFromEnvironmentAsync()
at SampleModuletest.ModuleBackgroundService.ExecuteAsync(CancellationToken cancellationToken) in /app/ModuleBackgroundService.cs:line 23
at Microsoft.Extensions.Hosting.Internal.Host.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.Run(IHost host)
at Program.<Main>$(String[] args) in /app/Program.cs:line 7
I found a post (link), but I'm guessing it's not the same problem; the output is different.
I have some doubts, and it would be very helpful if anyone could clear them up:
1. What are the possible methods to deploy Azure IoT Edge modules?
2. Is it possible to deploy a module from an edge device using a pulled module image from the container registry?
Thanks in advance. Any suggestions would be appreciated.
Currently running edge modules:
```
NAME       STATUS    DESCRIPTION    CONFIG
edgeAgent  running   Up an hour     mcr.microsoft.com/azureiotedge-agent:1.4
edgeHub    running   Up an hour     mcr.microsoft.com/azureiotedge-hub:1.4
```
Docker images: (screenshot)
iotedge check:
```
Configuration checks (aziot-identity-service)
---------------------------------------------
√ keyd configuration is well-formed - OK
√ certd configuration is well-formed - OK
√ tpmd configuration is well-formed - OK
√ identityd configuration is well-formed - OK
√ daemon configurations up-to-date with config.toml - OK
√ identityd config toml file specifies a valid hostname - OK
√ aziot-identity-service package is up-to-date - OK
√ host time is close to reference time - OK
√ preloaded certificates are valid - OK
√ keyd is running - OK
√ certd is running - OK
√ identityd is running - OK
√ read all preloaded certificates from the Certificates Service - OK
√ read all preloaded key pairs from the Keys Service - OK
√ check all EST server URLs utilize HTTPS - OK
√ ensure all preloaded certificates match preloaded private keys with the same ID - OK
Connectivity checks (aziot-identity-service)
--------------------------------------------
√ host can connect to and perform TLS handshake with iothub AMQP port - OK
√ host can connect to and perform TLS handshake with iothub HTTPS / WebSockets port - OK
√ host can connect to and perform TLS handshake with iothub MQTT port - OK
Configuration checks
--------------------
√ aziot-edged configuration is well-formed - OK
√ configuration up-to-date with config.toml - OK
√ container engine is installed and functional - OK
√ configuration has correct URIs for daemon mgmt endpoint - OK
√ aziot-edge package is up-to-date - OK
√ container time is close to host time - OK
√ DNS server - OK
‼ production readiness: logs policy - Warning
Container engine is not configured to rotate module logs which may cause it run out of disk space.
Please see https://aka.ms/iotedge-prod-checklist-logs for best practices.
You can ignore this warning if you are setting log policy per module in the Edge deployment.
‼ production readiness: Edge Agent's storage directory is persisted on the host filesystem - Warning
The edgeAgent module is not configured to persist its /tmp/edgeAgent directory on the host filesystem.
Data might be lost if the module is deleted or updated.
Please see https://aka.ms/iotedge-storage-host for best practices.
‼ production readiness: Edge Hub's storage directory is persisted on the host filesystem - Warning
The edgeHub module is not configured to persist its /tmp/edgeHub directory on the host filesystem.
Data might be lost if the module is deleted or updated.
Please see https://aka.ms/iotedge-storage-host for best practices.
√ Agent image is valid and can be pulled from upstream - OK
√ proxy settings are consistent in aziot-edged, aziot-identityd, moby daemon and config.toml - OK
Connectivity checks
-------------------
√ container on the default network can connect to upstream AMQP port - OK
√ container on the default network can connect to upstream HTTPS / WebSockets port - OK
√ container on the IoT Edge module network can connect to upstream AMQP port - OK
√ container on the IoT Edge module network can connect to upstream HTTPS / WebSockets port - OK
32 check(s) succeeded.
3 check(s) raised warnings. Re-run with --verbose for more details.
2 check(s) were skipped due to errors from other checks. Re-run with --verbose for more details.
```
When you execute the docker run <image> command, it will attempt to spin up your module with no additional configuration. However, you're using the Azure IoT Edge SDK, which requires additional settings. One of those is the IOTEDGE_WORKLOADURI environment variable.
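Those variables are normally injected by the IoT Edge runtime when it creates the module container, which is why a bare docker run fails. You can see what gets injected by inspecting a runtime-managed module, for example (a sketch; edgeHub is taken from your module list above):
```
# Print the environment variables the runtime injected into the edgeHub container
docker inspect edgeHub --format '{{json .Config.Env}}' | tr ',' '\n'
```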
To answer your questions directly:
What are the possible methods to deploy Azure IoT Edge modules?
There's one way of doing this on an Azure IoT Edge device: creating a deployment manifest in your IoT Hub. That deployment manifest tells the Azure IoT Edge runtime on your device to pull the correct containers and set them up. You can learn how to do that here.
Is it possible to deploy a module from an edge device using a pulled module image from the container registry?
I'm going to assume you mean on an edge device, not from one. You can execute a docker pull command to get the container image, but deploying it really needs to happen with the aforementioned deployment manifest.
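For example, one common way to apply such a manifest from the command line is the Azure CLI (a sketch; the hub name, device ID, and manifest file name are placeholders):
```
# Requires the azure-iot extension for the Azure CLI
az extension add --name azure-iot
# Apply a deployment manifest to a single device
az iot edge set-modules \
  --hub-name MyIotHub \
  --device-id my-edge-device \
  --content ./deployment.json
```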
I just created a managed 2-node Kubernetes (v1.22.8) cluster on DigitalOcean (DO).
After installing WordPress using the Bitnami Helm chart and then installing a WP plugin, the site became unreachable.
Looking at the DO K8s dashboard in the deployment section, the wordpress deployment shows the following errors:
0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
AttachVolume.Attach failed for volume "pvc-c859847e-f250-4e71-9ed3-63c92cc01f50" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
MountVolume.MountDevice failed for volume "pvc-c859847e-f250-4e71-9ed3-63c92cc01f50" : rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-c859847e-f250-4e71-9ed3-63c92cc01f50' output: "mke2fs 1.45.5 (07-Jan-2020)\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-c859847e-f250-4e71-9ed3-63c92cc01f50 does not exist and no size was specified.\n"
Readiness probe failed: HTTP probe failed with statuscode: 404
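For reference, here is how the claim and the events behind these messages can be inspected (a sketch; <claim-name> comes from the kubectl get pvc output):
```
kubectl get pvc -A                           # list claims and the volumes they bound
kubectl describe pvc <claim-name>            # events section explains Pending/attach failures
kubectl get events --sort-by=.lastTimestamp  # recent cluster events, including mount errors
```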
As I'm quite new to K8s, I don't know how to troubleshoot this.
Any help would be much appreciated.
UPDATE
I was able to reproduce the error and found what triggers it.
The Bitnami WordPress chart installs several WP plugins by default. As soon as I try to delete them, the error shows up and the persistent volume gets corrupted...
Is this a bug, or is it standard behavior?
I have installed jfrog-artifactory-oss (v7.31.11-73111900.x86_64) on Fedora 35 and enabled it as a system service to start at boot. But whenever I boot the OS, the server never starts properly, and I always need to kill the PID of the running Artifactory process. If I then do sudo service artifactory restart, the server comes up cleanly and everything is good. How can I avoid having to do this little dance? Is there something about OS boot-up that is throwing Artifactory off?
Looking at console.log when the server is not running properly after boot-up, I see entries like:
2022-01-27T08:35:38.383Z [shell] [INFO] [] [artifactoryManage.sh:69] [main] - Artifactory Tomcat already started
2022-01-27T08:35:43.084Z [jfac] [WARN] [d84d2d549b318495] [o.j.c.ExecutionUtils:165] [pool-9-thread-2] - Retry 900 Elapsed 7.56 minutes failed: Registration with router on URL http://localhost:8046 failed with error: UNAVAILABLE: io exception. Trying again
That shows the server is not running properly, but it doesn't give a clear idea of what to try next. Any suggestions?
Two things to check:
1. How is the artifactory.service file in the systemd directory configured?
2. Whenever the OS is rebooted, what error appears in the logs? Check all the logs.
Hint: from the warning shared, it seems the Router service is not able to start when the OS is rebooted, so whenever the OS is rebooted and the issue comes up, check router-service.log for any errors/warnings.
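A sketch of how to gather both pieces of information (the log path is an assumption; it varies with your JFROG_HOME):
```
# Show the unit file systemd actually loads, including any overrides
systemctl cat artifactory.service
# Service logs from the current boot only
journalctl -u artifactory.service -b --no-pager
# Router log; adjust the prefix to your JFROG_HOME
sudo tail -n 100 /opt/jfrog/artifactory/var/log/router-service.log
```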
After understanding how to add an OSPD scanner, verify it, etc., I thought I could finally use it, but I got an error in the UI when adding it to a task.
In my case, I run OpenVAS 9 on Debian 9 and I'm trying to include a w3af scanner, but I get the same issue with every OSP scanner I add.
My pip freeze:
ospd==1.2.0
ospd-debsecan==1.2b1
ospd-nmap==1.0b1
ospd-w3af==1.0.0
Note that w3af is the example here, but the issue is the same for the debsecan and nmap scanners.
My openvas-check-setup:
Step 1: Checking OpenVAS Scanner ...
OK: OpenVAS Scanner is present in version 5.1.1.
OK: redis-server is present in version v=3.2.6.
OK: scanner (kb_location setting) is configured properly using the redis-server socket: /tmp/redis.sock
OK: redis-server is running and listening on socket: /tmp/redis.sock.
OK: redis-server configuration is OK and redis-server is running.
OK: NVT collection in /usr/local/var/lib/openvas/plugins contains 47727 NVTs.
WARNING: Signature checking of NVTs is not enabled in OpenVAS Scanner.
SUGGEST: Enable signature checking (see http://www.openvas.org/trusted-nvts.html).
OK: The NVT cache in /usr/local/var/cache/openvas contains 47727 files for 47727 NVTs.
Step 2: Checking OpenVAS Manager ...
OK: OpenVAS Manager is present in version 7.0.2.
OK: OpenVAS Manager database found in /usr/local/var/lib/openvas/mgr/tasks.db.
OK: Access rights for the OpenVAS Manager database are correct.
OK: sqlite3 found, extended checks of the OpenVAS Manager installation enabled.
OK: OpenVAS Manager database is at revision 184.
OK: OpenVAS Manager expects database at revision 184.
OK: Database schema is up to date.
OK: OpenVAS Manager database contains information about 47727 NVTs.
OK: At least one user exists.
OK: OpenVAS SCAP database found in /usr/local/var/lib/openvas/scap-data/scap.db.
OK: OpenVAS CERT database found in /usr/local/var/lib/openvas/cert-data/cert.db.
OK: xsltproc found.
Step 3: Checking user configuration ...
WARNING: Your password policy is empty.
SUGGEST: Edit the /usr/local/etc/openvas/pwpolicy.conf file to set a password policy.
Step 4: Checking Greenbone Security Assistant (GSA) ...
OK: Greenbone Security Assistant is present in version 7.0.2.
OK: Your OpenVAS certificate infrastructure passed validation.
Step 5: Checking OpenVAS CLI ...
OK: OpenVAS CLI version 1.4.5.
Step 6: Checking Greenbone Security Desktop (GSD) ...
SKIP: Skipping check for Greenbone Security Desktop.
Step 7: Checking if OpenVAS services are up and running ...
OK: netstat found, extended checks of the OpenVAS services enabled.
OK: OpenVAS Scanner is running and listening on a Unix domain socket.
OK: OpenVAS Manager is running and listening on a Unix domain socket.
OK: Greenbone Security Assistant is listening on port 443, which is the default port.
Step 8: Checking nmap installation ...
WARNING: Your version of nmap is not fully supported: 7.40
SUGGEST: You should install nmap 5.51 if you plan to use the nmap NSE NVTs.
Step 10: Checking presence of optional tools ...
OK: pdflatex found.
WARNING: PDF generation failed, most likely due to missing LaTeX packages. The PDF report format will not work.
SUGGEST: Install required LaTeX packages.
OK: ssh-keygen found, LSC credential generation for GNU/Linux targets is likely to work.
OK: rpm found, LSC credential package generation for RPM based targets is likely to work.
OK: alien found, LSC credential package generation for DEB based targets is likely to work.
OK: nsis found, LSC credential package generation for Microsoft Windows targets is likely to work.
To create the scanner in OpenVAS, I use:
openvasmd --create-scanner="w3af" --scanner-host=127.0.0.1 --scanner-port=1235 --scanner-type="OSP" \
--scanner-ca-pub=/usr/local/var/lib/openvas/CA/cacert.pem \
--scanner-key-pub=/usr/local/var/lib/openvas/CA/clientcert.pem \
--scanner-key-priv=/usr/local/var/lib/openvas/private/CA/clientkey.pem
To run the ospd-w3af scanner, I use:
~# ospd-w3af -b 127.0.0.1 -p 1235 \
     -k /usr/local/var/lib/openvas/private/CA/clientkey.pem \
     -c /usr/local/var/lib/openvas/CA/clientcert.pem \
     --ca-file /usr/local/var/lib/openvas/CA/cacert.pem \
     -L DEBUG
When I verify the scanner with openvasmd --verify-scanner xxxxx, I get:
Scanner version: 2018.8.22.
Note: in the scanner's logs I get the following for every verify I run. I don't know whether it's related, and I haven't found a way to fix it:
2018-10-15 14:27:47,413 ospd.ospd: DEBUG: New connection from 127.0.0.1:60078
2018-10-15 14:27:49,430 ospd.ospd: DEBUG: Error: ('The read operation timed out',)
2018-10-15 14:27:49,433 ospd.ospd: DEBUG: 127.0.0.1:60078: Connection closed
So, with verification done, I want to create a task that uses this scanner, but I can't save it due to the error "Given scanner_type was invalid":
https://i.stack.imgur.com/fvIJd.png
I see 0 connections to the chosen scanner at that moment, and I can't find anything in the logs (maybe I'm not searching in the right place). I suspect the gsad UI is responsible, but I can't confirm it.
I don't know what to do, and if someone more expert than me (not very hard) could help, that'd be great :)
Thanks in advance.
I solved this issue by creating a scan configuration for the OSPD scanner (I thought it didn't need one, since it imports them).
I faced another issue concerning the ospd-w3af configuration: I couldn't create one because it needs ospd 1.0.0 installed. I modified the dependencies a few days ago, and it doesn't work with ospd 1.2.0.
Now I'm facing an issue where the scans don't start properly; they stop at 1%.
Getting OpenVAS 9 running on a new install of Ubuntu 18 was a pain. Once I got past all my errors by creating files and ln -s symlinks for the redis-server socket connections, my tasks crapped out at 1%. My fix was to install libopenvas-dev (sudo apt install libopenvas-dev); after that, scans worked and check-setup worked. check-setup reported no scanner, but openvassd was running and openvasmd --verify-scanner (uuid) showed the scanner.
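A sketch of that workaround (the socket path is an assumption; match it to the kb_location setting in openvassd.conf and the unixsocket line in redis.conf):
```
# Point the socket path the scanner expects at the one redis actually creates
sudo ln -s /var/run/redis/redis.sock /tmp/redis.sock
# The package that got scans past 1% in my case
sudo apt install libopenvas-dev
```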
I am installing ICP 2.1.0.1 and received an error at the task:
TASK [master: Waiting for MariaDB service to start]
msg: The MariaDB component failed to start.
After this message, the installation completed with a failed status.
We are installing ICP with 3 masters, 3 proxies, and 2 workers. We have one VIP for the masters and one for the proxies.
I tried the installation multiple times, and every attempt failed with the same error.
In prior cases of this error, the correct DB admin password was not used, so check the DB user and password to resolve the issue.
Would you validate whether each master host is able to access port 3306 on the other hosts?
If you run the install with .. install -vv | tee -a install-log.txt, do you get additional details as well?
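A quick way to run that port check from each master (hostnames are placeholders):
```
# Test TCP reachability of MariaDB's port on the other master hosts
nc -zv master2.example.com 3306
nc -zv master3.example.com 3306
```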
The error was solved by following the steps below.
Check whether kubelet is running:
Log in to your master node.
Run the following command to check kubelet status:
systemctl status kubelet
If kubelet is not running, run the following command to get the logs:
journalctl -u kubelet &> kubelet.log
We found the following error in kubelet.log:
Error: failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false.
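The fix matching that message is to disable swap on the node (a sketch; review /etc/fstab before relying on the sed edit):
```
# Turn swap off immediately
sudo swapoff -a
# Comment out swap entries so swap stays off after reboot
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
```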
We found this troubleshooting guidance at the first link below, and the solution at ICP issue 4651 (second link).
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0/troubleshoot/etcd_fails.html
https://github.ibm.com/IBMPrivateCloud/roadmap/issues/4651