ClickHouse default HTTP handlers not supported

I have been trying to run ClickHouse on an EC2 instance provisioned with Terraform. So far the instance runs well and I can reach the HTTP interface at localhost:8123. However, when I try to access localhost:8123/play I get the following message:
There is no handle /play
Use / or /ping for health checks.
Or /replicas_status for more sophisticated health checks.
Send queries from your program with POST method or GET /?query=...
Use clickhouse-client:
For interactive data analysis:
clickhouse-client
For batch query processing:
clickhouse-client --query='SELECT 1' > result
clickhouse-client < query > result
I don't understand why this is happening, as I was not getting this error when running locally.
When I check the status of the ClickHouse server I get the following output:
● clickhouse-server.service - ClickHouse Server
Loaded: loaded (/lib/systemd/system/clickhouse-server.service; enabled; vendor preset: enabled)
Mar 25 12:14:35 systemd[1]: Started ClickHouse Server.
Mar 25 12:14:35 clickhouse-server[11774]: Include not found: clickhouse_remote_servers
Mar 25 12:14:35 clickhouse-server[11774]: Include not found: clickhouse_compression
Mar 25 12:14:35 clickhouse-server[11774]: Logging warning to /var/log/clickhouse-server/clickhouse-server.log
Mar 25 12:14:35 clickhouse-server[11774]: Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Mar 25 12:14:35 clickhouse-server[11774]: Include not found: networks
Mar 25 12:14:35 clickhouse-server[11774]: Include not found: networks
Mar 25 12:14:37 clickhouse-server[11774]: Include not found: clickhouse_remote_servers
Mar 25 12:14:37 clickhouse-server[11774]: Include not found: clickhouse_compression
I don't know if this will help, but maybe it is related to the problem (the log files are empty).
Another question I have, unrelated to the problem above, is about how ClickHouse is organized, because the many articles about ClickHouse are not very clear to me. They often mention "nodes". My current understanding is that ClickHouse runs on servers that we group into clusters; inside a cluster we define shards, and each shard contains replicas, the so-called "nodes". Since we will be running in production, I just want to make sure that when we talk about "nodes" we mean containers acting as compute units, or whether it is something else entirely.
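To make my current understanding concrete, here is a minimal sketch of how I believe a cluster is declared in the server configuration (host names are invented for illustration), with each <replica> entry pointing at one clickhouse-server process:

<remote_servers>
    <my_cluster>                                 <!-- cluster name -->
        <shard>
            <replica>
                <host>ch-node-1.internal</host>  <!-- one clickhouse-server instance -->
                <port>9000</port>
            </replica>
            <replica>
                <host>ch-node-2.internal</host>
                <port>9000</port>
            </replica>
        </shard>
        <shard>
            <replica>
                <host>ch-node-3.internal</host>
                <port>9000</port>
            </replica>
            <replica>
                <host>ch-node-4.internal</host>
                <port>9000</port>
            </replica>
        </shard>
    </my_cluster>
</remote_servers>

In that sketch a "node" is simply a host (bare metal, VM or container) running one clickhouse-server instance, and the cluster/shard/replica structure is configuration layered on top of those processes.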
So far I've tried opening all ingress and egress ports, but it did not fix the problem. I've checked the ClickHouse documentation, which mentions custom HTTP endpoints, but nothing covers this error.
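From what I read about custom HTTP endpoints, the two things I plan to rule out next (assumptions on my part, not confirmed diagnoses) are a server package old enough to predate the /play UI, and a custom <http_handlers> section in config.xml that drops the default handlers. The first can be checked with a plain query over the HTTP port that already works, the second by restoring the defaults:

curl 'http://localhost:8123/?query=SELECT%20version()'

<!-- config.xml; the root tag is <clickhouse> on recent releases, <yandex> on older ones -->
<clickhouse>
    <http_handlers>
        <!-- custom rules, if any, go here -->
        <defaults/>   <!-- re-enables the built-in handlers, including /play -->
    </http_handlers>
</clickhouse>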

Related

Self-hosted GitLab server with RPi and pitunnel showing HTTP error 413 when trying to push

1. Problem
The git push command returns the following error if one file is larger than ~1MB:
Pushing to http://mygitlabserver.pitunnel.com/root/my_project.git
POST git-receive-pack (1163897 bytes)
error: RPC failed; HTTP 413 curl 22 The requested URL returned error: 413
fatal: the remote end hung up unexpectedly
fatal: the remote end hung up unexpectedly
Everything up-to-date
The server is an RPi 4 with an SSD attached, accessed via pitunnel (standard subscription).
The push fails if one file is larger than 1MB
The push returns no error even if the commit is 150MB (a lot of small files)
The push returns no error if an mp3 file of multiple MBs gets pushed.
2. Problem
Not really a problem in itself, but it may be related to the one above.
If a large project that was exported from gitlab.com is imported, it returns the same error:
413 Request Entity Too Large
nginx/1.10.3 (Ubuntu)
But only when connecting via pitunnel (link); it works if the project is uploaded over the local network.
Nginx seems to be the problem.
In the gitlab.rb file the following parameters are set and the gitlab service was restarted according to the gitlab docs:
nginx['enable'] = true
nginx['client_max_body_size'] = '900m'
PS: The repo will use git LFS after this problem is solved.
For everyone with a similar problem:
Pitunnel was the problem.
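In other words, the 413 came from a proxy in front of the GitLab box, not from GitLab's own nginx: client_max_body_size has to be raised on every nginx (or other proxy) the push passes through, and pitunnel's endpoint is not under your control. For anyone tunnelling through a reverse proxy they do control instead, the relevant part of that proxy's configuration would look roughly like this (host name, port and size are placeholders):

server {
    listen 80;
    server_name mygitlabserver.example.com;   # placeholder

    # nginx defaults to 1m, which matches the ~1MB failure threshold above
    client_max_body_size 900m;

    location / {
        proxy_pass http://127.0.0.1:8080;     # placeholder for the tunnelled GitLab endpoint
        proxy_set_header Host $host;
    }
}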

Pacemaker not failing over when the nginx service is down

I have set up an HA cluster for nginx, so that when nginx or a node fails, it will fail over to the second node.
pcs status
Cluster name: push_noti_cluster
Stack: corosync
Current DC: push2 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Jul 31 11:29:16 2018
Last change: Tue Jul 31 09:20:05 2018 by root via cibadmin on push1

2 nodes configured
3 resources configured

Online: [ push1 push2 ]

Full list of resources:

 virtual_ip (ocf::heartbeat:IPaddr2): Started push1
 Clone Set: Nginx-clone [Nginx]
     Started: [ push1 push2 ]

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

You have new mail in /var/spool/mail/root
[root#server1 ~]#
Failover works fine when we stop the cluster service with pcs cluster stop on either of these nodes, or when rebooting the servers.
What we want to achieve is a resource failover when nginx on host node01 stops running, so that both resources (virtual_ip/webserver) fail over to the second host, node02.
Is it possible to do a service-level failover? I.e. when one of the resources fails on one node (node01), all the configured resources (here virtual_ip/webserver) should fail over to the other node (node02).
From what you write, I see that the cluster is not configured so that the "active" node must be the node where nginx (or whatever service you need) is actually running.
Try checking your configuration against the examples on this site:
https://wiki.clusterlabs.org/wiki/Example_configurations#Failover_IP_.2B_One_service
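For reference, here is a sketch of the kind of constraints that tie the VIP to a running nginx (pcs syntax; the resource names are taken from your status output above, the intervals and thresholds are illustrative):

# Only run the VIP on a node where the nginx clone is running, and start nginx first
pcs constraint colocation add virtual_ip with Nginx-clone INFINITY
pcs constraint order Nginx-clone then virtual_ip

# Monitor nginx and move away from a node after its first local failure
pcs resource update Nginx op monitor interval=10s timeout=20s
pcs resource meta Nginx migration-threshold=1 failure-timeout=60s

With the colocation constraint in place, a failed (and not locally recoverable) nginx on node01 should drag virtual_ip over to node02 as well.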

Can't establish connection over second NIC (two hops)

We are having trouble with network routing configuration in Ubuntu Xenial.
We have many servers running both Debian 8.4 (Jessie) and Ubuntu 16.04.2 (Xenial)
with the exact same networking setup (or at least as far as we can see).
They all have two NICs attached to two VLANs (say "A" and "B"), both reachable
from other VLANs, for example from VLAN "C".
Both /etc/network/interfaces files are of the form:
NOTE: I faked names and IPs for the sake of better readability.
# VLAN A
auto eth0
iface eth0 inet static
address 192.168.111.xxx
netmask 255.255.255.0
broadcast 192.168.111.255
network 192.168.111.0
gateway 192.168.111.254
dns-nameservers 192.168.111.25 192.168.111.26
# VLAN B
auto eth1
iface eth1 inet static
address 192.168.222.xxx
netmask 255.255.255.0
broadcast 192.168.222.255
network 192.168.222.0
gateway 192.168.222.254 # <-- (Commented out in Ubuntu machine)
dns-nameservers 192.168.111.25 192.168.111.26
...say xxx is 100 for the Debian machine and 200 for the Ubuntu machine, and I'm
trying to ping from 192.168.1.10 in VLAN "C" to the following addresses:
192.168.111.100: Works fine.
192.168.222.100: Works fine.
192.168.111.200: Works fine.
192.168.222.200: NO Answer!!
The "B" vlan is used mostly for backup and other "background" traffic to
avoid saturation problems in vlan "A".
I know that having two network paths to reach the same machine is not a usual
setup, and I must say that only being able to connect through one of them from
other networks is not a big problem nowadays. But what puzzles me is: why can
I reach the Debian machines and not the Ubuntu ones?
On the other hand, if it were working well on both platforms, we could even
consider closing some services (such as ssh and backend interfaces) on NIC
"A" to improve security (our firewall only allows access to VLAN "B" from our
IT staff VLAN).
Of course, as noted in the interfaces snippet above, the gateway
row is commented out on the Ubuntu machines, but that is because networking
initialization fails on those machines otherwise. That is, in fact, what we are
trying to solve.
But both machines' routing tables are almost identical. The only difference
I could see was the onlink flag on the Ubuntu machine:
myUser#debianMachine:~$ sudo ip route
default via 192.168.111.254 dev eth0
192.168.111.0/24 dev eth0 proto kernel scope link src 192.168.111.100
192.168.222.0/24 dev eth1 proto kernel scope link src 192.168.222.100
myUser#ubuntuMachine:~$ sudo ip route
default via 192.168.111.254 dev eth0 onlink
192.168.111.0/24 dev eth0 proto kernel scope link src 192.168.111.200
192.168.222.0/24 dev eth1 proto kernel scope link src 192.168.222.200
...but I was able to remove it with the following command:
myUser#ubuntuMachine:~$ sudo ip route replace default via 192.168.111.254 dev eth0
myUser#ubuntuMachine:~$ sudo ip route
default via 192.168.111.254 dev eth0
192.168.111.0/24 dev eth0 proto kernel scope link src 192.168.111.200
192.168.222.0/24 dev eth1 proto kernel scope link src 192.168.222.200
And it didn't fix the problem.
After that, I also tried to uncomment the gateway row of 'VLAN B', which, as I
said, was commented out in the /etc/network/interfaces file, and tried to
restart networking, but this is what happened:
myUser#ubuntuMachine:~$ sudo /etc/init.d/networking restart
[....] Restarting networking (via systemctl): networking.serviceJob for networking.service failed because the control process exited with error code. See "systemctl status networking.service" and "journalctl -xe" for details.
failed!
...and the onlink flag came back again.
As a note, after commenting that line out again and issuing a new
/etc/init.d/networking restart, the output stays the same until the
machine is rebooted (even so, networking keeps working as usual, apart from
the VLAN B default gateway issue).
Following is the output of the suggested commands:
myUser#ubuntuMachine:~$ sudo systemctl status networking.service
● networking.service - Raise network interfaces
Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
Drop-In: /run/systemd/generator/networking.service.d
└─50-insserv.conf-$network.conf
Active: failed (Result: exit-code) since jue 2017-12-21 14:55:29 CET; 42s ago
Docs: man:interfaces(5)
Process: 8552 ExecStop=/sbin/ifdown -a --read-environment --exclude=lo (code=exited, status=0/SUCCESS)
Process: 8940 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=1/FAILURE)
Process: 8934 ExecStartPre=/bin/sh -c [ "$CONFIGURE_INTERFACES" != "no" ] && [ -n "$(ifquery --read-envi
Main PID: 8940 (code=exited, status=1/FAILURE)
dic 21 14:55:29 ubuntuMachine systemd[1]: Stopped Raise network interfaces.
dic 21 14:55:29 ubuntuMachine systemd[1]: Starting Raise network interfaces...
dic 21 14:55:29 ubuntuMachine ifup[8940]: RTNETLINK answers: File exists
dic 21 14:55:29 ubuntuMachine ifup[8940]: Failed to bring up eth1.
dic 21 14:55:29 ubuntuMachine systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILUR
dic 21 14:55:29 ubuntuMachine systemd[1]: Failed to start Raise network interfaces.
dic 21 14:55:29 ubuntuMachine systemd[1]: networking.service: Unit entered failed state.
dic 21 14:55:29 ubuntuMachine systemd[1]: networking.service: Failed with result 'exit-code'.
...and the meaningful part of sudo journalctl -xe:
dic 21 14:55:29 ubuntuMachine sudo[8922]: myUser : TTY=pts/0 ; PWD=/home/myUser ; USER=root ; COMMAND=/etc/init.d/networking restart
dic 21 14:55:29 ubuntuMachine sudo[8922]: pam_unix(sudo:session): session opened for user root by myUser(uid=0)
dic 21 14:55:29 ubuntuMachine systemd[1]: Stopped Raise network interfaces.
-- Subject: Unit networking.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit networking.service has finished shutting down.
dic 21 14:55:29 ubuntuMachine systemd[1]: Starting Raise network interfaces...
-- Subject: Unit networking.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit networking.service has begun starting up.
dic 21 14:55:29 ubuntuMachine ifup[8940]: RTNETLINK answers: File exists
dic 21 14:55:29 ubuntuMachine ifup[8940]: Failed to bring up eth1.
dic 21 14:55:29 ubuntuMachine systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
dic 21 14:55:29 ubuntuMachine systemd[1]: Failed to start Raise network interfaces.
-- Subject: Unit networking.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit networking.service has failed.
--
-- The result is failed.
dic 21 14:55:29 ubuntuMachine systemd[1]: networking.service: Unit entered failed state.
dic 21 14:55:29 ubuntuMachine systemd[1]: networking.service: Failed with result 'exit-code'.
dic 21 14:55:29 ubuntuMachine sudo[8922]: pam_unix(sudo:session): session closed for user root
I googled a lot and managed to find some related information, but none of it
fully answered my question:
An explanation of the "onlink" flag seemed to point to the possibility that
this flag was responsible for a "wrong return routing", in the sense that it
«tells the kernel that it does not have to check if the gateway is reachable
directly by the current machine», so (I figured) the kernel might think it
could (or should) route the answers to incoming connections from VLAN C
through the default gateway instead of through the same NIC the connection
came in on.
But, as I said, removing the "onlink" flag didn't seem to change
anything.
This Unix StackExchange answer seems to solve the problem (I haven't
tested it yet) by using multiple routing tables and rules (to tell the
kernel which table to use). But it doesn't explain why the Debian
machines work well (I checked the /etc/iproute2/rt_tables file on
both machines and they are identical too):
myUser#bothMachines:~$ sudo cat /etc/iproute2/rt_tables
#
# reserved values
#
255 local
254 main
253 default
0 unspec
#
# local
#
#1 inr.ruhep
So my final hypothesis is that it could just be an implementation difference
between kernel versions and, given that the Ubuntu one is much more recent, this
could be the correct behaviour, so on modern kernels I need to use two
different routing tables (but I'm not sure, and I don't know why...).
myUser#debianMachine:~$ sudo uname -a
Linux debianMachine 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2 (2016-04-08) x86_64 GNU/Linux
myUser#ubuntuMachine:~$ sudo uname -a
Linux ubuntuMachine 4.4.0-87-generic #110-Ubuntu SMP Tue Jul 18 12:55:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
And, hence, the question is:
Are we doing something wrong on the Ubuntu machines (or is there some bug in them)? Or, conversely, is this the correct behaviour, and are we forced to set up a more complex routing schema (either per-VLAN routes or two routing tables) to make the two default gateways work again?
EDIT:
Now I tried to add a static route to fix the problem:
myUser#ubuntuMachine:~$ sudo ip route add 192.168.1.0/24 via 192.168.222.254 dev eth1
...but that froze my ssh connection (through NIC A), although I could then connect through NIC B (at 192.168.222.200).
Having both routes at the same time does not seem to be possible:
myUser#ubuntuMachine:~$ sudo ip route add 192.168.1.0/24 via 192.168.111.254 dev eth0
myUser#ubuntuMachine:~$ sudo ip route add 192.168.1.0/24 via 192.168.222.254 dev eth1
RTNETLINK answers: File exists
EDIT 2:
I finally found the Linux Advanced Routing & Traffic Control HOWTO, which seems to be more accurate than all the other documentation I found. Specifically, in its Chapter 4 (Rules - routing policy database) I see the following text:
If you want to use this feature, make sure that your kernel is
compiled with the "IP: advanced router" and "IP: policy routing"
features
...so I think everything points to my previous hypothesis of a kernel implementation difference being right, and that the difference is, concretely, those two features being compiled in.
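Those two options can be checked against the running kernel's configuration (a quick check, assuming the distribution ships a /boot/config-* file; I have not yet compared the two machines this way):

grep -E 'CONFIG_IP_ADVANCED_ROUTER|CONFIG_IP_MULTIPLE_TABLES' /boot/config-$(uname -r)
# CONFIG_IP_ADVANCED_ROUTER=y   <- "IP: advanced router"
# CONFIG_IP_MULTIPLE_TABLES=y   <- "IP: policy routing" (multiple routing tables)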
Not an authoritative answer, but my first working attempt (applying what I managed to understand):
sudo ip route add 192.168.1.0/24 via 192.168.222.254 from 192.168.222.200 dev eth1 table 253
sudo ip rule add from 192.168.222.200 table 253
Update: the from and dev arguments in the ip route command aren't required (it works perfectly well without them).
...after issuing the first command I still couldn't connect, but after issuing the second one I could.
The logic behind that comes from this text I found in this document:
Linux-2.x can pack routes into several routing tables identified by a number in the range from 1 to 255 or by name from the file /etc/iproute2/rt_tables By default all normal routes are inserted into the main table (ID 254) and the kernel only uses this table when calculating routes.
Actually, one other table always exists, which is invisible but even more important. It is the local table (ID 255). This table consists of routes for local and broadcast addresses. The kernel maintains this table automatically and the administrator usually need not modify it or even look at it.
In fact, I finally ended up using another routing table, identified by its id (253), instead of what I now understand is just an alias (defined in the /etc/iproute2/rt_tables file).
...and checking that file again, I now see that there was already an alias ("default") defined for that routing table (next to the "main" one, which is indeed 254, as the text fragment I pasted above says).
What I don't know yet is the logic behind this naming (the "default" alias for table 253, I mean), and whether, for some reason, it is better to use low routing table numbers (1, 2, 3...) like the solution already mentioned in the question does.
But, for the sake of simplicity, if we aren't going to build complex routing policies and just want to fix this connectivity issue, I guess it could be a good solution to use something like (not yet tested):
gateway 192.168.222.254 table 253
post-up ip rule add from 192.168.222.200 table 253
I still need to test whether I need an additional via 192.168.222.254 in the gateway row, or whether it won't work at all and I need to add it with another post-up command instead.
I will update this answer with the results.
Edit 1: The same works with default routes:
sudo ip route add default from 192.168.222.200 via 192.168.222.254 table 253
sudo ip rule add from 192.168.222.200 table 253
Edit 2: First (now fully¹) working approach
After playing for a while with a test machine, I think the best solution is to add the following rows to the second NIC's configuration in the /etc/network/interfaces file:
gateway 192.168.222.254 table 1
post-up ip rule add from 192.168.222.200 table 1
pre-down ip rule del from 192.168.222.200 table 1
post-up ip route add 192.168.222.0/24 dev eth1 src 192.168.222.200 table 1
Comments:
Adding table 1 to the gateway keyword worked well, so an additional (less readable) post-up command to add that default route was not necessary.
...in fact, using a specific table (other than main) for the first NIC, together with a rule similar to the one used for the second NIC, would be a bad idea: that rule would only apply when 192.168.111.200 is used as the source address, so there would be no "default default gateway". Leaving the first NIC's configuration in the main routing table makes all ("locally generated") outgoing connections to remote LANs go through our first default gateway by default.
The first post-up command adds a rule saying that packets with that NIC's source address should be routed using table 1 (otherwise our new default gateway wouldn't be used).
The pre-down command removes that rule. It is not mandatory, but without it every networking service restart adds a duplicate of the rule.
I also tried to use dev eth1 instead of from 192.168.222.200 (to avoid having to repeat the address), but it didn't work. I guess the NIC to use for "response" packets was "not yet decided" at that point.
I used table 1 for eth1 (our second NIC); I could use table 2 for an eventual third NIC, and so on. There was no need to specify any table/rule for the first NIC because it goes into the main table (not "default": see the note below).
Finally(¹), the second post-up command makes everything work because (as I now realize) only one routing table (the first matching one) is used, so the network route automatically created when the interface is brought up doesn't apply, since it was created in the main table.
I still don't know if there is a way to force it to be created directly in table 1.
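Putting the pieces together, the whole eth1 stanza would look roughly like this (a sketch using the same fake addresses as above; I have not yet rebooted with it as a whole):

# VLAN B
auto eth1
iface eth1 inet static
    address 192.168.222.200
    netmask 255.255.255.0
    broadcast 192.168.222.255
    network 192.168.222.0
    dns-nameservers 192.168.111.25 192.168.111.26
    gateway 192.168.222.254 table 1
    post-up ip rule add from 192.168.222.200 table 1
    post-up ip route add 192.168.222.0/24 dev eth1 src 192.168.222.200 table 1
    pre-down ip rule del from 192.168.222.200 table 1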
NOTE: With the command sudo ip rule list we can see the current routing rules:
0: from all lookup local
32765: from 192.168.222.200 lookup 1
32766: from all lookup main
32767: from all lookup default
As far as I understand, they are added with decreasing priorities from 32767
down to 0 and evaluated in increasing order until one matches. The last two
and the "0" one were already defined by default; the former because of the
logic I previously cited from this document, but that document says that rules
start from "1", so I guess "0" must be some predefined "default starting point".
Edit 3:
As I said in Edit 2 of the question, I found this Linux Advanced Routing & Traffic Control HOWTO, which helped me a lot in clarifying things.
Concretely, the Routing for multiple uplinks/providers chapter was very useful for understanding setups with "network loops" (even though in our case we aren't acting as a router to the Internet).

JasperServer HTTP 404 error

Could someone please help me figure out this issue? I have installed the JasperServer trial version on Windows XP. The Tomcat server seems to work fine, but when I try to connect to JasperServer I get an HTTP 404 error.
SEVERE: A web application appears to have started a TimerThread named
[adhocCache] via the java.util.Timer API but has failed to stop it.
To prevent a memory leak, the timer (and hence the associated thread)
has been forcibly cancelled.
Jul 15, 2013 4:22:04 PM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Server startup in 30926 ms
In the localhost log file under the Tomcat logs I could also see an error:
Caused by: net.sf.ehcache.config.InvalidConfigurationException: There is one error in your configuration:
* CacheManager configuration: You've assigned more memory to the on-heap than the VM can sustain, please adjust your -Xmx setting accordingly
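The second error points at the cause of the 404: the web application fails to deploy because the ehcache on-heap sizes exceed the JVM's maximum heap, so Tomcat has nothing to serve at the JasperServer context path. A minimal sketch of the adjustment the message asks for, assuming a default Tomcat layout on Windows (values are illustrative and should fit the machine's RAM and the configured cache sizes):

rem %CATALINA_HOME%\bin\setenv.bat  (create the file if it does not exist)
rem Give the Tomcat JVM enough heap to cover the caches configured for JasperServer
set CATALINA_OPTS=-Xms512m -Xmx1024m -XX:MaxPermSize=256m

After adding it, restart Tomcat so catalina.bat picks up the new options.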

No log files to process (Endeca)

I want to create a daily report with Endeca, so I have the log server running on port 15010, but when I start the WeeklyReportGenerator something seems to be wrong. I think it is because of an error with the log server; I checked the log and this is the error:
Oct 12, 2012 10:19:17 AM com.endeca.forge.base.Pipeline$Engine$1 handle
WARNING: Error in pipeline: No log files to process
Oct 12, 2012 10:19:17 AM com.endeca.rg.components.input.FileSystemMultiInput$Engine$Statistics log
INFO: LogFileInput/FileSystemInput/com.endeca.rg.components.input.FileSystemMultiInput: Progress: 1/1 (100%), 0:00:00 remaining
Oct 12, 2012 10:19:17 AM com.endeca.rg.ReportGenerator main
SEVERE: Unable to proceed
Pipeline execution interrupted by exception
No log files to process
java.lang.RuntimeException: No log files to process
at com.endeca.rg.components.input.LogFileInput$Substitution$1$Engine.portClosed(LogFileInput.java:269)
Any clue about what is wrong?
The reporting processes need log files in order to produce reports. By default, no log messages are sent to the log server.
If you look at the orange reference app (http://:8006/endeca_jspref ) you'll see that it does implement logging. If you look at the logging_functions.jsp, you can see a good basic implementation of how to send log messages ( C:\Endeca\ToolsAndFrameworks\11.1.0\reference\endeca_jspref\logging_functions.jsp )
If you're using the Assembler API, it will handle most logging for you. Make sure you have the correct hostname and port configured. If you need to extend or replace the logging, look for the com.endeca.infront.navigation.event.LogServerAdapter in the assembler-context.xml.
