Connecting an Apache Kafka consumer to a single node of a Kafka cluster - networking

I have a Kafka cluster on a private LAN, and I want a consumer on a different LAN to access the data. Due to network restrictions, I can only reach the cluster's main IP address (no DNS); let's call it master-node.
My consumer connects to the cluster without a problem, but the cluster instructs the consumer to fetch data from node1, node2, and node3, which I do NOT have network access to.
Is there a way to ask the master-node to gather the data on behalf of my consumer?

Consumers connect directly to the individual brokers that are leaders for the partitions they read from; this is what gives Kafka its scalability. Funnelling all traffic through a single endpoint would also introduce a single point of failure.
If you need such a "proxy", the only option I am aware of is the Kafka REST Proxy, and then you would have to consume and produce over HTTP rather than with native Kafka clients.
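If you do go the REST Proxy route, consuming becomes plain HTTP. Here is a minimal sketch in Python, assuming the proxy runs on master-node:8082 (a hypothetical port) and exposes the standard v2 consumer API; the group and instance names are placeholders:

```python
# Sketch of consuming through the Kafka REST Proxy instead of the native
# protocol. The proxy talks to the brokers; the client only needs to
# reach master-node (assumed host/port below).
import json
import urllib.request

BASE = "http://master-node:8082"           # assumed proxy endpoint
CONTENT = "application/vnd.kafka.v2+json"  # REST Proxy v2 media type

def create_consumer_request(group, name):
    """Build the POST request that registers a consumer instance in a group."""
    body = {"name": name, "format": "json", "auto.offset.reset": "earliest"}
    return (f"{BASE}/consumers/{group}",
            json.dumps(body).encode(),
            {"Content-Type": CONTENT})

def subscribe_request(group, name, topics):
    """Build the POST request that subscribes the instance to topics."""
    body = {"topics": topics}
    return (f"{BASE}/consumers/{group}/instances/{name}/subscription",
            json.dumps(body).encode(),
            {"Content-Type": CONTENT})

def records_url(group, name):
    """URL to poll for records; the proxy fetches from the brokers for you."""
    return f"{BASE}/consumers/{group}/instances/{name}/records"

# Usage (requires a running proxy): POST create_consumer_request(...),
# then POST subscribe_request(...), then GET records_url(...) in a loop
# with urllib.request.urlopen(), using an Accept header of
# "application/vnd.kafka.json.v2+json" for the records call.
```

Note that offsets, rebalancing, and instance expiry are then managed by the proxy, not by your client, so latency and throughput will be lower than with a native consumer.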

Related

How to use one external ip for multiple instances

In GCloud we have one Kubernetes cluster with two nodes. Is it possible to set up all nodes to get the same external IP? Right now we are getting two external IPs.
Thank you in advance.
The short answer is no, you cannot assign the very same external IP to two nodes or two instances, but you can use the same IP to access them, for example through a LoadBalancer.
The long answer
Depending on your scenario and the infrastructure you want to set up, several ways are available to expose different resources through the very same IP.
I do not know why you want to assign the same IP to the nodes, but since each node is a Google Compute Engine instance you can set up a load balancer (TCP, SSL, HTTP(S), internal, etc.). In this way you reach the nodes as if they were not part of a Kubernetes cluster: basically you are treating them as Compute Engine instances, and you will be able to connect to any port they are listening on (for example an HTTP server or an external health check).
Notice that you will not be able to reach the Pods this way: the services and containers run in a separate software-defined network and are not reachable unless properly exposed, for example with a NodePort.
On the other hand, if you are interested in making Pods running on two different Kubernetes nodes reachable through a single entry point, you have to set up Kubernetes Ingress and load-balancing resources to expose your services. These resources are also built on the Google Cloud Platform load balancer components, but when created they additionally trigger the required changes to the Kubernetes network.
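For the common case, a Service of type LoadBalancer is enough to give Pods on both nodes one shared external IP. A hedged example manifest (the service name, labels, and ports below are assumptions you would adapt):

```yaml
# One external IP in front of Pods spread across both nodes;
# GCP provisions the load balancer when this Service is created.
apiVersion: v1
kind: Service
metadata:
  name: web            # hypothetical service name
spec:
  type: LoadBalancer
  selector:
    app: web           # must match your Pod labels
  ports:
  - port: 80           # port exposed on the external IP
    targetPort: 8080   # port your containers listen on
```

After `kubectl create -f` and a short wait, `kubectl get service web` shows the single EXTERNAL-IP shared by all backing Pods.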

OpenStack Compute-node communicate/ping vms run on it

In Ceilometer, when pollsters collect meters from VMs, they use the hypervisor on the compute node. Now I want to write a new plugin for Ceilometer that does not use the hypervisor to collect meters; instead, I want to collect meters via a service installed on the VMs (i.e., Ceilometer gets data from that service), so the compute node must be able to communicate with the VMs by their private IPs. Is there any solution for this?
Thanks all.
In general the internal network used by your Nova instances is kept intentionally separate from the compute hosts themselves as a security precaution (to ensure that someone logged into a Nova server isn't able to compromise your host).
For what you are proposing, it would be better to adopt a push model rather than a pull model: run a service inside your instances that publishes data to some collector accessible at a routable IP address.
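The push model can be sketched in a few lines of Python. The collector URL and the payload shape below are assumptions for illustration, not a Ceilometer API; the agent would run inside each VM on a timer:

```python
# Minimal push-model agent sketch: sample a metric inside the VM and
# POST it to a collector at a routable address (hypothetical endpoint).
import json
import time
import urllib.request

COLLECTOR = "http://203.0.113.10:8777/samples"  # assumed routable collector

def build_sample(resource_id, meter, value):
    """Shape one measurement the way a collector might expect it."""
    return {
        "resource_id": resource_id,   # e.g. the instance UUID
        "meter": meter,               # e.g. "cpu_util"
        "value": value,
        "timestamp": time.time(),
    }

def push(sample, url=COLLECTOR):
    """POST the sample as JSON; call this periodically from inside the VM."""
    req = urllib.request.Request(
        url,
        data=json.dumps(sample).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)  # requires the collector to be up
```

The collector then forwards or transforms the samples into whatever Ceilometer ingestion path you choose; only the collector needs a routable address, so the tenant network stays isolated from the compute hosts.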

Openstack with neutron on two physical nodes

We have two physical systems (Ubuntu 14.04.2), each with 2 physical NICs.
Is it possible to install OpenStack (Juno) with Neutron on these?
Official documentation says that we need 3 nodes, with the network node having 3 NICs.
Any help would be greatly appreciated.
Thanks,
Deepak
You can install all of OpenStack on a single system for development and testing purposes. Given that a single node installation is possible, it should follow that a two-node installation is also possible (and it is).
The documentation recommends three NICs because this leads to the simplest configuration. However, you can run a network host with two NICs. There are several different traffic types you'll be dealing with:
Public web (Horizon) traffic
Public API traffic (if you expose the APIs)
Internal API traffic
Tenant internal network traffic (traffic between Nova instances and the compute host)
Tenant external network traffic (traffic between Nova instances and "the rest of the world")
Storage (transferring Glance images, iSCSI for Cinder volumes, etc)
Being able to segment these in a meaningful fashion can lead to a more manageable and more performant environment. With only two NICs, you are probably looking at one for "internal traffic" (internal API, storage, tenant internal networking, etc.) and one for "external traffic" (dashboard, public APIs, tenant external traffic). This is certainly possible, but it means, for example, that excessive traffic from your tenants can impact access to the dashboard, and that a high volume of storage traffic can impact access to Nova instances.
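As a concrete sketch of that two-NIC split, the Neutron Linux bridge agent on each node might be configured like this (interface names and the VXLAN address are assumptions for your environment):

```ini
# /etc/neutron/plugins/ml2/linuxbridge_agent.ini (hedged sketch)
# eth0: management/API/storage + tenant overlay traffic
# eth1: tenant external (provider) traffic
[linux_bridge]
physical_interface_mappings = provider:eth1

[vxlan]
enable_vxlan = true
local_ip = 10.0.0.11   # eth0 address, carries tenant VXLAN traffic
```

The equivalent split applies if you use the Open vSwitch agent instead; the point is simply that one NIC terminates provider/external networks while the other carries everything internal.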
If/when your environment grows beyond two nodes, you may want to investigate adding additional NICs to your configuration.

Can Kafka brokers receive data without topics being created?

I am trying to send data from a router to the Kafka brokers. A router can only be configured with the IP address and port number of the Kafka server.
I do not want to introduce a Java layer to consume messages from the router because it will add latency.

How to filter connection in DB2 as Oracle does?

I have talked with some Oracle DBAs, and they told me there is a component called the Oracle Listener ( http://docs.oracle.com/cd/B19306_01/server.102/b14196/network.htm ) that allows filtering the network traffic to a database, for example when the machine has many network interfaces.
Is there any similar tool in DB2, or how can I do this? I can only configure one port per instance, and that is all. If I want to configure more, I have to do it via iptables or a firewall. However, I cannot configure which users, applications, or workloads should connect through which network interface.
A database server is generally not a good place to implement and manage a complicated networking scheme. From a CPU standpoint, you'd be better off delegating that responsibility to more specialized equipment (switches, routers, firewalls, etc.), and saving precious CPU cycles for database query processing instead. Running a simple, straightforward network configuration on the database server will also make it easier to secure your databases, because fewer administrators will require root access on the database server when it doesn't require regular attention from network admins.
Although the DB2 instance listens on just one TCP port (specified by the DBA), it will listen on that port on multiple network adapters and multiple IP addresses defined on those adapters. The instance will also listen on other network protocols that you've specified via the DB2COMM registry variable. Nothing at the DB2 configuration level controls which local NICs and/or IP addresses are allowed to accept inbound DB2 connection requests. However, when such granularity is needed, it's best to handle that from a dedicated firewall or router rather than a copy of iptables running locally.
I can't think of a reason that DB2's policy of one TCP port number per DB2 instance should be treated as a limitation. Even if DB2 allowed it (or could be tricked into doing so), listening on additional ports wouldn't accelerate response times for establishing a database connection, nor would it provide the instance with any more bandwidth than it already had. Increasing the number of agents/threads would change the performance characteristics, but none of those actions require the instance to listen on more than one TCP port. It would help if I understood the nature of your current (or anticipated) problem that stems from this policy.
If some of your questions are based on concerns about a NIC being a single point of failure, you may want to look into ethernet bonding, which creates the appearance of a single logical NIC from a pair of physical NICs. This is handled by networking features of the operating system, effectively hiding the complexity from database servers and other networked applications.
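On Linux, bonding is configured at the OS level and is invisible to DB2. A hedged example for Ubuntu's /etc/network/interfaces (the address and interface names are assumptions; the ifenslave package must be installed):

```
# Two physical NICs presented to DB2 as one logical interface, bond0.
auto bond0
iface bond0 inet static
    address 192.0.2.10       # hypothetical server address
    netmask 255.255.255.0
    bond-mode active-backup  # failover mode, needs no switch support
    bond-miimon 100          # link-check interval in milliseconds
    bond-slaves eth0 eth1    # the physical NICs behind bond0
```

If either physical NIC fails, traffic continues over the other with no change to the DB2 configuration.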
Network adapters in most servers now operate at gigabit speeds or faster, which all but eliminates the risk of the NIC being saturated by legitimate database traffic. If your DB2 application workload really is pushing the gigabit per second boundary all by itself, then congratulations, your organization is probably getting enough value out of the database to consider clustering it across multiple physical servers (InfoSphere Warehouse or DB2 pureScale, depending on the workload). If you're occasionally encountering network contention on the DB2 server that is caused mostly by other traffic, such as network-attached storage or network-based backups, that traffic can be isolated to specific NICs and away from the DB2 clients through network addressing techniques and some routing/switching hardware.
There is another approach to restricting connections to the database: the CONNECT_PROC database configuration parameter. You just have to create a stored procedure without parameters and name it in this configuration parameter.
The stored procedure allows or denies each connection based on information it retrieves from the environment.
For more information please check this paper: http://www.ibm.com/developerworks/data/library/techarticle/dm-1305db2access/index.html
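A hedged sketch of what such a procedure might look like (the schema, procedure name, allow list, and SQLSTATE below are assumptions; the linked paper covers the full mechanism):

```sql
-- No-argument procedure named in CONNECT_PROC; it rejects connections
-- whose client application name is not on a hypothetical allow list.
CREATE OR REPLACE PROCEDURE DBA.CONNECT_FILTER()
LANGUAGE SQL
BEGIN
  IF CURRENT CLIENT_APPLNAME NOT IN ('payroll', 'reporting') THEN
    -- Raising an error here causes the connection attempt to fail.
    SIGNAL SQLSTATE '42502' SET MESSAGE_TEXT = 'Connection refused by policy';
  END IF;
END
-- Then register it:
--   db2 UPDATE DB CFG FOR mydb USING CONNECT_PROC DBA.CONNECT_FILTER
```

Note this filters on session attributes (user, application name, and similar special registers), not on the network interface the request arrived on; interface-level filtering still belongs on a firewall or router.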