I want to have two Artifactory instances: one on a server that is connected to the internet and can download artifacts, and another that has no internet access and is connected to the server that does. Many clients are connected to the server without internet access.
How can I configure them?
Just use the disconnected instance as a proxy to the connected one (declare the connected instance as a remote repository on the disconnected one).
On the instance connected to the internet, configure remote repositories that serve as proxies to the various public repositories you would like to use. The remote repositories will cache the artifacts downloaded from the public repositories.
After you have the remote repositories configured, you can aggregate them into a single virtual repository. You can use the 'remote-repos' repository, which you should already have.
On the disconnected instance, create a remote repository pointing at the virtual repository you created on the connected instance (for example, 'remote-repos').
Clients should be configured to resolve artifacts from the instance on the disconnected server.
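If you prefer to script this, Artifactory's repository REST API (available on licensed instances) can create both repositories. A minimal sketch, assuming Maven repositories and placeholder hostnames, repository keys, and credentials:

# On the connected instance: a remote repository proxying Maven Central
curl -u admin:password -X PUT "https://connected.example.com/artifactory/api/repositories/maven-central-remote" \
  -H "Content-Type: application/json" \
  -d '{"key": "maven-central-remote", "rclass": "remote", "packageType": "maven", "url": "https://repo1.maven.org/maven2/"}'

# On the disconnected instance: a remote repository pointing at the connected
# instance's virtual repository (e.g. 'remote-repos')
curl -u admin:password -X PUT "https://disconnected.example.com/artifactory/api/repositories/connected-remote-repos" \
  -H "Content-Type: application/json" \
  -d '{"key": "connected-remote-repos", "rclass": "remote", "packageType": "maven", "url": "https://connected.example.com/artifactory/remote-repos"}'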
I have set up a cluster within Kubernetes using JGroups, and the cluster appears to form correctly. Each node has a local IP and a public IP. When I connect to one of the nodes using the public IP, all is fine, but the list of available nodes that is returned to the client (a WildFly instance) contains the local IPs of the nodes rather than their public ones. I have defined the connector with the public IP:
<connectors>
   <connector name="netty-connector">tcp://{public ip}:61616</connector>
</connectors>
and then configured the broadcast as
<broadcast-groups>
   <broadcast-group name="my-broadcast-group">
      <broadcast-period>5000</broadcast-period>
      <jgroups-file>jgroups-file_ping.xml</jgroups-file>
      <jgroups-channel>activemq_broadcast_channel</jgroups-channel>
      <connector-ref>netty-connector</connector-ref>
   </broadcast-group>
</broadcast-groups>
and then configured the discovery as
<discovery-groups>
   <discovery-group name="my-discovery-group">
      <jgroups-file>jgroups-file_ping.xml</jgroups-file>
      <jgroups-channel>activemq_broadcast_channel</jgroups-channel>
      <refresh-timeout>10000</refresh-timeout>
   </discovery-group>
</discovery-groups>
and finally the cluster as
<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>netty-connector</connector-ref>
      <retry-interval>500</retry-interval>
      <use-duplicate-detection>true</use-duplicate-detection>
      <message-load-balancing>STRICT</message-load-balancing>
      <max-hops>1</max-hops>
      <discovery-group-ref discovery-group-name="my-discovery-group"/>
   </cluster-connection>
</cluster-connections>
Whenever I force a node to shut down, the client reconnects but fails and reports the local IP of the node. I was under the impression that the connector defined in the broker was used to broadcast to other members of the cluster, but it uses the local IP instead. Is that correct?
WildFly runs and sends and receives messages, but every few minutes I get the following log:
14:27:31,463 WARN [org.apache.activemq.artemis.service.extensions.xa.recovery] (Periodic Recovery) AMQ122015: Can not connect to XARecoveryConfig [transportConfiguration=[TransportConfiguration(name=, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?trustStorePassword=****&port=61616&sslEnabled=true&host=x-x-x-x&trustStorePath=client-ts], discoveryConfiguration=null, username=username, password=****, JNDI_NAME=java:/RemoteJmsXA] on auto-generated resource recovery: ActiveMQNotConnectedException[errorType=NOT_CONNECTED message=AMQ119007: Cannot connect to server(s). Tried with all available servers.]
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:797)
at org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.connect(ActiveMQXAResourceWrapper.java:311)
at org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.getDelegate(ActiveMQXAResourceWrapper.java:239)
at org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.recover(ActiveMQXAResourceWrapper.java:69)
at org.apache.activemq.artemis.service.extensions.xa.ActiveMQXAResourceWrapperImpl.recover(ActiveMQXAResourceWrapperImpl.java:106)
at com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.xaRecoveryFirstPass(XARecoveryModule.java:634)
at com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.periodicWorkFirstPass(XARecoveryModule.java:226)
at com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.periodicWorkFirstPass(XARecoveryModule.java:171)
at com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.doWorkInternal(PeriodicRecovery.java:770)
at com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.run(PeriodicRecovery.java:382)
This is the expected behavior, as you are connecting through a load balancer. You can work around it by setting useTopologyForLoadBalancing=false and specifying the servers explicitly in your connection URL.
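For example, a client connection URL along these lines should do it (the broker hostnames are placeholders for your actual addresses):

(tcp://broker-0.example.com:61616,tcp://broker-1.example.com:61616)?useTopologyForLoadBalancing=false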
When using WildFly, the connection factory or pooled connection factory must be configured with the attribute use-topology-for-load-balancing set to false. This is how to set this from the CLI (replace remote-artemis with your actual name):
/subsystem=messaging-activemq/pooled-connection-factory=remote-artemis:write-attribute(name=use-topology-for-load-balancing, value=false)
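To double-check that the attribute took effect, and to reload the server if the CLI reports that a reload is required, something like this should work in the same CLI session:

/subsystem=messaging-activemq/pooled-connection-factory=remote-artemis:read-attribute(name=use-topology-for-load-balancing)
reload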
I eventually got it working by creating a Service per pod and putting the public IP in the connector definition for each node.
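For reference, the per-pod Service trick looks roughly like this when the brokers run as a StatefulSet; the names and labels below are illustrative rather than taken from the actual setup:

apiVersion: v1
kind: Service
metadata:
  name: broker-0-external
spec:
  type: LoadBalancer                # or NodePort, depending on how public IPs are provisioned
  selector:
    statefulset.kubernetes.io/pod-name: broker-0   # label set automatically on StatefulSet pods
  ports:
    - port: 61616
      targetPort: 61616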
We have one server [Windows Server 2016] and I want to monitor that server by installing the Wazuh tool.
I saw the documentation, but I am still confused. Do I need to install
Wazuh Server
Wazuh Agent
Kibana
on the server? I don't see any article related to installing the Wazuh server on a Windows machine.
After following the Wazuh documentation, I was able to get to a certain point:
Installed VirtualBox on the Windows Server.
Downloaded the Wazuh OVA file and imported it into VirtualBox.
Now I can connect to the Wazuh server using the default credentials.
Now I am stuck at one place. I need to get the IP. I tried the 'ip addr' command, but it still shows only 127.0.0.1/8.
As far as I can tell, it is creating some dynamic IPs. Is there a way to set up a static IP, so that I can access the Wazuh web console through that IP?
Some of my findings:
It seems that the eth0 network interface of the VM does not have an IPv4 address assigned to it.
In the video in the documentation, running 'ip addr' shows a dynamic IPv4 address as well as the IPv6 address, so I suspect this is why you cannot access the web console. It could be caused by the type of network interface you created for the VM in VirtualBox.
-------- Edited----------
Following your guidance, I did the following.
Wazuh Server:
VirtualBox -> Adapter 1 -> Bridged Adapter
VirtualBox -> Adapter 2 -> Host-only Adapter
Started the VM and ran the 'ip addr' command. Got the following IPs: eth0 [192.168..] and eth1 [10.0..]
In the browser I tried https://192.168.. and was able to log in to Kibana.
Wazuh Agent:
On the server that I am going to monitor, I installed the Wazuh agent. In the Wazuh config file, I need to specify the server address.
Here I am a bit confused: should I give the actual IP of the server where the Wazuh server runs, or one of the IPs shown by the 'ip addr' command?
I have tried all the IPs. When I check the logs, it shows:
start_agent.c:100 at connect_server(): ERROR: (1216): Unable to connect to 'xx.xx.xx.xxx': 'Bad file descriptor'.
I recommend reading the Architecture guide for a better understanding of how Wazuh works. Its architecture is based on agents, which means you need to install the Wazuh agent on the endpoints you want to monitor (for example, your Windows server) and then connect these agents to a Wazuh manager server (which needs to be installed on a Linux machine, so you will need another server).
Kibana/Splunk are optional but useful tools for indexing the data generated by the manager for better visualization. I recommend using Kibana and the Elastic Stack.
For the Linux Wazuh manager server, I recommend trying the all-in-one deployment, or, if you will have few agents connected and don't want to deploy an instance from scratch, you could try the pre-built virtual machine appliance (OVA).
I hope this helps. The best place to start using Wazuh is the Getting started guide; I recommend reading that first of all.
------------------------ edit --------------------
Hello,
I'm sorry if I wasn't clear enough. Wazuh has two main components: the manager (called 'server' in the documentation) and the agent.
The manager is also called a server because it serves the Wazuh service itself, that is, the part of Wazuh that analyzes security events and generates alerts.
The Wazuh agent (despite its name) is also installed on the servers you want to monitor, and it is used to send security events to the Wazuh manager (server) so they can be analyzed.
That said, to properly monitor a Windows server you need to install the Wazuh Windows agent on it, because it is designed to monitor Windows servers. And you need to connect this agent to a Wazuh server. Here you have different options:
You could install the Wazuh manager on another (Linux) server.
You could install docker and docker-compose on your Windows server and use the wazuh-docker GitHub repository to deploy a Wazuh manager stack (with Wazuh, Elasticsearch and Kibana) to connect your agent to; see the sketch after this list.
You could deploy the Wazuh OVA (VM appliance) on VirtualBox or similar software (this virtual machine has the Wazuh manager, Elasticsearch and Kibana installed by default).
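For the docker option, the rough shape is below; the exact location of the compose file varies between wazuh-docker releases, so check the repository README first:

git clone https://github.com/wazuh/wazuh-docker.git
cd wazuh-docker
# the compose file may live in a subdirectory depending on the release
docker-compose up -d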
I see that you're trying the third option, deploying the Wazuh OVA on VirtualBox. Nevertheless, remember that you still must install the Windows agent as well and connect it to the Wazuh manager.
Regarding the IP question: my advice is to open the VirtualBox configuration for the machine and set up two network interfaces (adapters): one host-only adapter (which will have a static IP that you can use to connect from your local browser) and one bridged adapter (to connect to the internet). Then I recommend using nmtui (a console user interface for NetworkManager) to set up your static IP as in the attached capture. That should be enough.
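As for which IP to put in the agent configuration: use the manager address that is reachable from the monitored host; with the adapter layout above that would typically be the static host-only IP (or the bridged IP if agent and manager are on the same LAN). In recent Wazuh versions the relevant section of the agent's ossec.conf looks roughly like this, with MANAGER_IP as a placeholder:

<ossec_config>
  <client>
    <server>
      <address>MANAGER_IP</address>
      <port>1514</port>
      <protocol>tcp</protocol>
    </server>
  </client>
</ossec_config>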
I'm trying to debug an issue with a Service Fabric node, and I want to RDP into the node in order to read internal log data.
However, I'm deploying to a local cluster, and I access my development machine via RDP. If I try to RDP into localhost from my development machine, I of course get 'you already have a console session in progress'; I'm already connected to this machine...
How can I remote desktop into a locally-running service fabric node when RDP is running on the host machine?
I have a question regarding an implementation in a secured environment:
I have the NAC and Nexus+repo in one environment, and the NES and agents in a secured environment (over FW). The NAC is connected one way to the NES, as stated in the documentation.
Is there a need to open a connection between the NES and the Nexus repo, or not? I did not find any documentation on that...
What are the best practices for deploying to agents over FW?
Thanks.
You can find the list of ports used for communication between the NAC, the NES and the other tools in a Nolio environment; those ports should be opened for communication by adding exceptions in your firewall rules.
The NES doesn't need to communicate with the Nexus repository (its main job is to serve as a proxy between the NAC and the agents).
Note:
The NAC connects to the repository to sync its action packs and store manifests.
Agents that serve as artifact-retrieval agents, for artifacts that are set to be stored in the internal repository, need to be able to connect to the repository.
I'm not sure what FW is.
In our project we created an artifact type under Artifact Management and defined it as zip/tar. If you can access the artifact over HTTP, that URL can be provided as the artifact definition. There is also an option to download the artifact and store it in Nolio if you tick the checkbox.
The application I'm currently working on is entering a pre-release phase.
In this phase, the server-side application components are deployed on Amazon VMs while the client-side application remains on the user's machine.
The application connects to the server using JNDI and RMI to call remote EJB methods. This works well on localhost and on the local network.
But when trying to connect to the Amazon host, the application hangs in the context.lookup method. That is to say, a JNDI context can be obtained from the remote server, but no lookup can be performed on that context.
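For reference, the client-side lookup follows the usual GlassFish remote-EJB pattern, roughly as sketched below (host, port, and JNDI name are illustrative; 3700 is GlassFish's default IIOP port, and the GlassFish client libraries are assumed to be on the classpath):

import java.util.Properties;
import javax.naming.InitialContext;

public class RemoteEjbClient {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // ORB bootstrap host/port of the remote GlassFish server (placeholders)
        props.setProperty("org.omg.CORBA.ORBInitialHost", "ec2-xx-xx-xx-xx.compute-1.amazonaws.com");
        props.setProperty("org.omg.CORBA.ORBInitialPort", "3700");

        InitialContext ctx = new InitialContext(props);
        // Obtaining the context succeeds; the hang described above happens here:
        Object ref = ctx.lookup("java:global/myapp/MyBeanRemote");
        System.out.println("Looked up: " + ref);
    }
}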
What can I do to obtain a good diagnostic of the failure?
Are there logs that can be generated for the RMI handshake?
Is there any way to see, on the server side, whether the query really makes its way across the internet to the server?
Also note that I've already enabled public IP usage on my GlassFish server (using the recommended Oracle procedure).
EDIT: According to a quick TCP capture on the server, it seems that the server receives the client context query with the client's in-LAN address, which the server of course isn't aware of:
query is
[3/27/2012 11:05:22 AM:169]
GIOP.......(................NameService....._is_a...................
NEO................ª.......(IDL:omg.org/SendingContext/CodeBase:1.0.
...........n........172.27.63.145.ܺ....¯«Ë........e................
........... ................... ... ...........&...............(IDL:
omg.org/CosNaming/NamingContext:1.0.
reply is
[3/27/2012 11:05:22 AM:171]
GIOP.......2............NEO................0.......(IDL:omg.org/Send
ingContext/CodeBase:1.0............ô........46.137.114.67.'5....¯
«Ë........d........................... ................... .........
.....&...........!...|...............$....f............10.241.42.208
.'6.#........g..............g........default...................g....
...........+IDL:omg.org/CosNaming/NamingContextExt:1.0..............
.......10.241.42.208.'5...M¯«Ë.... ...d... S1AS-ORB............Root
POA....TNameService............................... .................
.. ... ...........&......
(as read using SmartSniff ASCII output).
The IP in the query (172.27.63.145) is my IP on my company LAN. From what I understand of communication over the internet, it should be my company LAN's public IP, no? How can I make the GlassFish client understand it should use that IP?
The diagnostic has been clearly obtained: the client, which connects from a LAN to the server, sends its own internal private network address to the server, and the server can't route an answer back to that address. As a consequence, the server doesn't answer, hence the hang.
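For what it's worth, with plain RMI the usual knob for this class of problem is to tell the client JVM which address to advertise, e.g.

java -Djava.rmi.server.hostname=203.0.113.10 -jar client.jar

(203.0.113.10 standing in for the company's public address). Since the capture above shows GIOP, though, this traffic is IIOP/CORBA rather than plain RMI, so the equivalent would be an ORB-level property; older Sun/JDK ORBs honored com.sun.CORBA.ORBServerHost for the advertised host, but verify the exact property name against your GlassFish client version's documentation.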