How to bind Artifactory to localhost only?

According to Artifactory documentation,
For best security, when using Artifactory behind a reverse proxy, it must be co-located on the same machine as the web server, and Artifactory should be explicitly and exclusively bound to localhost.
How can I configure Artifactory so that it is bound to localhost only?

As of Artifactory version 7.12.x, there are two endpoints exposed for accessing the application:
Port 8082 - all the Artifactory services (UI + API) via the JFrog router
Port 8081 - direct to the Artifactory service API running on Tomcat (for better performance)
The JFrog Router does not currently support binding to a specific address.
Tomcat, however, can be bound by setting address="127.0.0.1" on the relevant connector.
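As a minimal sketch, the connector in Artifactory's bundled Tomcat server.xml would gain the address attribute (the file location and the other connector attributes vary by version and install layout, so treat this as illustrative only):

```xml
<!-- Bind the direct Artifactory service port to loopback only.
     Attribute set trimmed; keep whatever attributes your
     installation's connector already has. -->
<Connector port="8081" address="127.0.0.1" maxThreads="200"/>
```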
Your best bet would be to close all ports on the server running Artifactory and allow inbound traffic only to the web server's port. This is best practice anyway for security-aware systems.
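As a sketch, on a firewalld-based host where the web server (reverse proxy) serves HTTPS on the same machine, that amounts to allowing only the proxy's port (service names and zones here are assumptions; adjust to your environment):

```shell
firewall-cmd --permanent --add-service=https   # the web server's port
firewall-cmd --reload
# Ports 8081/8082 stay closed to the outside; the reverse proxy
# reaches Artifactory over localhost.
```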
IMPORTANT:
If using other JFrog products like JFrog Xray or JFrog Pipelines, they rely on direct access to the Artifactory router, so your security rules should take that into consideration.
You can find a map of all JFrog platform ports on the official Wiki page.

Related

Euca 5.0 Enable SSL with Combined CLC and Cluster Controller?

I have completed an automated ansible install and have most of the wrinkles worked out.
All of my services except the Nodes are running on a single box over non-secure HTTP. Although I specified 443 in my inventory, I now see that does not imply an HTTPS configuration, so I have non-secure API endpoints listening on 443.
Is there any way around the requirements of operating CLC and Cluster Controller on different hardware as described in the SSL howto: https://docs.eucalyptus.cloud/eucalyptus/5/admin_guide/managing_system/bps/configuring_ssl/
I've read that how-to and can only guess that installing certs on the CLC messes up the Cluster Controller keys but I don't fully grasp it. Am I wasting my time trying to find a workaround or can I keep these services on the same box and still achieve SSL?
When you deploy eucalyptus using the ansible playbook a script will be available:
# /usr/local/bin/eucalyptus-cloud-https-import --help
Usage:
eucalyptus-cloud-https-import [--alias ALIAS] [--key FILE] [--certs FILE]
which can be used to import a key and certificate chain from PEM files.
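For example, a hypothetical invocation with an existing key and certificate chain already on disk (the alias and file paths are illustrative, not defaults):

```shell
eucalyptus-cloud-https-import --alias cloud \
  --key /etc/pki/tls/private/cloud.key \
  --certs /etc/pki/tls/certs/cloud-chain.pem
```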
Alternatively you can follow the manual steps from the documentation that you referenced.
It is fine to use HTTPS with all components on a single host, the documentation is out of date.
Eucalyptus will detect if an HTTP(S) connection is using TLS (SSL) and use the configured certificate when appropriate.
It is recommended to use the ansible playbook certbot / Let's Encrypt integration for the HTTPS certificate when possible.
When manually provisioning certificates, wildcards can be used (*.DOMAIN, *.s3.DOMAIN) so that all services and S3 buckets are included. If a wildcard certificate is not possible, the certificate should include the service endpoint names (autoscaling, bootstrap, cloudformation, ec2, elasticloadbalancing, iam, monitoring, properties, route53, s3, sqs, sts, swf).
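As a sketch of the wildcard approach, a self-signed certificate covering both patterns can be generated with OpenSSL 1.1.1+ (example.com is a placeholder domain; for production you would submit an equivalent CSR to a CA instead):

```shell
# Generate a key and a self-signed cert whose SANs cover all
# service endpoints plus S3 bucket virtual hosts.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout cloud.key -out cloud.crt -days 365 \
  -subj "/CN=*.example.com" \
  -addext "subjectAltName=DNS:*.example.com,DNS:*.s3.example.com"

# Inspect the resulting subjectAltName extension.
openssl x509 -in cloud.crt -noout -text
```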

wso2 active-active all-in-one config load balancer with IIS not NGINX

I have 3 services. One service is for the wso2 database. I am using wso2 pattern 2 (active-active all-in-one), so I have wso2-am 2.6.0 and wso2-am-analytics 2.6.0 on both of the remaining services. I want these two services to work active-active.
I configured my services according to this . In part 2 (configure load balancer), I don't want to use NGINX; I want to use IIS instead. I tried to configure IIS locally according to this , defining my two services (their IPs) in a server farm in IIS. I am not sure my configuration is right, and I don't know how to test it. How should the WSO2 host names be set in IIS?
Is it possible to use IIS instead of NGINX for load balancing in wso2?

Does Artifactory provide certificate-based authentication for a Docker registry

I am looking for certificate-based authentication instead of username and password when logging in to the Artifactory Docker registry.
The answer is that Artifactory does not have this capability. However, since you will have to put a reverse proxy in front of Artifactory to work with Docker anyway, you may be able to implement it there.
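As an illustration of doing it at the proxy layer, a minimal nginx sketch that requires a valid client certificate before proxying to Artifactory (the host name, file paths, ports, and the docker-local repository name are all assumptions):

```nginx
server {
    listen 443 ssl;
    server_name docker.example.com;            # hypothetical registry host

    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # CA that signed the client certificates; connections without
    # a valid client cert are rejected before reaching Artifactory.
    ssl_client_certificate /etc/nginx/ssl/client-ca.crt;
    ssl_verify_client on;

    client_max_body_size 0;                    # allow large image layers

    location / {
        proxy_pass http://localhost:8081/artifactory/api/docker/docker-local/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```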

WSO2 ESB 4.8.1 Clustering

Is it possible to run one ESB node in a dual role as both worker and manager?
I'm using wso2 ESB 4.8.1 and nginx as load balancer.
This is pretty easy. This is what you have to do.
Forget about nginx for now and set up the ESB cluster. Let's say a cluster with one manager and one worker. I think you will be able to get it done by following the instructions here. Instead of the WSO2 ELB mentioned in the doc, you are going to use nginx. Instead of the ELB, you can set the management and worker nodes as the well-known members, i.e. in both nodes, you set both nodes as the well-known members.
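As an illustration of the well-known-member setup, the clustering section of each node's axis2.xml might look like the following (the IPs, domain name, and ports are assumptions; the Tribes agent is what Carbon 4.x ships with):

```xml
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent"
            enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.esb.domain</parameter>
    <parameter name="localMemberHost">10.0.0.1</parameter>
    <parameter name="localMemberPort">4000</parameter>
    <!-- Both nodes list both nodes as well-known members. -->
    <members>
        <member>
            <hostName>10.0.0.1</hostName>
            <port>4000</port>
        </member>
        <member>
            <hostName>10.0.0.2</hostName>
            <port>4000</port>
        </member>
    </members>
</clustering>
```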
Once you have the cluster working, you should be able to send requests to an artifact deployed to both nodes separately. The difference between the manager node and the worker node is that the manager node is the only one that commits to the SVN repo. So, when you deploy new artifacts, you should deploy them through the manager node.
Now you have to configure two sites in nginx. Let's assume you decided to use esbmgt.mydomain.com for the management node and esb.mydomain.com for the worker. In esbmgt's upstream, you only mention the manager node, and you route the requests to port 9443 of that node. In esb's upstream, you mention both nodes, and the requests are routed to ports 8280 (HTTP) and 8243 (HTTPS). That's because the ESB serves requests on those ports, while the UI is exposed via 9443 (HTTPS).
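A minimal nginx sketch of those two sites (node IPs are assumptions, and SSL certificate directives are omitted for brevity):

```nginx
upstream esbmgt {
    server 10.0.0.1:9443;          # manager node only
}

upstream esb {
    server 10.0.0.1:8280;          # both nodes serve mediation traffic
    server 10.0.0.2:8280;
}

server {
    listen 443 ssl;
    server_name esbmgt.mydomain.com;
    # ssl_certificate / ssl_certificate_key go here
    location / {
        proxy_pass https://esbmgt; # management UI runs on 9443 (https)
    }
}

server {
    listen 80;
    server_name esb.mydomain.com;
    location / {
        proxy_pass http://esb;     # add an analogous 443 server for 8243
    }
}
```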
I hope the above information will help you.

Configure OpenStack nova with remote Bind Server

How can we configure OpenStack to use and dynamically update a remote BIND DNS server?
This is not currently supported. There is a DNS driver layer, but the only driver at the moment is for LDAP-backed PowerDNS. I have code for dynamic DNS updates (https://review.openstack.org/#/c/25194/), but have had trouble getting it landed because we need to fix eventlet monkey patching first.
So it's in progress, but you probably won't see it until Havana is released.
OpenStack relies on dnsmasq internally.
I am not aware of any way to integrate an external BIND server, or of any plans to do that, or even a reason to do that.
Check out Designate (https://docs.openstack.org/developer/designate/)
This could be what you are looking for:
Designate provides DNSaaS services for OpenStack:
- REST API for domain & record management
- Multi-tenant support
- Integrated with Keystone for authentication
- Framework in place to integrate with Nova and Neutron notifications (for auto-generated records)
- Support for PowerDNS and Bind9 out of the box
