I am a configuration management and automation expert. I have been tasked with creating WebLogic domains for SOA silently. The document I have to follow is:
http://docs.oracle.com/cd/E23943_01/core.1111/e12036/create_domain.htm#SOEDG532
I created the domain silently using a response file; however, Oracle confirmed that JDBC cannot be configured this way and that only WLST scripts can be used.
This is what I need to set up using a WLST script:
Products
Basic WebLogic Server Domain
Oracle Enterprise Manager
Oracle WSM Policy Manager
Oracle JRF
Component schema
component schema - OWSM MDS Schema
schema owner - DEV_MDS
username - abc
password - xyz
service listener - XXYYZZ
port - 1554
Admin Server
Name: AdminServer
Listen Address: ADMINVHN
Listen Port: 7001
SSL listen port: N/A
Managed Servers
WLS_WSM1 xx.xx.xx.xx 7010
WLS_WSM2 xx.xx.xx.xx 7010
Cluster
WSM-PM_Cluster
Assign WLS_WSM1 and WLS_WSM2 to WSM-PM_Cluster
Node Manager machines
SOAHOST1 xx.xx.xx.xx
SOAHOST2 xx.xx.xx.xx
ADMINHOST xx.xx.xx.xx
Assign Servers to machines
SOAHOST1: WLS_WSM1
SOAHOST2: WLS_WSM2
ADMINHOST: AdminServer
Target deployments to clusters or servers: the wsm-pm application is targeted to WSM-PM_Cluster only.
All other deployments are targeted to the AdminServer.
All JDBC system resources should be targeted to both the Admin Server and WSM-PM_Cluster.
Can someone help me with a sample script that I can build upon? I have no experience with WLST.
I am in the same situation. The best script that I found is in this tutorial: http://middlewaresnippets.blogspot.co.uk/2014/08/set-up-12c-soabpm-infrastructure.html
This guy automates everything with Linux bash, so sometimes the problem is understanding what he is doing. I hope this helps you.
If you find a better solution, let me know.
I found a quite reasonable set of Chef cookbooks for WLS in this GitHub repository.
Perhaps you will find it useful.
Best regards,
Jarek
You can look at this WLST-based SOA domain creation, which was executed on the latest version: Blog link
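As a rough starting point (not a complete or tested script), a minimal WLST offline sketch for the layout described in the question might look like the following. The template jar names, Middleware home paths, data source MBean path, database host, and passwords are placeholders/assumptions that you will need to adapt to your installation:

# Minimal WLST offline sketch -- all paths, template names, and passwords are placeholders
readTemplate('/u01/app/oracle/product/fmw/wlserver_10.3/common/templates/domains/wls.jar')
addTemplate('/u01/app/oracle/product/fmw/oracle_common/common/templates/applications/oracle.wsmpm_template_11.1.1.jar')
# add the EM and JRF templates the same way; check the exact jar names under oracle_common/common/templates

# Admin server
cd('/Servers/AdminServer')
set('ListenAddress', 'ADMINVHN')
set('ListenPort', 7001)

# Managed servers (listen addresses are the SOAHOST IPs from the question)
cd('/')
create('WLS_WSM1', 'Server')
cd('/Servers/WLS_WSM1')
set('ListenAddress', 'xx.xx.xx.xx')
set('ListenPort', 7010)
cd('/')
create('WLS_WSM2', 'Server')
cd('/Servers/WLS_WSM2')
set('ListenAddress', 'xx.xx.xx.xx')
set('ListenPort', 7010)

# Cluster, machines, and server-to-machine assignments
cd('/')
create('WSM-PM_Cluster', 'Cluster')
assign('Server', 'WLS_WSM1,WLS_WSM2', 'Cluster', 'WSM-PM_Cluster')
create('SOAHOST1', 'UnixMachine')
create('SOAHOST2', 'UnixMachine')
create('ADMINHOST', 'UnixMachine')
assign('Server', 'WLS_WSM1', 'Machine', 'SOAHOST1')
assign('Server', 'WLS_WSM2', 'Machine', 'SOAHOST2')
assign('Server', 'AdminServer', 'Machine', 'ADMINHOST')

# OWSM MDS data source (the mds-owsm resource comes from the wsm-pm template; verify the name and path in your environment)
cd('/JDBCSystemResource/mds-owsm/JdbcResource/mds-owsm/JDBCDriverParams/NO_NAME_0')
set('URL', 'jdbc:oracle:thin:@//dbhost:1554/XXYYZZ')
set('PasswordEncrypted', 'xyz')
cd('Properties/NO_NAME_0/Property/user')
set('Value', 'DEV_MDS')

# Target wsm-pm to the cluster and the JDBC resource to both the admin server and the cluster
cd('/')
assign('AppDeployment', 'wsm-pm', 'Target', 'WSM-PM_Cluster')
assign('JDBCSystemResource', 'mds-owsm', 'Target', 'AdminServer,WSM-PM_Cluster')

# Administrator password, then write the domain
cd('/Security/base_domain/User/weblogic')
cmo.setPassword('welcome1')
setOption('OverwriteDomain', 'true')
writeDomain('/u01/app/oracle/admin/domains/soa_domain')
closeTemplate()

You would save this as something like create_wsm_domain.py and run it offline with oracle_common/common/bin/wlst.sh create_wsm_domain.py, then start Node Manager and the AdminServer as usual.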
You can use these scripts; they combine Ansible and WLST to create a SOA domain.
https://github.com/textanalyticsman/ansible-soa
Furthermore, you should try this one, which combines WebLogic Deploy Tooling and Ansible.
https://github.com/textanalyticsman/ansible-soa-wldt
I have completed an automated ansible install and have most of the wrinkles worked out.
All of my services except the Nodes are running on a single box over non-secure HTTP. Although I specified 443 in my inventory, I see now that this does not imply an HTTPS configuration, so I have non-secure API endpoints listening on 443.
Is there any way around the requirement of operating the CLC and Cluster Controller on different hardware, as described in the SSL how-to: https://docs.eucalyptus.cloud/eucalyptus/5/admin_guide/managing_system/bps/configuring_ssl/
I've read that how-to and can only guess that installing certs on the CLC messes up the Cluster Controller keys but I don't fully grasp it. Am I wasting my time trying to find a workaround or can I keep these services on the same box and still achieve SSL?
When you deploy Eucalyptus using the Ansible playbook, a script will be available:
# /usr/local/bin/eucalyptus-cloud-https-import --help
Usage:
eucalyptus-cloud-https-import [--alias ALIAS] [--key FILE] [--certs FILE]
which can be used to import a key and certificate chain from PEM files.
Alternatively you can follow the manual steps from the documentation that you referenced.
It is fine to use HTTPS with all components on a single host, the documentation is out of date.
Eucalyptus will detect if an HTTP(S) connection is using TLS (SSL) and use the configured certificate when appropriate.
It is recommended to use the ansible playbook certbot / Let's Encrypt integration for the HTTPS certificate when possible.
When manually provisioning certificates, wildcards can be used (*.DOMAIN *.s3.DOMAIN) so that all services and S3 buckets are included. If a wildcard certificate is not possible then the certificate should include the service endpoint names if possible (autoscaling, bootstrap, cloudformation, ec2, elasticloadbalancing, iam, monitoring, properties, route53, s3, sqs, sts, swf)
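For example, with a manually provisioned key and certificate chain in PEM format (the alias and file paths below are only placeholders), the import could look like:
# /usr/local/bin/eucalyptus-cloud-https-import --alias cloud-https --key /etc/pki/tls/private/cloud.example.com.key --certs /etc/pki/tls/certs/cloud.example.com-chain.pem
The services should then present the imported certificate on HTTPS connections.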
I'm very new to Docker (in fact, I've only been using it for one day), so maybe I'm misunderstanding some basic concept, but I couldn't find a solution myself.
Here's the problem. I have an ASP.NET Core server application on a Windows machine. It uses MongoDB as a datastore. Everything works fine. I decided to pack all this stuff into Docker containers and put it on a Linux (Ubuntu Server 18.04) server. I've packed Mongo into a container, so now its published IP:PORT value is 192.168.99.100:32772.
I've hardcoded this address into my ASP.NET server and also packed it into a container (IP 192.168.99.100:5000).
Now if I run my server and mongo containers together on my Windows machine, they work just fine. The server connects to a container with the database and can do whatever it needs.
But when I transfer both containers to Ubuntu and run them, the server cannot connect to the database because this IP address is not available there. I've been googling for a few hours to find a solution, but I'm still struggling with it.
What is the correct way to handle these IP addresses? Is it possible to set an IP that will be the same for a container regardless of environment?
I recommend using docker-compose for the purpose you described above.
With docker-compose, you can access the database via a service name instead of an IP (which potentially is not available on another system). Here are two links to get started:
https://docs.docker.com/compose/gettingstarted/
https://docs.docker.com/compose/compose-file/
Updated answer (10.11.2019)
Here is a concrete example for your ASP.NET app:
docker-compose.yaml
version: "3"
services:
  frontend:
    image: fqdn/aspnet:tag
    ports:
      - 8080:80
    links:
      - database
  database:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: "mydatabase"
      MONGO_INITDB_ROOT_USERNAME: "root"
      MONGO_INITDB_ROOT_PASSWORD: "example"
    volumes:
      - myMongoVolume:/data/db
volumes:
  myMongoVolume: {}
From the frontend container, you can reach the MongoDB container via the service name "database" (instead of an IP). Due to the link definition in the frontend service, the frontend service will start after the linked service (database).
Through the volume definition, the Mongo database will be stored in a volume that persists independently of the container lifecycle.
Additionally, I assume you want to reach the ASP.NET application via the host IP. I do not know the port that you expose in your application, so I assume the default port 80. Via the ports section in the frontend service, we define that container port 80 is exposed as port 8080 on the host IP, i.e. you can open your browser, enter your host IP and port 8080 (e.g. 127.0.0.1:8080 for localhost), and reach your application.
With docker-compose installed, you can start your app, which consists of your frontend and database services, via
docker-compose up
Available command options for docker-compose can be found here
https://docs.docker.com/compose/reference/overview/
Install instructions for docker-compose
https://docs.docker.com/compose/install/
Updated answer (10.11.2019, v2)
From the comment section
Keep in mind that you need to connect via the service name (e.g. database) and the correct port. For MongoDB, that port is 27017. That would translate to database:27017 in your frontend config.
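For example, with the sample credentials from the compose file above (adjust names and credentials to your setup), the connection string in the frontend configuration could look something like:
mongodb://root:example@database:27017/mydatabase?authSource=admin
authSource=admin is needed here because the root user created via MONGO_INITDB_ROOT_USERNAME lives in the admin database.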
Q: Will Mongo also be available from the outside in this case?
A: No. Since the service does not contain any port definition, the database itself will not be directly reachable. From a security standpoint, this is preferable.
Q: Could you explain this?
volumes:
  myMongoVolume: {}
A: In the service definition for your database service, we have specified a volume to store the database itself, to make the data independent of the container lifecycle. However, just by defining a volume in the service section, the volume will not be created. Through the definition in the volumes section, we create the volume myMongoVolume with the default settings (indicated by {}). If you would like to customize your volume, you can do so in the volumes section of your docker-compose.yaml. More information regarding volumes can be found here:
https://docs.docker.com/compose/compose-file/#volume-configuration-reference
For example, if you would like to use a specific storage driver for your volume or use external storage.
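As a sketch (the host path below is just a placeholder), a customized volume that bind-mounts a host directory through the local driver could look like:
volumes:
  myMongoVolume:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /srv/mongo-data   # this directory must already exist on the host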
I am looking for some assistance configuring BIND to host a DNS server on my web server.
I recently acquired a dedicated server running Ubuntu 14.04 LTS, and I already have Nginx, PHP-FPM, and MariaDB installed and working perfectly. My knowledge of Postfix and Dovecot is slim, so I followed this guide: A Mailserver on Ubuntu 14.04: Postfix, Dovecot, MySQL. The good news is that I've got mail coming in and going out as expected, but I have come across another issue, which is that some ISPs and providers are rejecting the mail since there are no PTR records.
So, I'm assuming I need to install and configure BIND to set up DNS and a PTR record so that my mail will reach its destinations. I've tried Google and some tutorials, but none of them seem clear for my purpose.
Installing a control panel, or one of those all-in-one scripts, is out of the question since I already have the web server configured. Another issue is that some of them don't work with Nginx or use a different PHP configuration. Plus, I want to learn how to do this on my own.
You don't have to install BIND. Whoever has reverse DNS authority for your IP block will typically create a reverse name for you. Just request a reverse pointer (PTR) record with the mail domain name for your IP.
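Once the PTR record is in place, you can verify it with something like dig -x YOUR.SERVER.IP +short (or nslookup YOUR.SERVER.IP), which should return the mail host name you requested, e.g. mail.yourdomain.tld.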
How can we configure OpenStack to use and dynamically update a remote BIND DNS server?
This is not currently supported. There is a DNS driver layer, but the only driver at the moment is for LDAP-backed PowerDNS. I have code for dynamic DNS updates (https://review.openstack.org/#/c/25194/), but I have had trouble getting it landed because we need to fix eventlet monkey patching first.
So it's in progress, but you probably won't see it until Havana is released.
OpenStack relies on dnsmasq internally.
I am not aware of any way to integrate an external BIND server, or of plans to do that, or even of a reason to do that.
Check out Designate (https://docs.openstack.org/developer/designate/)
This could be what you are looking for:
Designate provides DNSaaS services for OpenStack:
- REST API for domain & record management
- Multi-tenant support
- Integrated with Keystone for authentication
- Framework in place to integrate with Nova and Neutron notifications (for auto-generated records)
- Support for PowerDNS and Bind9 out of the box
I have a two-node BTS2010 group with a separate SQL Server hosting the BizTalk databases, including the SSODB: Biz01, Biz02, and Sql01.
There seems to be something not right with the SSO config but I'm not sure how to resolve it.
When I run ssoconfig -status on Biz02, all looks good; it tells me that the SSO server is Biz02 and the SQL Server is Sql01, plus a load of other stuff. However, when I run the same command on Biz01, I get the message: "Error 0xC0002A0F: Could not contact the SSO server 'Sql01'. Check that SSO is configured and that the SSO service is running on that server".
I'm not clear on what Biz01 is trying to do here. Is it trying to reach the EntSSO Windows service on Biz02 via an RPC call, before ultimately attempting to retrieve config info from Sql01?
I have checked that the ENTSSO service is running on Biz01 and Biz02, and that the RPC service is running on each of the three servers.
Can anyone help advise what further steps I can take to determine the root cause of this configuration problem?
Many thanks
Rob.
I'm not sure if you have your servers clustered or not, but I've run into something similar before within a cluster. Your SSO name should be your network name and not the individual computer's name. Here's a post about the issue I had. Hope it helps.