I'm trying to install IBM BPM 8.5.6 in a Linux environment with an Oracle database.
The steps I followed were:
1. Installed IBM Installation Manager using the BPM PFS.
2. Installed WAS and BPM Process Center using Installation Manager.
3. Created 3 Oracle schemas: one each for the shared DB, the Process Server, and the Performance Server.
4. Configured the installation using the sample single-cluster Process Center configuration file provided by IBM, via the BPMConfig -create option.
The installation was successful and I could see all the tables being created. Then I started it using the BPMConfig -start option; that too completed successfully.
I didn't change any ports, so it should be using all the default ports. Afterwards, when I try to access the console at http://servername:9080/ProcessAdmin or http://servername:9080/ProcessCenter (or anything else), I get a 404 error:
Error 404: com.ibm.ws.webcontainer.servlet.exception.NoTargetForURIException: No target servlet configured for uri: /ProcessAdmin
Do I have to do anything else? What is the starting point or default URL to get to Process Portal or the admin console? The WAS admin console is working fine.
Any help is appreciated. Thanks.
Since you probably used a custom installation, you have to initialize the Process Server data by running the following command (the .sh script on Linux, or the .bat equivalent on Windows):
bootstrapProcessServerData.sh -clusterName cluster_name
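For illustration, here's roughly how that command is run on Linux; the install root, profile name, and cluster name below are assumptions based on a default single-cluster sample topology, so substitute your own values:
cd /opt/ibm/BPM/v8.5/profiles/DmgrProfile/bin
./bootstrapProcessServerData.sh -clusterName SingleCluster
After it completes, restart the environment (e.g. BPMConfig -stop followed by BPMConfig -start) and then try the /ProcessAdmin and /ProcessCenter URLs again.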
What am I trying to achieve?
1. Build a fault-tolerant WordPress website.
2. I tried installing the web server in one AZ with a Multi-AZ RDS deployment. It was quite successful.
The setup is as follows:
AZ-1 public subnet - launched one EC2 instance and installed httpd, PHP, PHP-MySQL, and WordPress.
AZ-1 private subnet - launched a Multi-AZ RDS instance.
Problem encountered:
I wanted to expand to another availability zone for fault tolerance, so I launched another EC2 instance in a different availability zone (AZ-2) and installed httpd, PHP, PHP-MySQL, and WordPress.
I did NOT launch another RDS instance. I wanted to connect to the RDS in AZ-1 because it's already Multi-AZ, so I only needed the fault-tolerance setup for the web server. I was able to install WordPress on the AZ-2 public subnet, but I was unable to connect to the RDS (MySQL) endpoint in AZ-1.
I'm getting the error message:
"Already installed. You appear to have already installed WordPress. To reinstall please clear your old database tables first."
"Already installed.You appear to have already installed WordPress. To
reinstall please clear your old database tables first".
This means your second web server can successfully connect to the RDS instance. Instead of trying to "install" WordPress, just copy all your WordPress files from the first web server and you'll be fine.
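A minimal sketch of that copy, assuming Apache's default document root on Amazon Linux and that the AZ-2 instance can reach the AZ-1 instance over SSH (the user, host, and paths are placeholders):
# run on the AZ-2 web server: pull the WordPress files (including wp-config.php) from the AZ-1 server
rsync -avz ec2-user@WEB1_PRIVATE_IP:/var/www/html/ /var/www/html/
sudo service httpd restart
Because wp-config.php already points at the Multi-AZ RDS endpoint, the second server then serves the same site without running the installer again.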
Is it possible to add Presto interpreter to Zeppelin on AWS EMR 4.3 and if so, could someone please post the instructions? I have Presto-Sandbox and Zeppelin-Sandbox running on EMR.
There's no official Presto interpreter for Zeppelin, and the conclusion of the Jira ticket that was raised is that one isn't necessary, because you can just use the JDBC interpreter:
https://issues.apache.org/jira/browse/ZEPPELIN-27
I'm running a later EMR release with Presto and Zeppelin, and the default set of interpreters doesn't include JDBC, but it can be installed by SSHing to the master node and running
sudo /usr/lib/zeppelin/bin/install-interpreter.sh --name jdbc
Even better is to use that as a bootstrap script.
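A minimal sketch of such a script; whether it can run as a true bootstrap action depends on the EMR release (bootstrap actions run before applications are installed), so treat it as something to run on the master node once Zeppelin is present:
#!/bin/bash
# Install the JDBC interpreter for Zeppelin, then restart Zeppelin so it picks the interpreter up.
sudo /usr/lib/zeppelin/bin/install-interpreter.sh --name jdbc
# The restart mechanism varies by EMR release; upstart is assumed here.
sudo stop zeppelin; sudo start zeppelin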
Then you can add a new interpreter in Zeppelin.
Click the login-name drop down in the top right of Zeppelin
Click Interpreter
Click +Create
Give it a name like presto; that means you'll use %presto as the directive on the first line of a paragraph in Zeppelin (or set it as the default interpreter).
The settings you need here are:
default.driver com.facebook.presto.jdbc.PrestoDriver
default.url jdbc:presto://<YOUR EMR CLUSTER MASTER DNS>:8889
default.user hadoop
Note that no password is provided, because the EMR environment should be using IAM roles, key pairs, etc. for authentication.
You will also need a dependency for the Presto JDBC driver jar. There are multiple ways to add dependencies in Zeppelin, but one easy way is via a Maven groupId:artifactId:version reference in the interpreter settings under Dependencies, e.g. under Artifact:
com.facebook.presto:presto-jdbc:0.170
Note that the version 0.170 corresponds to the version of Presto currently deployed on EMR, which will change in the future. You can see which version is being deployed to your cluster in the AWS EMR settings.
You can also get Zeppelin to connect directly to a catalog, or to a catalog and schema, by appending them to the default.url setting, as per the Presto docs for the JDBC driver:
https://prestodb.io/docs/current/installation/jdbc.html
For example, using Presto with a Hive metastore with a database called datakeep:
jdbc:presto://<YOUR EMR CLUSTER MASTER DNS>:8889/hive
OR
jdbc:presto://<YOUR EMR CLUSTER MASTER DNS>:8889/hive/datakeep
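With the catalog (or catalog and schema) baked into default.url like that, paragraphs can use shorter table references. For example (the table name some_table is only an illustration):
%presto
select count(*) from datakeep.some_table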
UPDATE Feb 2018
EMR 5.11.1 is using Presto 0.187, and there is an issue in the way the Zeppelin interpreter provides properties to the Presto driver, causing an error something like: Unrecognized connection property 'url'
Currently the only solutions appear to be using an older version in the artifact, or manually uploading a patched Presto driver.
See https://github.com/prestodb/presto/issues/9254 and https://issues.apache.org/jira/browse/ZEPPELIN-2891
In my case, using an older driver reference (apparently it must be older than 0.180), e.g. com.facebook.presto:presto-jdbc:0.179, did not work; Zeppelin gave me an error about not being able to download dependencies. An odd error, probably something to do with Zeppelin's local Maven repo not containing it; I'm not sure, and I gave up on that approach.
I can confirm that patching the driver works.
(Assuming you have java & maven installed)
Clone the presto github repo
Checkout the release tag e.g. git checkout 0.187
Make the edits as per that patch https://groups.google.com/group/presto-users/attach/1231343dbdd09/presto-jdbc.diff?part=0.1&authuser=0
Build the jar using mvn clean package
Copy the jar to the Zeppelin machine, somewhere the zeppelin user has permission to read.
In the interpreter, under the Dependencies - Artifacts section, instead of a maven reference use the absolute path to that jar file.
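For reference, a consolidated sketch of those steps; building only the presto-jdbc module and skipping tests are assumptions to keep the build small, and the diff itself is the one linked above:
git clone https://github.com/prestodb/presto.git
cd presto
git checkout 0.187
# apply the presto-jdbc.diff from the linked presto-users thread here, then:
mvn clean package -pl presto-jdbc -am -DskipTests
# the patched jar ends up under presto-jdbc/target/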
There appears to be an issue passing the user to the presto driver, so just add it to the "default.url" jdbc connection string as a url parameter, e.g.
jdbc:presto://<YOUR EMR CLUSTER MASTER DNS>:8889?user=hadoop
Up and running. Meanwhile, it might be worth considering Athena as an alternative to Presto, given it's serverless and is effectively just a fork of Presto. It does have the limitation of supporting external Hive tables only, and they must be created in Athena's own catalog (or now in the AWS Glue catalog, which is also restricted to external tables).
Chris Kang has a good post on doing that in spark-shell, http://theckang.com/2016/spark-with-presto/. I don't see why you wouldn't be able to do that in Zeppelin. Another helpful post is on making sure you have the right Java version on EMR, http://queirozf.com/entries/update-java-to-jdk-8-on-amazon-elastic-mapreduce. The current Presto version as of writing only runs on Java 8. I hope that sets you in the right direction.
How can I install Elastic Kibana (which is just a batch file) as a windows service?
It probably needs to depend on the ElasticSearch process as well (this assumes I'm running it on the same server)
The following command will create the service with a name of "ElasticSearch Kibana 4.0.1" and make it depend on ElasticSearch so it doesn't try to start too soon.
sc create "ElasticSearch Kibana 4.0.1" binPath= "{path to batch file}" depend= "elasticsearch-service-x64"
The kibana.bat file delivered with Kibana 4.6.1 was not suited to direct use with sc create for me (the service failed to start).
I used nssm like this:
nssm install kibana461
UI: choose kibana.bat as Application Path
UI: select a log file to write to on "I/O" tab for stdout and stderr
UI: on the "Dependencies" tab enter elasticsearch241 (or whatever you called it)
UI: "Install Service"
sc start kibana461
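The same configuration can also be scripted rather than done through the UI; the paths and service names below are assumptions matching the steps above:
nssm install kibana461 "C:\kibana-4.6.1-windows-x86\bin\kibana.bat"
nssm set kibana461 AppStdout "C:\kibana-4.6.1-windows-x86\log\kibana-stdout.log"
nssm set kibana461 AppStderr "C:\kibana-4.6.1-windows-x86\log\kibana-stderr.log"
nssm set kibana461 DependOnService elasticsearch241
sc start kibana461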
Rather than creating a dependency, I made a delayed start.
First use the sc command (from jhilden).
sc create "Elasticsearch Kibana 4.4.2" binPath= "C:\kibana-4.4.2-windows\bin\kibana.bat"
Open services.msc and find your new service.
Right click the service and select Properties.
Change to Automatic (Delayed Start).
If you haven't already, change Elasticsearch to Automatic.
This will ensure Elasticsearch starts when the machine starts, and Kibana starts sometime soon after (approximately 2 minutes, according to this question).
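The same settings can be applied from an elevated command prompt instead of services.msc; the Elasticsearch service name below is an assumption, so check yours with sc query:
sc config "Elasticsearch Kibana 4.4.2" start= delayed-auto
sc config elasticsearch-service-x64 start= auto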
I found this video very helpful.
Use NSSM (Non-sucking Service Manager) to install Kibana as a Service.
https://www.youtube.com/watch?v=L-0A2cqTn-w
I have set up the OpenCMIS InMemory repository on Tomcat 7 and am trying to access it using the CMIS Workbench. As per the instructions, I am using
http://localhost:8080/inmemory/atom
as the binding URL and leaving everything else at the defaults, but I am getting a "Service Unavailable" error, with the following warning in the log4j.log file under tomcat/bin:
WARN [localhost-startStop-1] org.apache.chemistry.opencmis.inmemory.server.InMemoryServiceFactoryImpl: Resource file with type definitions types.xml could not be found, no types will be created.
Next I tried to connect to my local Alfresco 4.2 repository using the same workbench, with the AtomPub URL
http://localhost:8080/alfresco/cmisatom
and admin credentials, but again I get the same error.
Try the following:
Go to http://www.alfresco.com/cmis
There is a link to the workbench: http://cmis.alfresco.com/opencmis/workbench.jnlp
Use the Alfresco cmisatom URL: http://cmis.alfresco.com/cmisatom
Log in as admin with password admin.
This one works, and you can even try your local repo.
Open these URLs in a web browser. If you see a "Service Unavailable" error then it's not a CMIS Workbench problem.
You can also try a nightly build of the CMIS Workbench [1], which provides more detailed error messages.
[1] https://builds.apache.org/job/Chemistry%20-%20OpenCMIS%20-%20Workbench/ws/chemistry-opencmis-workbench/target/
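If you prefer the command line, a quick way to check whether an AtomPub endpoint is reachable at all is to fetch its service document with curl (the credentials here are the Alfresco defaults mentioned above):
curl -u admin:admin http://localhost:8080/alfresco/cmisatom
If that returns an XML service document, the repository side is fine and the problem is in the Workbench configuration; if it returns an error page, the repository itself isn't serving CMIS.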
Can you offer some pointers on the best way / best practices to install a web application on Unix systems?
Such as:
where to place the app and its databases and so forth,
how to configure it to be secure and easy to back up,
etc.
For example, one suggestion I know of is to set up a unique user for each app.
The app in question is JIRA on FreeBSD, but more general suggestions are also welcome.
Here's what I did for my JIRA install on Fedora Linux:
Created a separate user to run JIRA
Installed JIRA under the JIRA user's home directory
Made a soft link "/home/jira/jira" pointing to the JIRA installation directory (the directory as installed contains the version number, something like /home/jira/atlassian-jira-enterprise-4.0-standalone)
Created an /etc/init.d script to run JIRA as a service, and added it to chkconfig so that it runs at system startup - see these instructions
Created a MySQL database for JIRA on a separate data volume
Set up scheduled XML backups via the JIRA admin interface
Set up a remote backup script to dump the MySQL database and copy the DB dump and XML backups to a separate backup server (a sketch of such a script appears at the end of this answer)
In order to avoid having to open extra firewall ports, set up an Apache virtual host "jira.myhost.com" and used mod_proxy to forward requests to the JIRA URL (a sketch of such a vhost follows this list).
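A minimal sketch of such a vhost, assuming JIRA is listening on its default port 8080 on the same host and that mod_proxy and mod_proxy_http are enabled (hostname and port are placeholders):
<VirtualHost *:80>
    ServerName jira.myhost.com
    ProxyPreserveHost On
    # forward everything to the local JIRA instance
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>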
I set everything up on a virtual machine (an Amazon EC2 instance in my case) and cloned the machine image so that I can easily restart a new instance if the current one goes down.
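A rough sketch of the remote backup script mentioned in the list above, assuming MySQL credentials are available via ~/.my.cnf, JIRA's XML backups land in the export directory of the JIRA home, and SSH keys are set up for the backup host (all names and paths are placeholders):
#!/bin/sh
# Dump the JIRA database, then ship the dump and the XML backups to the backup server.
STAMP=$(date +%Y%m%d)
mysqldump jiradb | gzip > /home/jira/backups/jiradb-$STAMP.sql.gz
rsync -az /home/jira/backups/ /home/jira/jira-home/export/ backup@backuphost:/backups/jira/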