Cassandra: another frame size issue - encryption

I know this kind of issue with Apache Cassandra has come up before, for example:
Pentaho Frame size (17727647) larger than max length (16384000)! and
thrift_max_message_length_in_mb not recognized Cassandra
but my problem is a little bit different. I use Cassandra v2.0.7 and I want to insert data from Pentaho Kettle (5.1) into Cassandra. I have to enable encryption (SSL), so I changed cassandra.yaml:
client_encryption_options:
enabled: true
keystore: /usr/cassandra/cassandra/conf/.keystore
keystore_password: password
require_client_auth: true
# Set trustore and truststore_password if require_client_auth is true
truststore: /usr/cassandra/cassandra/conf/.truststore
truststore_password: password
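Side note: with require_client_auth: true, the client JVM has to present its own certificate too, and it needs a truststore containing the node's certificate. A minimal sketch of the usual setup with keytool and the standard JSSE system properties; all paths, aliases and passwords below are placeholders, and the PENTAHO_DI_JAVA_OPTIONS hook should be verified against your spoon.sh/kitchen.sh:
# Export the node's certificate and import it into a client truststore
# (alias "cassandra" and the passwords are placeholders):
keytool -export -alias cassandra -keystore /usr/cassandra/cassandra/conf/.keystore -file node.cer -storepass password
keytool -import -noprompt -alias cassandra -file node.cer -keystore client.truststore -storepass password
# Hand both stores to the client JVM via the standard JSSE properties:
export PENTAHO_DI_JAVA_OPTIONS="-Djavax.net.ssl.trustStore=/path/to/client.truststore -Djavax.net.ssl.trustStorePassword=password -Djavax.net.ssl.keyStore=/path/to/client.keystore -Djavax.net.ssl.keyStorePassword=password"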
Now, there is a strange situation: when I disable encryption (enabled: false) and insert data through Pentaho, everything works well. But when I enable encryption I get this:
2015/01/05 17:42:38 - Cassandra Output.0 - ERROR (version 5.1.0.0, build 1 from 2014-06-19_19-02-57 by buildguy) : A problem occurred during initialization of the step
2015/01/05 17:42:38 - Cassandra Output.0 - ERROR (version 5.1.0.0, build 1 from 2014-06-19_19-02-57 by buildguy) : org.apache.thrift.transport.TTransportException: Frame size (352518400) larger than max length (16384000)!
2015/01/05 17:42:38 - Cassandra Output.0 - at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:137)
2015/01/05 17:42:38 - Cassandra Output.0 - at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
2015/01/05 17:42:38 - Cassandra Output.0 - at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
2015/01/05 17:42:38 - Cassandra Output.0 - at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
2015/01/05 17:42:38 - Cassandra Output.0 - at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
2015/01/05 17:42:38 - Cassandra Output.0 - at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
2015/01/05 17:42:38 - Cassandra Output.0 - at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
2015/01/05 17:42:38 - Cassandra Output.0 - at org.apache.cassandra.thrift.Cassandra$Client.recv_set_cql_version(Cassandra.java:1855)
2015/01/05 17:42:38 - Cassandra Output.0 - at org.apache.cassandra.thrift.Cassandra$Client.set_cql_version(Cassandra.java:1842)
2015/01/05 17:42:38 - Cassandra Output.0 - at org.pentaho.cassandra.legacy.CassandraConnection.checkOpen(CassandraConnection.java:159)
2015/01/05 17:42:38 - Cassandra Output.0 - at org.pentaho.cassandra.legacy.CassandraConnection.setKeyspace(CassandraConnection.java:174)
2015/01/05 17:42:38 - Cassandra Output.0 - at org.pentaho.cassandra.legacy.LegacyKeyspace.setKeyspace(LegacyKeyspace.java:100)
2015/01/05 17:42:38 - Cassandra Output.0 - at org.pentaho.cassandra.legacy.CassandraConnection.getKeyspace(CassandraConnection.java:277)
2015/01/05 17:42:38 - Cassandra Output.0 - at org.pentaho.di.trans.steps.cassandraoutput.CassandraOutput.initialize(CassandraOutput.java:218)
2015/01/05 17:42:38 - Cassandra Output.0 - at org.pentaho.di.trans.steps.cassandraoutput.CassandraOutput.processRow(CassandraOutput.java:353)
2015/01/05 17:42:38 - Cassandra Output.0 - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
2015/01/05 17:42:38 - Cassandra Output.0 - at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at org.pentaho.di.trans.steps.cassandraoutput.CassandraOutput.processRow(CassandraOutput.java:356)
at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
at java.lang.Thread.run(Thread.java:745)
2015/01/05 17:42:38 - Cassandra Output.0 - ERROR (version 5.1.0.0, build 1 from 2014-06-19_19-02-57 by buildguy) : Unexpected error
2015/01/05 17:42:38 - Cassandra Output.0 - ERROR (version 5.1.0.0, build 1 from 2014-06-19_19-02-57 by buildguy) : java.lang.NullPointerException
2015/01/05 17:42:38 - Cassandra Output.0 - at org.pentaho.di.trans.steps.cassandraoutput.CassandraOutput.processRow(CassandraOutput.java:356)
2015/01/05 17:42:38 - Cassandra Output.0 - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
2015/01/05 17:42:38 - Cassandra Output.0 - at java.lang.Thread.run(Thread.java:745)
child index = 1, logging object : org.pentaho.di.core.logging.LoggingObject@47406edf parent=277afd35-cec6-4972-a572-a68a58ff9ae7
2015/01/05 17:42:38 - t_product_rb - ERROR (version 5.1.0.0, build 1 from 2014-06-19_19-02-57 by buildguy) : Errors detected!
2015/01/05 17:42:38 - t_product_rb - ERROR (version 5.1.0.0, build 1 from 2014-06-19_19-02-57 by buildguy) : Errors detected!
I insert the data with CQL3.
Now, I found that there are two relevant properties in cassandra.yaml:
-thrift_framed_transport_size_in_mb and
-thrift_max_message_length_in_mb <- but this one is not present in my file
I tried a few configurations with these properties: I added thrift_max_message_length_in_mb, then removed it again, and so on. But I got this error every time.
It looks as if encryption adds something BIG to my frame, but I don't know what or how.
Does somebody know how to fix this?
Some other info about this problem:
https://issues.apache.org/jira/browse/THRIFT-1324
https://issues.apache.org/jira/browse/THRIFT-1323
EDIT
I also noticed that when I change the port from 9160 to 9042 (to use CQL3) I get a different error.
2015/01/09 12:53:21 - Cassandra Output.0 - ERROR (version 5.1.0.0, build 1 from 2014-06-19_19-02-57 by buildguy) : A problem occurred during initialization of the step
2015/01/09 12:53:21 - Cassandra Output.0 - ERROR (version 5.1.0.0, build 1 from 2014-06-19_19-02-57 by buildguy) : org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
2015/01/09 12:53:21 - Cassandra Output.0 - at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.apache.cassandra.thrift.Cassandra$Client.recv_set_cql_version(Cassandra.java:1855)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.apache.cassandra.thrift.Cassandra$Client.set_cql_version(Cassandra.java:1842)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.pentaho.cassandra.legacy.CassandraConnection.checkOpen(CassandraConnection.java:159)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.pentaho.cassandra.legacy.CassandraConnection.setKeyspace(CassandraConnection.java:174)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.pentaho.cassandra.legacy.LegacyKeyspace.setKeyspace(LegacyKeyspace.java:100)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.pentaho.cassandra.legacy.CassandraConnection.getKeyspace(CassandraConnection.java:277)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.pentaho.di.trans.steps.cassandraoutput.CassandraOutput.initialize(CassandraOutput.java:218)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.pentaho.di.trans.steps.cassandraoutput.CassandraOutput.processRow(CassandraOutput.java:353)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
2015/01/09 12:53:21 - Cassandra Output.0 - at java.lang.Thread.run(Thread.java:745)
2015/01/09 12:53:21 - Cassandra Output.0 - Caused by: java.net.SocketTimeoutException: Read timed out
2015/01/09 12:53:21 - Cassandra Output.0 - at java.net.SocketInputStream.socketRead0(Native Method)
2015/01/09 12:53:21 - Cassandra Output.0 - at java.net.SocketInputStream.read(SocketInputStream.java:152)
2015/01/09 12:53:21 - Cassandra Output.0 - at java.net.SocketInputStream.read(SocketInputStream.java:122)
2015/01/09 12:53:21 - Cassandra Output.0 - at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
2015/01/09 12:53:21 - Cassandra Output.0 - at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
2015/01/09 12:53:21 - Cassandra Output.0 - at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
2015/01/09 12:53:21 - Cassandra Output.0 - ... 18 more
java.lang.NullPointerException
at org.pentaho.di.trans.steps.cassandraoutput.CassandraOutput.processRow(CassandraOutput.java:356)
at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
at java.lang.Thread.run(Thread.java:745)
2015/01/09 12:53:21 - Cassandra Output.0 - ERROR (version 5.1.0.0, build 1 from 2014-06-19_19-02-57 by buildguy) : Unexpected error
2015/01/09 12:53:21 - Cassandra Output.0 - ERROR (version 5.1.0.0, build 1 from 2014-06-19_19-02-57 by buildguy) : java.lang.NullPointerException
2015/01/09 12:53:21 - Cassandra Output.0 - at org.pentaho.di.trans.steps.cassandraoutput.CassandraOutput.processRow(CassandraOutput.java:356)
2015/01/09 12:53:21 - Cassandra Output.0 - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
2015/01/09 12:53:21 - Cassandra Output.0 - at java.lang.Thread.run(Thread.java:745)
child index = 4, logging object : org.pentaho.di.core.logging.LoggingObject@6a4ea42b parent=c8d41836-602c-4a17-8977-62d691c419c5
2015/01/09 12:53:21 - Cassandra Output.0 - Finished processing (I=0, O=0, R=1, W=0, U=0, E=1)
2015/01/09 12:53:21 - t_customer_cm - ERROR (version 5.1.0.0, build 1 from 2014-06-19_19-02-57 by buildguy) : Errors detected!
2015/01/09 12:53:21 - t_customer_cm - Transformation detected one or more steps with errors.
2015/01/09 12:53:21 - t_customer_cm - Transformation is killing the other steps!
2015/01/09 12:53:21 - Dummy (do nothing).0 - Finished processing (I=0, O=0, R=10002, W=10002, U=0, E=0)
2015/01/09 12:53:21 - Get System Info.0 - Finished processing (I=0, O=0, R=20003, W=20003, U=0, E=0)
2015/01/09 12:53:21 - t_customer_cm - ERROR (version 5.1.0.0, build 1 from 2014-06-19_19-02-57 by buildguy) : Errors detected!
On the Cassandra side I get this:
ERROR 12:53:11,807 Unexpected exception during request
org.jboss.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 00000028800100010000000f7365745f63716c5f76657273696f6e000000010b000100000005332e302e3100
at org.jboss.netty.handler.ssl.SslHandler.decode(SslHandler.java:871)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Any ideas?

Frame size (352518400) larger than max length (16384000)!
It looks as if encryption adds something BIG to my frame, but I don't know what or how. Does somebody know how to fix this?
With regard to the frame size, it does not really matter what the data is, only how big it is. As Bryan wrote in THRIFT-1324, the only difference in size between framed and "unframed" is a constant 4 bytes (an int32) preceding the data block and holding its size.
The theoretical solution seems very obvious to me: find out how big your biggest frame is and configure the settings accordingly. Each request and each response must fit within that limit.
Theoretical, because it seems strange that enabling encryption would make your data 21x larger (you said it works without encryption, so the payload must be <= 16384000 bytes). That sounds very unusual.
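A hedged observation on where that factor of ~21 might come from: the bogus frame length decodes exactly to the first four bytes of a TLS record (content type 0x15 = Alert, version 0x0301 = TLS 1.0), which is what a plaintext Thrift client would read back from a server that answers the handshake in SSL:
# 352518400 is 0x15030100 -- byte-for-byte the start of a TLS alert record
# (0x15 = Alert, 0x03 0x01 = TLS 1.0), not a real Thrift frame size.
printf '%08X\n' 352518400    # prints 15030100
If that reading is right, the client is interpreting the server's TLS alert as a Thrift frame header, and no value of thrift_framed_transport_size_in_mb will ever be big enough.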

Check whether the unsecured connection is using the CQL binary protocol (port 9042). The stack trace indicates that Thrift is being used, but CQL3 uses a completely different interface. Is it possible that with SSL it is using the Thrift protocol (port 9160), while unsecured it is using the binary protocol (port 9042)?
EDIT: This answer may not be a good one. Your mention of using CQL3 got me thinking that Thrift shouldn't even be involved; still, it might be worth checking whether the unsecured version is using the native binary protocol or Thrift.
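Supporting that suspicion: the hex dump in the NotSslRecordException above is a plain, unencrypted framed Thrift call, meaning the step keeps speaking cleartext Thrift even on port 9042. A quick decode sketch (requires xxd; the hex substrings are copied from the dump):
# 0x00000028 = frame length 40, 0x80010001 = Thrift binary-protocol CALL header,
# then a 15-byte method name and one string argument:
echo '7365745f63716c5f76657273696f6e' | xxd -r -p; echo    # set_cql_version
echo '332e302e31' | xxd -r -p; echo                        # 3.0.1
So the server's netty/SSL layer is rejecting a cleartext Thrift set_cql_version("3.0.1") call, which fits the protocol/encryption mismatch theory.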

Related

WebLogic Oracle JDBC SQLException: The database session time zone is not set

After years of running our app without issues, this week we started getting java.sql.SQLException: The database session time zone is not set each time the app reads a row from the database that has a column of type TIMESTAMP WITH LOCAL TIME ZONE.
Server: WebLogic 10.3.3
JRE: jrockit-jdk1.6.0_26-R28.1.4-4.0.1
Oracle database: 11.2.0.4.0
Stack trace:
Caused by: java.sql.SQLException: The database session time zone is not set
at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70) ~[ojdbc6.jar:Oracle JDBC Driver version - "11.1.0.7.0-Production"]
at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133) ~[ojdbc6.jar:Oracle JDBC Driver version - "11.1.0.7.0-Production"]
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:199) ~[ojdbc6.jar:Oracle JDBC Driver version - "11.1.0.7.0-Production"]
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:263) ~[ojdbc6.jar:Oracle JDBC Driver version - "11.1.0.7.0-Production"]
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:271) ~[ojdbc6.jar:Oracle JDBC Driver version - "11.1.0.7.0-Production"]
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:445) ~[ojdbc6.jar:Oracle JDBC Driver version - "11.1.0.7.0-Production"]
at oracle.jdbc.driver.TimestampltzAccessor.getTimestamp(TimestampltzAccessor.java:298) ~[ojdbc6.jar:Oracle JDBC Driver version - "11.1.0.7.0-Production"]
at oracle.jdbc.driver.OracleResultSetImpl.getTimestamp(OracleResultSetImpl.java:1060) ~[ojdbc6.jar:Oracle JDBC Driver version - "11.1.0.7.0-Production"]
at weblogic.jdbc.wrapper.ResultSet_oracle_jdbc_driver_OracleResultSetImpl.getTimestamp(Unknown Source) ~[com.bea.core.utils.wrapper_1.4.0.0.jar:1.8.0.0]
at org.springframework.jdbc.support.JdbcUtils.getResultSetValue(JdbcUtils.java:183) ~[spring-jdbc-3.0.5.RELEASE.jar:3.0.5.RELEASE]
at org.springframework.jdbc.core.BeanPropertyRowMapper.getColumnValue(BeanPropertyRowMapper.java:308) ~[spring-jdbc-3.0.5.RELEASE.jar:3.0.5.RELEASE]
at org.springframework.jdbc.core.BeanPropertyRowMapper.mapRow(BeanPropertyRowMapper.java:246) ~[spring-jdbc-3.0.5.RELEASE.jar:3.0.5.RELEASE]
at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:92) ~[spring-jdbc-3.0.5.RELEASE.jar:3.0.5.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.processResultSet(JdbcTemplate.java:1144) ~[spring-jdbc-3.0.5.RELEASE.jar:3.0.5.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.extractOutputParameters(JdbcTemplate.java:1104) ~[spring-jdbc-3.0.5.RELEASE.jar:3.0.5.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate$5.doInCallableStatement(JdbcTemplate.java:1015) ~[spring-jdbc-3.0.5.RELEASE.jar:3.0.5.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate$5.doInCallableStatement(JdbcTemplate.java:1) ~[spring-jdbc-3.0.5.RELEASE.jar:3.0.5.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:953) ~[spring-jdbc-3.0.5.RELEASE.jar:3.0.5.RELEASE]
... 88 common frames omitted
I tried adding SQL ALTER SESSION SET TIME_ZONE='+01:00' as Init SQL in the WebLogic datasource definition, but it does not seem to help.
Help!
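One hedged avenue: the Oracle driver derives the session time zone from the JVM's default time zone at connect time, so pinning an explicit region-style default on the server JVM sometimes resolves this. A sketch; the file and variable are the usual WebLogic conventions, and the zone ID is a placeholder:
# In $DOMAIN_HOME/bin/setDomainEnv.sh (picked up at server start);
# Europe/Paris is a placeholder, use your actual region:
JAVA_OPTIONS="${JAVA_OPTIONS} -Duser.timezone=Europe/Paris"
export JAVA_OPTIONS
It may also be worth noting that the stack trace shows an ojdbc6 11.1.0.7.0 driver talking to an 11.2.0.4.0 database; aligning the driver with the database version is another hedged thing to try.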

Failed upgrade from 3.41.0 to 3.41.1 (3.42.0)

Nexus runs in Kubernetes; before this it was always updated without problems. At the moment I get the following error, no matter which version I upgrade to (3.41.1 or 3.42.0).
Log output from version 3.41.1:
-------------------------------------------------
Started Sonatype Nexus OSS 3.41.1-01
-------------------------------------------------
2022-10-19 11:34:30,682+0000 INFO [jetty-main-1] *SYSTEM org.eclipse.jetty.server.AbstractConnector - Started ServerConnector@31451f5e{HTTP/1.1, (http/1.1)}{0.0.0.0:8086}
2022-10-19 11:34:30,686+0000 INFO [jetty-main-1] *SYSTEM org.eclipse.jetty.server.AbstractConnector - Started ServerConnector@2105be54{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
2022-10-19 11:34:30,689+0000 INFO [jetty-main-1] *SYSTEM org.eclipse.jetty.server.AbstractConnector - Started ServerConnector@5f41ab0d{HTTP/1.1, (http/1.1)}{0.0.0.0:8085}
2022-10-19 11:34:31,834+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Could not lock User prefs. Unix error code 2.
2022-10-19 11:34:31,835+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
2022-10-19 11:34:48,228+0000 INFO [qtp491323871-560] *UNKNOWN org.apache.shiro.session.mgt.AbstractValidatingSessionManager - Enabling session validation scheduler...
2022-10-19 11:34:48,238+0000 INFO [qtp491323871-556] *UNKNOWN org.sonatype.nexus.internal.security.anonymous.AnonymousManagerImpl - Loaded configuration: OrientAnonymousConfiguration{enabled=true, userId='anonymous', realmName='NexusAuthorizingRealm'}
2022-10-19 11:35:01,488+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Could not lock User prefs. Unix error code 2.
2022-10-19 11:35:01,489+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
2022-10-19 11:35:31,489+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Could not lock User prefs. Unix error code 2.
2022-10-19 11:35:31,489+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
2022-10-19 11:36:01,489+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Could not lock User prefs. Unix error code 2.
2022-10-19 11:36:01,490+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
2022-10-19 11:36:18,170+0000 WARN [SIGTERM handler] *SYSTEM com.orientechnologies.orient.core.OSignalHandler - Received signal: SIGTERM
I get a similar error when updating to 3.42.0.
What could be the problem?
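Two hedged things to check, going by the log. First, the repeating "Could not lock User prefs" warnings usually mean the Java preferences root isn't writable inside the container. Second, a SIGTERM arriving roughly 90 seconds after the connectors start is consistent with Kubernetes (e.g. a liveness or startup probe) killing a pod that is still busy upgrading. A sketch for both, assuming the official sonatype/nexus3 image (which reads INSTALL4J_ADD_VM_PARAMS); the deployment name and label are placeholders:
# Give java.util.prefs a writable root inside the data volume:
kubectl set env deployment/nexus INSTALL4J_ADD_VM_PARAMS="-Djava.util.prefs.userRoot=/nexus-data/javaprefs"
# Check why the pod was terminated (probe failures show up in the events):
kubectl describe pod -l app=nexus | grep -i -E -A 5 'liveness|events'
If it is the probe, raising its timeout/failure thresholds for the duration of the upgrade should let the migration finish.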

Unable to init microstack (snap version of OpenStack) due to nginx complaining. Any suggestions?

Hi community out there.
I'm trying to install OpenStack on an Ubuntu 20.04 server, but the init fails with nginx complaining:
user01@metropolis:/var/opt$ sudo microstack.init --auto
[sudo] password for user01:
2020-08-27 12:07:41,204 - microstack_init - INFO - Configuring networking ...
2020-08-27 12:07:53,190 - microstack_init - INFO - Opening horizon dashboard up to *
2020-08-27 12:07:56,342 - microstack_init - INFO - Waiting for RabbitMQ to start ...
Waiting for 10.20.20.1:5672
2020-08-27 12:08:46,544 - microstack_init - INFO - RabbitMQ started!
2020-08-27 12:08:46,544 - microstack_init - INFO - Configuring RabbitMQ ...
2020-08-27 12:08:50,572 - microstack_init - INFO - RabbitMQ Configured!
2020-08-27 12:08:50,629 - microstack_init - INFO - Waiting for MySQL server to start ...
Waiting for 10.20.20.1:3306
2020-08-27 12:08:50,643 - microstack_init - INFO - Mysql server started! Creating databases ...
/snap/microstack/206/lib/python3.6/site-packages/pymysql/cursors.py:170: Warning: (1007, "Can't create database 'neutron'; database exists")
result = self._query(query)
/snap/microstack/206/lib/python3.6/site-packages/pymysql/cursors.py:170: Warning: (1287, "Using GRANT statement to modify existing user's properties other than privileges is deprecated and will be removed in future release. Use ALTER USER statement for this operation.")
result = self._query(query)
/snap/microstack/206/lib/python3.6/site-packages/pymysql/cursors.py:170: Warning: (1007, "Can't create database 'nova'; database exists")
result = self._query(query)
/snap/microstack/206/lib/python3.6/site-packages/pymysql/cursors.py:170: Warning: (1007, "Can't create database 'nova_api'; database exists")
result = self._query(query)
/snap/microstack/206/lib/python3.6/site-packages/pymysql/cursors.py:170: Warning: (1007, "Can't create database 'nova_cell0'; database exists")
result = self._query(query)
/snap/microstack/206/lib/python3.6/site-packages/pymysql/cursors.py:170: Warning: (1007, "Can't create database 'cinder'; database exists")
result = self._query(query)
/snap/microstack/206/lib/python3.6/site-packages/pymysql/cursors.py:170: Warning: (1007, "Can't create database 'glance'; database exists")
result = self._query(query)
/snap/microstack/206/lib/python3.6/site-packages/pymysql/cursors.py:170: Warning: (1007, "Can't create database 'keystone'; database exists")
result = self._query(query)
Traceback (most recent call last):
File "/snap/microstack/206/bin/microstack_init", line 33, in <module>
sys.exit(load_entry_point('microstack-init==0.0.1', 'console_scripts', 'microstack_init')())
File "/snap/microstack/206/lib/python3.6/site-packages/init/main.py", line 54, in wrapper
return func(*args, **kwargs)
File "/snap/microstack/206/lib/python3.6/site-packages/init/main.py", line 138, in init
question.ask()
File "/snap/microstack/206/lib/python3.6/site-packages/init/questions/question.py", line 210, in ask
self.yes(awr)
File "/snap/microstack/206/lib/python3.6/site-packages/init/questions/__init__.py", line 358, in yes
check('snapctl', 'start', 'microstack.nginx')
File "/snap/microstack/206/lib/python3.6/site-packages/init/shell.py", line 68, in check
raise subprocess.CalledProcessError(proc.returncode, " ".join(args))
subprocess.CalledProcessError: Command 'snapctl start microstack.nginx' returned non-zero exit status 1.
user01@metropolis:/var/opt$ netstat | grep :80
tcp 0 0 metropolis:55384 192.168.178.75:8009 ESTABLISHED
tcp 0 0 metropolis:48726 192.168.178.129:8009 ESTABLISHED
tcp 0 0 metropolis:59164 192.168.178.101:8009 ESTABLISHED
tcp 0 0 metropolis:50820 172.30.33.5:8086 ESTABLISHED
user01@metropolis:/var/opt$ netstat | grep :443
tcp 0 0 metropolis:44324 192.168.178.12:http TIME_WAIT
tcp 0 0 metropolis:44376 192.168.178.12:http TIME_WAIT
user01@metropolis:/var/opt$
Can someone please explain why nginx complains, or where to review the logs? I'm new to snap.
Any other suggestions are also welcome; there is no Apache or nginx already running.
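For the log question specifically, snap service logs can usually be inspected like this (a sketch; the service name is taken from the error above):
# Last lines from the failing snap service:
sudo snap logs microstack.nginx -n 100
# The same unit is also visible to systemd:
sudo journalctl -u snap.microstack.nginx.service --no-pager | tail -n 100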
I solved the same problem, but with microstack.ovs.vswitchd: I had to install openvswitch-switch-dpdk.
Your solution may be to install nginx, or to reinstall it.
I encountered the same problem. I checked whether apache2 was running on the same port, but I hadn't installed apache2 on my Ubuntu EC2 instance, so technically I shouldn't have had this error.
So I looked for any other service listening there using the network tools.
I found that the datadog-agent service was using the same port. I removed that service, initialized microstack with microstack init --auto --control, and it worked. Once microstack was up and running, I installed the datadog-agent again to monitor my EC2 instance.
If you still have this error, look for the occupied ports with the network tools (see the sketch below), remove those services, and reinstall them after you initialize microstack.
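For example, to see which process is already bound to the ports nginx wants (a sketch; 80 and 443 are assumptions, check microstack's nginx config for the actual ports):
# -t TCP, -l listening, -n numeric, -p owning process (needs root):
sudo ss -tlnp | grep -E ':(80|443)\s'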
My EC2 details:
Ubuntu Server 20.04 LTS
RAM: 8 GB
Storage: 20 GB
vCPU: 2
t2.large

Connect sparklyr 0.8.4 to a remote Spark 2.2.1 cluster

I'm trying to connect from R to a remote Spark cluster.
The Spark cluster is built on Debian Jessie; the highest R version I can install on it is 3.3, but I need 3.4 to be able to run FactoMineR. So I installed R on another machine and tried to connect to the cluster using sparklyr 0.8.4:
> sc <- spark_connect(master = "spark://spark-cluster-m:7077", spark_home="/usr/lib/spark/", version="2.2.1")
Error in start_shell(master = master, spark_home = spark_home, spark_version = version, :
SPARK_HOME directory '/usr/lib/spark/' not found
Spark isn't installed on the local machine, but it is on spark-cluster-m:
jc@spark-cluster-m:/usr/lib/spark$ ls
bin conf data examples external jars LICENSE licenses NOTICE python R README.md RELEASE sbin work yarn
Have I missed something?
The Spark cluster is on Google Cloud (test account) and so is the VM with R. How do I verify which port Spark accepts connections on?
Thanks for your clues.
@user16... You're right, that particular problem seems to be solved, but I'm not there yet.
I installed the same Spark version locally (2.2.1 with Hadoop > 2.7).
Here is my new error message:
Error in force(code) :
Failed during initialize_connection: java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.metrics.MetricsSystem.getServletHandlers(MetricsSystem.scala:91)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:524)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2516)
at org.apache.spark.SparkContext.getOrCreate(SparkContext.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sparklyr.Invoke.invoke(invoke.scala:137)
at sparklyr.StreamHandler.handleMethodCall(stream.scala:123)
at sparklyr.StreamHandler.read(stream.scala:66)
at sparklyr.BackendHandler.channelRead0(handler.scala:51)
at sparklyr.BackendHandler.channelRead0(handler.scala:4)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
Log: /tmp/RtmpTUh0z6/file5d231368db0_spark.log
---- Output Log ----
at io.netty.channel.nio.NioEventLoop.processS
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
... 1 more
18/07/21 18:24:59 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-cluster-m:7077...
18/07/21 18:24:59 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master spark-cluster-m:7077
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:100)
at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:108)
at org.apache.spark.deploy.client.StandaloneAppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1$$anon$1.run(StandaloneAppClient.scala:106)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Failed to connect to spark-cluster-m/10.142.0.3:7077
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:232)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:182)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:197)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:194)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:190)
... 4 more
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: spark-cluster-m/10.142.0.3:7077
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:257)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:291)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:631)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
... 1 more
18/07/21 18:25:19 ERROR StandaloneSchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
18/07/21 18:25:19 WARN StandaloneSchedulerBackend: Application ID is not initialized yet.
18/07/21 18:25:19 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46811.
18/07/21 18:25:19 INFO NettyBlockTransferService: Server created on 10.142.0.5:46811
18/07/21 18:25:19 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
18/07/21 18:25:19 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.142.0.5, 46811, None)
18/07/21 18:25:19 INFO BlockManagerMasterEndpoint: Registering block manager 10.142.0.5:46811 with 366.3 MB RAM, BlockManagerId(driver, 10.142.0.5, 46811, None)
18/07/21 18:25:19 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.142.0.5, 46811, None)
18/07/21 18:25:19 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.142.0.5, 46811, None)
18/07/21 18:25:19 INFO SparkUI: Stopped Spark web UI at http://10.142.0.5:4040
18/07/21 18:25:19 INFO StandaloneSchedulerBackend: Shutting down all executors
18/07/21 18:25:19 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
18/07/21 18:25:19 WARN StandaloneAppClient$ClientEndpoint: Drop Unregist
I can see that it resolves the name (=> 10.142.0.3).
It also seems to be the right port: if I use port 7000 instead, I get this error:
18/07/21 18:32:54 ERROR TransportResponseHandler: Still have 1 requests outstanding when connection from spark-cluster-m/10.142.0.3:7000 is closed
18/07/21 18:32:54 WARN StandaloneAppClient$ClientEndpoint: Could not connect to spark-cluster-m:7000: java.io.IOException: Connection reset by peer
18/07/21 18:32:54 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master spark-cluster-m:7000
But I can't figure out what this means.
You say my configuration is "particular". If there is a better (and simpler) approach, I would be glad to use it.
Here is how I proceeded in my tests:
I created a Google Dataproc cluster with Spark (2.2.1)
I added Cassandra on each node
At this stage, everything works fine.
Then I needed to install FactoMineR, as I'd like to try HMFA. It is said to run with R > 3.0.0, so that seems fine, but it depends on nlme, which can't be installed on R < 3.4.0 (and the one in the Debian Jessie backports is 3.3).
So, what can I do?
I must admit I'm not very enthusiastic about restarting a full Spark / Cassandra cluster install from scratch...
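As a first diagnostic for the "Connection refused" above, it may be worth checking plain TCP reachability from the R VM and the GCP firewall (a sketch; the host name is taken from the logs). Note also that Dataproc normally runs Spark on YARN, so a standalone master listening on 7077 may simply not exist unless one was started explicitly:
# Does anything answer on the standalone master port?
nc -zv spark-cluster-m 7077
# Are the two VMs allowed to talk on that port?
gcloud compute firewall-rules list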

Grunt task can't locate phantomjs driver

I am using a Grunt task that calls the 'webdriver-manager' node module to start my Selenium WebDriver. Right now it is configured to launch chromedriver, and it prints this line whenever I launch the 'start_webdriver' Grunt task:
[10:49:13] I/start - java -Dwebdriver.chrome.driver=/Users/talain/development/gitClone/enterprise/Source/clients-root/clients-webui-root/clients-webui-interface/node_modules/webdriver-manager/selenium/chromedriver_2.29
The phantomjs driver is located in the same directory as the chromedriver, but I don't know where the configuration is that would let me switch to it. Here is the full output of launching the 'start_webdriver' Grunt task:
/usr/local/bin/node /Users/talain/development/gitClone/enterprise/Source/clients-root/clients-webui-root/clients-webui-interface/node_modules/grunt-cli/bin/grunt --gruntfile /Users/talain/development/gitClone/enterprise/Source/clients-root/clients-webui-root/clients-webui-interface/Gruntfile.js "testing:start webdriver"
Running "execute:start_webdriver" (execute) task
-> executing /Users/talain/development/gitClone/enterprise/Source/clients-root/clients-webui-root/clients-webui-interface/node_modules/webdriver-manager
[10:49:13] I/start - java -Dwebdriver.chrome.driver=/Users/talain/development/gitClone/enterprise/Source/clients-root/clients-webui-root/clients-webui-interface/node_modules/webdriver-manager/selenium/chromedriver_2.29 -Dwebdriver.gecko.driver=/Users/talain/development/gitClone/enterprise/Source/clients-root/clients-webui-root/clients-webui-interface/node_modules/webdriver-manager/selenium/geckodriver-v0.15.0 -jar /Users/talain/development/gitClone/enterprise/Source/clients-root/clients-webui-root/clients-webui-interface/node_modules/webdriver-manager/selenium/selenium-server-standalone-3.3.1.jar -port 4444
[10:49:13] I/start - seleniumProcess.pid: 8081
10:49:13.978 INFO - Selenium build info: version: '3.3.1', revision: '5234b32'
10:49:13.979 INFO - Launching a standalone Selenium Server
2017-04-13 10:49:14.002:INFO::main: Logging initialized @277ms to org.seleniumhq.jetty9.util.log.StdErrLog
10:49:14.062 INFO - Driver provider org.openqa.selenium.ie.InternetExplorerDriver registration is skipped:
registration capabilities Capabilities [{ensureCleanSession=true, browserName=internet explorer, version=, platform=WINDOWS}] does not match the current platform MAC
10:49:14.062 INFO - Driver provider org.openqa.selenium.edge.EdgeDriver registration is skipped:
registration capabilities Capabilities [{browserName=MicrosoftEdge, version=, platform=WINDOWS}] does not match the current platform MAC
10:49:14.062 INFO - Driver class not found: com.opera.core.systems.OperaDriver
10:49:14.062 INFO - Driver provider com.opera.core.systems.OperaDriver registration is skipped:
Unable to create new instances on this machine.
10:49:14.063 INFO - Driver class not found: com.opera.core.systems.OperaDriver
10:49:14.063 INFO - Driver provider com.opera.core.systems.OperaDriver is not registered
2017-04-13 10:49:14.106:INFO:osjs.Server:main: jetty-9.2.20.v20161216
2017-04-13 10:49:14.141:INFO:osjsh.ContextHandler:main: Started o.s.j.s.ServletContextHandler@685cb137{/,null,AVAILABLE}
2017-04-13 10:49:14.174:INFO:osjs.AbstractConnector:main: Started ServerConnector@5bcab519{HTTP/1.1,[http/1.1]}{0.0.0.0:4444}
2017-04-13 10:49:14.175:INFO:osjs.Server:main: Started @450ms
10:49:14.175 INFO - Selenium Server is up and running
webdriver-manager start does not have a way to launch with phantomjs. I would suggest launching it manually; you could use:
java -Dphantomjs.binary.path=/path/to/phantomjs -jar /path/to/selenium-server-standalone.jar -port 4444
Why is there no phantomjs support in webdriver-manager? Protractor does not recommend or support phantomjs; see its browser support documentation.
