Invoke Error updating Business Central to v20 - dynamics-business-central

I'm trying to update to BC20 and I'm getting an error at the Invoke-NAVApplicationDatabaseConversion command saying:
Invoke-NAVApplicationDatabaseConversion : A technical upgrade of database 'Demo Database BC (19-0)' on server
'LOCALHOST\BCDEMO' cannot be run, because the database's application version '131112' is greater than or equal to the
platform version '131112'.
Does anyone know a fix for this or what I need to do?
Here is my whole script:
$OldBcServerInstance = "BC190"
Import-Module "C:\Program Files\Microsoft Dynamics 365 Business Central\190\Service\NavAdminTool.ps1"
Restart-NAVServerInstance -ServerInstance $OldBcServerInstance
$OldBcServerInstance = "BC190"
$NewBcServerInstance = "BC200"
$ApplicationDatabase = "Demo Database BC (19-0)"
$DatabaseServer = "localhost\BCDEMO"
$SystemAppPath = "C:\BBS\cu2\Dynamics.365.BC.38230.US.DVD (1)\Applications\system application\source\Microsoft_System Application.app"
$BaseAppPath = "C:\BBS\cu2\Dynamics.365.BC.38230.US.DVD (1)\Applications\BaseApp\Source\Microsoft_Base Application.app"
$ApplicationAppPath = "C:\BBS\cu2\Dynamics.365.BC.38230.US.DVD (1)\Applications\Application\Source\Microsoft_Application.app"
$NewBCVersion = "20.0.37253.38230"
$PartnerLicense = "C:\BBS\5317559_BC20\5317559.flf"
$CustomerLicense = "The file path and name of the customer license"
$AddinsFolder = "C:\Program Files\Microsoft Dynamics 365 Business Central\190\Service\Add-ins\"
Invoke-NAVApplicationDatabaseConversion -DatabaseServer $DatabaseServer -DatabaseName $ApplicationDatabase
Set-NAVServerConfiguration -ServerInstance $NewBcServerInstance -KeyName DatabaseName -KeyValue $ApplicationDatabase
Set-NavServerConfiguration -ServerInstance $NewBcServerInstance -KeyName "EnableTaskScheduler" -KeyValue false
Restart-NAVServerInstance -ServerInstance $NewBcServerInstance
Import-NAVServerLicense -ServerInstance $NewBcServerInstance -LicenseFile $PartnerLicense
Restart-NAVServerInstance -ServerInstance $NewBcServerInstance
Publish-NAVApp -ServerInstance $NewBcServerInstance -Path $SystemAppPath
Publish-NAVApp -ServerInstance $NewBcServerInstance -Path $BaseAppPath
Publish-NAVApp -ServerInstance $NewBcServerInstance -Path $ApplicationAppPath
Publish-NAVApp -ServerInstance $NewBcServerInstance -Path "C:\projects\LoanVisionLite\Bestborn Business Solutions, LLC_LoanVisionLite_1.0.0.0.app" -SkipVerification
Get-NAVAppInfo -ServerInstance $NewBcServerInstance | Where-Object {$_.Publisher -notlike 'Microsoft'} | Repair-NAVApp
Restart-NAVServerInstance -ServerInstance $NewBcServerInstance
Start-NAVAppDataUpgrade -ServerInstance $NewBcServerInstance -Name "System Application" -Version $NewBCVersion
Start-NAVAppDataUpgrade -ServerInstance $NewBcServerInstance -Name "Base Application" -Version $NewBCVersion
Start-NAVAppDataUpgrade -ServerInstance $NewBcServerInstance -Name "Application" -Version $NewBCVersion
Set-NAVAddIn -ServerInstance $NewBcServerInstance -AddinName 'Microsoft.Dynamics.Nav.Client.BusinessChart' -PublicKeyToken 31bf3856ad364e35 -ResourceFile ($AppName = Join-Path $AddinsFolder 'BusinessChart\Microsoft.Dynamics.Nav.Client.BusinessChart.zip')
Set-NAVAddIn -ServerInstance $NewBcServerInstance -AddinName 'Microsoft.Dynamics.Nav.Client.FlowIntegration' -PublicKeyToken 31bf3856ad364e35 -ResourceFile ($AppName = Join-Path $AddinsFolder 'FlowIntegration\Microsoft.Dynamics.Nav.Client.FlowIntegration.zip')
Set-NAVAddIn -ServerInstance $NewBcServerInstance -AddinName 'Microsoft.Dynamics.Nav.Client.OAuthIntegration' -PublicKeyToken 31bf3856ad364e35 -ResourceFile ($AppName = Join-Path $AddinsFolder 'OAuthIntegration\Microsoft.Dynamics.Nav.Client.OAuthIntegration.zip')
Set-NAVAddIn -ServerInstance $NewBcServerInstance -AddinName 'Microsoft.Dynamics.Nav.Client.PageReady' -PublicKeyToken 31bf3856ad364e35 -ResourceFile ($AppName = Join-Path $AddinsFolder 'PageReady\Microsoft.Dynamics.Nav.Client.PageReady.zip')
Set-NAVAddIn -ServerInstance $NewBcServerInstance -AddinName 'Microsoft.Dynamics.Nav.Client.PowerBIManagement' -PublicKeyToken 31bf3856ad364e35 -ResourceFile ($AppName = Join-Path $AddinsFolder 'PowerBIManagement\Microsoft.Dynamics.Nav.Client.PowerBIManagement.zip')
Set-NAVAddIn -ServerInstance $NewBcServerInstance -AddinName 'Microsoft.Dynamics.Nav.Client.RoleCenterSelector' -PublicKeyToken 31bf3856ad364e35 -ResourceFile ($AppName = Join-Path $AddinsFolder 'RoleCenterSelector\Microsoft.Dynamics.Nav.Client.RoleCenterSelector.zip')
Set-NAVAddIn -ServerInstance $NewBcServerInstance -AddinName 'Microsoft.Dynamics.Nav.Client.SatisfactionSurvey' -PublicKeyToken 31bf3856ad364e35 -ResourceFile ($AppName = Join-Path $AddinsFolder 'SatisfactionSurvey\Microsoft.Dynamics.Nav.Client.SatisfactionSurvey.zip')
Set-NAVAddIn -ServerInstance $NewBcServerInstance -AddinName 'Microsoft.Dynamics.Nav.Client.SocialListening' -PublicKeyToken 31bf3856ad364e35 -ResourceFile ($AppName = Join-Path $AddinsFolder 'SocialListening\Microsoft.Dynamics.Nav.Client.SocialListening.zip')
Set-NAVAddIn -ServerInstance $NewBcServerInstance -AddinName 'Microsoft.Dynamics.Nav.Client.VideoPlayer' -PublicKeyToken 31bf3856ad364e35 -ResourceFile ($AppName = Join-Path $AddinsFolder 'VideoPlayer\Microsoft.Dynamics.Nav.Client.VideoPlayer.zip')
Set-NAVAddIn -ServerInstance $NewBcServerInstance -AddinName 'Microsoft.Dynamics.Nav.Client.WebPageViewer' -PublicKeyToken 31bf3856ad364e35 -ResourceFile ($AppName = Join-Path $AddinsFolder 'WebPageViewer\Microsoft.Dynamics.Nav.Client.WebPageViewer.zip')
Set-NAVAddIn -ServerInstance $NewBcServerInstance -AddinName 'Microsoft.Dynamics.Nav.Client.WelcomeWizard' -PublicKeyToken 31bf3856ad364e35 -ResourceFile ($AppName = Join-Path $AddinsFolder 'WelcomeWizard\Microsoft.Dynamics.Nav.Client.WelcomeWizard.zip')
Set-NAVApplication -ServerInstance $NewBcServerInstance -ApplicationVersion $NewBCVersion -Force
Start-NAVDataUpgrade -ServerInstance $NewBcServerInstance -FunctionExecutionMode Serial
Set-NAVServerConfiguration -ServerInstance $NewBcServerInstance -KeyName SolutionVersionExtension -KeyValue "437dbf0e-84ff-417a-965d-ed2bb9650972" -ApplyTo All
Restart-NAVServerInstance -ServerInstance $NewBcServerInstance

I believe you are getting the error because you are using the version 19 PowerShell Modules:
Import-Module "C:\Program Files\Microsoft Dynamics 365 Business Central\190\Service\NavAdminTool.ps1"
In order to do the technical upgrade from one version to another, you must use the PowerShell modules of the new version. Depending on where you installed the new version, it could be, e.g.:
Import-Module "C:\Program Files\Microsoft Dynamics 365 Business Central\200\Service\NavAdminTool.ps1"
Note that the folder 190 has been changed to 200.
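For illustration, here is a minimal sketch of the corrected start of the script, assuming the default v20 installation path shown above (adjust the path to your environment); it only reuses the cmdlets and values from the question:
# Load the BC20 administration module instead of the BC19 one
Import-Module "C:\Program Files\Microsoft Dynamics 365 Business Central\200\Service\NavAdminTool.ps1"
$ApplicationDatabase = "Demo Database BC (19-0)"
$DatabaseServer = "localhost\BCDEMO"
# Run the technical upgrade using the v20 cmdlets loaded above
Invoke-NAVApplicationDatabaseConversion -DatabaseServer $DatabaseServer -DatabaseName $ApplicationDatabase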

Related

OpenLDAP Invalid credentials for readonly user

I'm trying to follow this guide https://www.talkingquickly.co.uk/gitea-sso-with-keycloak-openldap-openid-connect to create an SSO solution with OpenLDAP and Keycloak. I'm trying to add the readonly user. It should be the same LDIFs as here https://github.com/osixia/docker-openldap/tree/master/image/service/slapd/assets/config/bootstrap/ldif/readonly-user
I applied those LDIFs for the readonly user, but I get:
$ ldapsearch -x -H ldap://localhost:1389 -b "dc=muellerpublic,dc=de" -D "cn=readonly,dc=muellerpublic,dc=de" "+" -w xxx
Handling connection for 1389
ldap_bind: Invalid credentials (49)
Here are the users/groups:
$ ldapsearch -x -H ldap://localhost:1389 -b "dc=muellerpublic,dc=de" -D "cn=admin,dc=muellerpublic,dc=de" "+" -w xxx
Handling connection for 1389
# extended LDIF
#
# LDAPv3
# base <dc=muellerpublic,dc=de> with scope subtree
# filter: (objectclass=*)
# requesting: +
#
# muellerpublic.de
dn: dc=muellerpublic,dc=de
structuralObjectClass: organization
entryUUID: ce600638-0d8f-103c-8fb1-1558d46de393
creatorsName: cn=admin,dc=muellerpublic,dc=de
createTimestamp: 20220119162257Z
entryCSN: 20220119162257.152328Z#000000#000#000000
modifiersName: cn=admin,dc=muellerpublic,dc=de
modifyTimestamp: 20220119162257Z
entryDN: dc=muellerpublic,dc=de
subschemaSubentry: cn=Subschema
hasSubordinates: TRUE
# users, muellerpublic.de
dn: ou=users,dc=muellerpublic,dc=de
structuralObjectClass: organizationalUnit
entryUUID: ce601dc6-0d8f-103c-8fb2-1558d46de393
creatorsName: cn=admin,dc=muellerpublic,dc=de
createTimestamp: 20220119162257Z
entryCSN: 20220119162257.152933Z#000000#000#000000
modifiersName: cn=admin,dc=muellerpublic,dc=de
modifyTimestamp: 20220119162257Z
entryDN: ou=users,dc=muellerpublic,dc=de
subschemaSubentry: cn=Subschema
hasSubordinates: FALSE
# readonly, muellerpublic.de
dn: cn=readonly,dc=muellerpublic,dc=de
structuralObjectClass: organizationalRole
entryUUID: ce60b6a0-0d8f-103c-8fb3-1558d46de393
creatorsName: cn=admin,dc=muellerpublic,dc=de
createTimestamp: 20220119162257Z
entryCSN: 20220119162257.156845Z#000000#000#000000
modifiersName: cn=admin,dc=muellerpublic,dc=de
modifyTimestamp: 20220119162257Z
entryDN: cn=readonly,dc=muellerpublic,dc=de
subschemaSubentry: cn=Subschema
hasSubordinates: FALSE
Here are the LDIFs created:
20-readonly-user.ldif: |
  # Paths
  dn: cn=readonly,dc=muellerpublic,dc=de
  changetype: add
  cn: readonly
  objectClass: simpleSecurityObject
  objectClass: organizationalRole
  userPassword: {SSHA}5Y0mPhzRCYDBRltdvF6hp+m0DWgPTdjD
  description: LDAP read only user
21-readonly-user-acl.config.ldif: |
  dn: olcDatabase={2}mdb,cn=config
  changetype: modify
  replace: olcAccess
  olcAccess: to * by dn.exact=gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth manage by * break
  olcAccess: to attrs=userPassword,shadowLastChange by self write by dn="cn=admin,dc=muellerpublic,dc=de" write by anonymous auth by * none
  olcAccess: to * by self read by dn="cn=admin,dc=muellerpublic,dc=de" write by dn="cn=readonly,dc=muellerpublic,dc=de" read by * none
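As a side note, two standard OpenLDAP client checks can help narrow down where the bind goes wrong (using the placeholder password xxx from the commands above; slappasswd and ldapwhoami ship with the OpenLDAP client tools):
# Test the bind directly; on success this prints the bound DN,
# on failure it returns the same "Invalid credentials (49)"
ldapwhoami -x -H ldap://localhost:1389 -D "cn=readonly,dc=muellerpublic,dc=de" -w xxx
# If the bind fails, regenerate the hash and reload it via ldapmodify;
# SSHA hashes are salted, so each run of slappasswd yields a different value
slappasswd -h '{SSHA}' -s xxx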

Gremlin : AbstractGryoMessageSerializerV3d0. org.apache.tinkerpop.shaded.kryo.KryoException: Encountered unregistered class ID: 65536

I am using Gremlin to fire queries on JanusGraph from Java. My Java code fires a g.addV(...) query using the Cluster's client.submit(query).
cluster = Cluster.build()
        .addContactPoint("localhost")
        .port(8182)
        .serializer(Serializers.GRAPHSON_V3D0)
        .maxInProcessPerConnection(32)
        .maxSimultaneousUsagePerConnection(32)
        .maxContentLength(1000000)
        .maxWaitForConnection(10)
        .minConnectionPoolSize(5)
        .maxConnectionPoolSize(20)
        .create();
I'm getting the following error:
21:23:05.890 WARN - Response [PooledUnsafeDirectByteBuf(ridx: 393, widx: 393, cap: 393)] could not be deserialized by org.apache.tinkerpop.gremlin.driver.ser.AbstractGryoMessageSerializerV3d0.
org.apache.tinkerpop.shaded.kryo.KryoException: Encountered unregistered class ID: 65536
at org.apache.tinkerpop.gremlin.structure.io.gryo.AbstractGryoClassResolver.readClass(AbstractGryoClassResolver.java:148)
at org.apache.tinkerpop.shaded.kryo.Kryo.readClass(Kryo.java:670)
at org.apache.tinkerpop.shaded.kryo.Kryo.readClassAndObject(Kryo.java:781)
at org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.shaded.ShadedKryoAdapter.readClassAndObject(ShadedKryoAdapter.java:39)
at org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.shaded.ShadedKryoAdapter.readClassAndObject(ShadedKryoAdapter.java:24)
at org.apache.tinkerpop.gremlin.structure.io.gryo.GryoSerializersV3d0$VertexSerializer.read(GryoSerializersV3d0.java:164)
at org.apache.tinkerpop.gremlin.structure.io.gryo.GryoSerializersV3d0$VertexSerializer.read(GryoSerializersV3d0.java:132)
at org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.shaded.ShadedSerializerAdapter.read(ShadedSerializerAdapter.java:52)
at org.apache.tinkerpop.shaded.kryo.Kryo.readClassAndObject(Kryo.java:790)
at org.apache.tinkerpop.shaded.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:134)
at org.apache.tinkerpop.shaded.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:40)
at org.apache.tinkerpop.shaded.kryo.Kryo.readClassAndObject(Kryo.java:790)
at org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.shaded.ShadedKryoAdapter.readClassAndObject(ShadedKryoAdapter.java:39)
at org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.shaded.ShadedKryoAdapter.readClassAndObject(ShadedKryoAdapter.java:24)
at org.apache.tinkerpop.gremlin.driver.ser.ResponseMessageGryoSerializer.read(ResponseMessageGryoSerializer.java:56)
at org.apache.tinkerpop.gremlin.driver.ser.ResponseMessageGryoSerializer.read(ResponseMessageGryoSerializer.java:34)
at org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.shaded.ShadedSerializerAdapter.read(ShadedSerializerAdapter.java:52)
at org.apache.tinkerpop.shaded.kryo.Kryo.readObject(Kryo.java:686)
at org.apache.tinkerpop.gremlin.driver.ser.AbstractGryoMessageSerializerV3d0.deserializeResponse(AbstractGryoMessageSerializerV3d0.java:157)
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketGremlinResponseDecoder.decode(WebSocketGremlinResponseDecoder.java:47)
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketGremlinResponseDecoder.decode(WebSocketGremlinResponseDecoder.java:35)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:93)
at io.netty.handler.codec.http.websocketx.Utf8FrameValidator.channelRead(Utf8FrameValidator.java:82)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.base/java.lang.Thread.run(Thread.java:834)
21:23:05.894 ERROR - Could not process the response
io.netty.handler.codec.DecoderException: org.apache.tinkerpop.gremlin.driver.ser.SerializationException: org.apache.tinkerpop.shaded.kryo.KryoException: Encountered unregistered class ID: 65536
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:98)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:93)
at io.netty.handler.codec.http.websocketx.Utf8FrameValidator.channelRead(Utf8FrameValidator.java:82)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.tinkerpop.gremlin.driver.ser.SerializationException: org.apache.tinkerpop.shaded.kryo.KryoException: Encountered unregistered class ID: 65536
at org.apache.tinkerpop.gremlin.driver.ser.AbstractGryoMessageSerializerV3d0.deserializeResponse(AbstractGryoMessageSerializerV3d0.java:161)
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketGremlinResponseDecoder.decode(WebSocketGremlinResponseDecoder.java:47)
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketGremlinResponseDecoder.decode(WebSocketGremlinResponseDecoder.java:35)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
... 33 common frames omitted
Caused by: org.apache.tinkerpop.shaded.kryo.KryoException: Encountered unregistered class ID: 65536
at org.apache.tinkerpop.gremlin.structure.io.gryo.AbstractGryoClassResolver.readClass(AbstractGryoClassResolver.java:148)
at org.apache.tinkerpop.shaded.kryo.Kryo.readClass(Kryo.java:670)
at org.apache.tinkerpop.shaded.kryo.Kryo.readClassAndObject(Kryo.java:781)
at org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.shaded.ShadedKryoAdapter.readClassAndObject(ShadedKryoAdapter.java:39)
at org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.shaded.ShadedKryoAdapter.readClassAndObject(ShadedKryoAdapter.java:24)
at org.apache.tinkerpop.gremlin.structure.io.gryo.GryoSerializersV3d0$VertexSerializer.read(GryoSerializersV3d0.java:164)
at org.apache.tinkerpop.gremlin.structure.io.gryo.GryoSerializersV3d0$VertexSerializer.read(GryoSerializersV3d0.java:132)
at org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.shaded.ShadedSerializerAdapter.read(ShadedSerializerAdapter.java:52)
at org.apache.tinkerpop.shaded.kryo.Kryo.readClassAndObject(Kryo.java:790)
at org.apache.tinkerpop.shaded.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:134)
at org.apache.tinkerpop.shaded.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:40)
at org.apache.tinkerpop.shaded.kryo.Kryo.readClassAndObject(Kryo.java:790)
at org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.shaded.ShadedKryoAdapter.readClassAndObject(ShadedKryoAdapter.java:39)
at org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.shaded.ShadedKryoAdapter.readClassAndObject(ShadedKryoAdapter.java:24)
at org.apache.tinkerpop.gremlin.driver.ser.ResponseMessageGryoSerializer.read(ResponseMessageGryoSerializer.java:56)
at org.apache.tinkerpop.gremlin.driver.ser.ResponseMessageGryoSerializer.read(ResponseMessageGryoSerializer.java:34)
at org.apache.tinkerpop.gremlin.structure.io.gryo.kryoshim.shaded.ShadedSerializerAdapter.read(ShadedSerializerAdapter.java:52)
at org.apache.tinkerpop.shaded.kryo.Kryo.readObject(Kryo.java:686)
at org.apache.tinkerpop.gremlin.driver.ser.AbstractGryoMessageSerializerV3d0.deserializeResponse(AbstractGryoMessageSerializerV3d0.java:157)
... 36 common frames omitted
21:23:06.436 INFO - Checking for pending messages to complete before close on org.apache.tinkerpop.gremlin.driver.Connection$CheckForPending#147b072d
21:23:06.647 INFO - Checking for pending messages to complete before close on org.apache.tinkerpop.gremlin.driver.Connection$CheckForPending#1a6846d2
But I can see that the vertex is getting added in JanusGraph; it is only the client code that returns this error.
Any guidance would be helpful.
Thanks
Variations of this question have come up before (here and here on StackOverflow, and TINKERPOP-2372), but generally speaking it means that there is a serializer on the server that is not available on the client (or potentially the other way around, though that is less common). In your case, you probably just need to add the JanusGraphIoRegistry to your driver configuration as shown in those links, but for convenience and to directly answer your version of the question, I'll use your Cluster builder setup:
GryoMapper mapper = GryoMapper.build().addRegistry(JanusGraphIoRegistry.INSTANCE).create();
Cluster cluster = Cluster.build().
        addContactPoint("localhost").port(8182).
        serializer(new GryoMessageSerializerV1d0(mapper)).
        maxInProcessPerConnection(32).
        maxSimultaneousUsagePerConnection(32).
        maxContentLength(1000000).
        maxWaitForConnection(10).
        minConnectionPoolSize(5).
        maxConnectionPoolSize(20).create();
I do find it odd that you are getting Gryo-based errors but are using a GraphSON serializer in your code snippet. I assume that's just a bad copy/paste situation. Note that you may wish to review the JanusGraph documentation that goes over this aspect of configuring the driver at: 7.4.2.1. Connecting to JanusGraph via Gremlin Server
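If you would rather keep GraphSON on the driver side, the same registry approach applies. A rough sketch, assuming the server's GraphSON serializer is also configured with the JanusGraph registry:
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0;
import org.apache.tinkerpop.gremlin.structure.io.graphson.GraphSONMapper;
import org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry;

// Register the JanusGraph types with the GraphSON mapper used by the driver
GraphSONMapper mapper = GraphSONMapper.build().addRegistry(JanusGraphIoRegistry.INSTANCE).create();
Cluster cluster = Cluster.build().
        addContactPoint("localhost").port(8182).
        serializer(new GraphSONMessageSerializerV3d0(mapper)).
        create();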

logstash tcp input configuration using LogstashTcpSocketAppender

I'm new to Logstash, Elasticsearch and Kibana.
I have the following configuration in my logback.xml and logstash.conf files, and I'm facing this error (the Spanish java.io.IOException message translates to "An existing connection was forcibly closed by the remote host"):
[2015-06-17 22:47:14,726][INFO ][node ] [Brigade] version[1.6.0], pid[7208], build[cdd3ac4/2015-06-09T13:36:34Z]
[2015-06-17 22:47:14,727][INFO ][node ] [Brigade] initializing ...
[2015-06-17 22:47:14,731][INFO ][plugins ] [Brigade] loaded [], sites []
[2015-06-17 22:47:16,077][INFO ][env ] [Brigade] using [1] data paths, mounts [[OS (C:)]], net usable_space [363.9gb], net total_space [452.4gb], types [NTFS]
[2015-06-17 22:47:20,610][INFO ][node ] [Brigade] initialized
[2015-06-17 22:47:20,612][INFO ][node ] [Brigade] starting ...
[2015-06-17 22:47:21,106][INFO ][transport ] [Brigade] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.0.19:9300]}
[2015-06-17 22:47:21,495][INFO ][discovery ] [Brigade] elasticsearch/jiPxp6dpS4GWt7Nd9rui8A
[2015-06-17 22:47:25,399][INFO ][cluster.service ] [Brigade] new_master [Brigade][jiPxp6dpS4GWt7Nd9rui8A][CRTRINTECH01][inet[/192.168.0.19:9300]], reason: zen-disco-join (elected_as_master)
[2015-06-17 22:47:25,497][INFO ][gateway ] [Brigade] recovered [4] indices into cluster_state
[2015-06-17 22:47:26,067][INFO ][http ] [Brigade] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.0.19:9200]}
[2015-06-17 22:47:26,068][INFO ][node ] [Brigade] started
[2015-06-17 22:47:59,810][INFO ][cluster.service ] [Brigade] added {[logstash-CRTRINTECH01-1736-7936][59uCFeWxTImZlrlyy2_gpQ][CRTRINTECH01][inet[/192.168.0.19:9301]]{data=false, client=true},}, reason: zen-disco-receive(join from node[[logstash-CRTRINTECH01-1736-7936][59uCFeWxTImZlrlyy2_gpQ][CRTRINTECH01][inet[/192.168.0.19:9301]]{data=false, client=true}])
[2015-06-17 22:49:02,168][WARN ][transport.netty ] [Brigade] exception caught on transport layer [[id: 0xaee4d9f8, /192.168.0.19:50224 => /192.168.0.19:9300]], closing connection
java.io.IOException: Se ha forzado la interrupción de una conexión existente por el host remoto
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2015-06-17 22:49:02,170][WARN ][transport.netty ] [Brigade] exception caught on transport layer [[id: 0xec6423c2, /192.168.0.19:50235 => /192.168.0.19:9301]], closing connection
java.io.IOException: Se ha forzado la interrupción de una conexión existente por el host remoto
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2015-06-17 22:49:02,172][WARN ][transport.netty ] [Brigade] exception caught on transport layer [[id: 0x231490be, /192.168.0.19:50228 => /192.168.0.19:9300]], closing connection
java.io.IOException: Se ha forzado la interrupción de una conexión existente por el host remoto
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2015-06-17 22:49:02,170][WARN ][transport.netty ] [Brigade] exception caught on transport layer [[id: 0x7d6271fb, /192.168.0.19:50241 => /192.168.0.19:9301]], closing connection
java.io.IOException: Se ha forzado la interrupción de una conexión existente por el host remoto
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2015-06-17 22:49:02,170][WARN ][transport.netty ] [Brigade] exception caught on transport layer [[id: 0x1c241c53, /192.168.0.19:50238 => /192.168.0.19:9301]], closing connection
java.io.IOException: Se ha forzado la interrupción de una conexión existente por el host remoto
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2015-06-17 22:49:02,170][WARN ][transport.netty ] [Brigade] exception caught on transport layer [[id: 0xf32ef395, /192.168.0.19:50237 => /192.168.0.19:9301]], closing connection
java.io.IOException: Se ha forzado la interrupción de una conexión existente por el host remoto
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2015-06-17 22:49:02,169][WARN ][transport.netty ] [Brigade] exception caught on transport layer [[id: 0x2ac88eb2, /192.168.0.19:50227 => /192.168.0.19:9300]], closing connection
java.io.IOException: Se ha forzado la interrupción de una conexión existente por el host remoto
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2015-06-17 22:49:02,169][WARN ][transport.netty ] [Brigade] exception caught on transport layer [[id: 0x3472722d, /192.168.0.19:50236 => /192.168.0.19:9301]], closing connection
java.io.IOException: Se ha forzado la interrupción de una conexión existente por el host remoto
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2015-06-17 22:49:02,169][WARN ][transport.netty ] [Brigade] exception caught on transport layer [[id: 0x0659d284, /192.168.0.19:50239 => /192.168.0.19:9301]], closing connection
java.io.IOException: Se ha forzado la interrupción de una conexión existente por el host remoto
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2015-06-17 22:49:02,169][WARN ][transport.netty ] [Brigade] exception caught on transport layer [[id: 0x0c3047e2, /192.168.0.19:50226 => /192.168.0.19:9300]], closing connection
java.io.IOException: Se ha forzado la interrupción de una conexión existente por el host remoto
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2015-06-17 22:49:02,169][WARN ][transport.netty ] [Brigade] exception caught on transport layer [[id: 0x5570cb33, /192.168.0.19:50240 => /192.168.0.19:9301]], closing connection
java.io.IOException: Se ha forzado la interrupción de una conexión existente por el host remoto
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2015-06-17 22:49:02,169][WARN ][transport.netty ] [Brigade] exception caught on transport layer [[id: 0x4fe4e2f3, /192.168.0.19:50223 => /192.168.0.19:9300]], closing connection
java.io.IOException: Se ha forzado la interrupción de una conexión existente por el host remoto
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2015-06-17 22:49:02,281][WARN ][transport.netty ] [Brigade] exception caught on transport layer [[id: 0x209b3279, /192.168.0.19:50231 => /192.168.0.19:9300]], closing connection
java.io.IOException: Se ha forzado la interrupción de una conexión existente por el host remoto
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2015-06-17 22:49:02,267][WARN ][transport.netty ] [Brigade] exception caught on transport layer [[id: 0xcbe7acff, /192.168.0.19:50234 => /192.168.0.19:9300]], closing connection
java.io.IOException: Se ha forzado la interrupción de una conexión existente por el host remoto
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2015-06-17 22:49:02,243][INFO ][cluster.service ] [Brigade] removed {[logstash-CRTRINTECH01-1736-7936][59uCFeWxTImZlrlyy2_gpQ][CRTRINTECH01][inet[/192.168.0.19:9301]]{data=false, client=true},}, reason: zen-disco-node_failed([logstash-CRTRINTECH01-1736-7936][59uCFeWxTImZlrlyy2_gpQ][CRTRINTECH01][inet[/192.168.0.19:9301]]{data=false, client=true}), reason transport disconnected
[2015-06-17 22:49:02,175][WARN ][transport.netty ] [Brigade] exception caught on transport layer [[id: 0x9b3d6f83, /192.168.0.19:50243 => /192.168.0.19:9301]], closing connection
java.io.IOException: Se ha forzado la interrupción de una conexión existente por el host remoto
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2015-06-17 22:49:02,175][WARN ][transport.netty ] [Brigade] exception caught on transport layer [[id: 0xd650c49b, /192.168.0.19:50232 => /192.168.0.19:9300]], closing connection
java.io.IOException: Se ha forzado la interrupción de una conexión existente por el host remoto
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Here is the configuration of the logback.xml file
<configuration scan="true" scanPeriod="30 seconds">
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </layout>
    </appender>
    <appender name="STASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <remoteHost>localhost</remoteHost>
        <port>4560</port>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>
    <root level="ALL">
        <appender-ref ref="STDOUT" />
        <appender-ref ref="STASH" />
    </root>
</configuration>
Here is the logstash configuration
input {
    tcp {
        port => 4560
        host => "localhost"
    }
}
output {
    elasticsearch {
        host => "localhost"
        port => 9300
        index => "closecompliance"
        manage_template => false
    }
    stdout {
    }
}
And I uncommented the line related to transport.tcp.port: 9300 in elasticsearch.yml.
Also, if I change the input in the logstash.conf file to stdin instead of tcp, I'm able to see the information in Kibana.
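For reference, that working stdin test only swaps out the input block; the output section stays the same:
input {
    stdin { }
}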
I'm using slf4j for logging.
Here is the main class I am running
package com.javacodegeeks.examples.logbackexample;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class BasicConfApp {

    final static Logger logger = LoggerFactory.getLogger(BasicConfApp.class);

    public static void main(String[] args) {
        logger.info("Msg #1");
        logger.warn("Msg #2");
        logger.error("Msg #3");
        logger.debug("Msg #4");
    }
}
Thanks for all the help you can provide on this.
Regards,

unable to run kestrel on mono/linux

I'm trying to run the sample ASP.NET 5 MVC application from the aspnet/Home samples (kpm build runs without errors). When I try to run k kestrel I get the following error (the German message translates to "cannot open the shared object file: no such file or directory"):
Mono: DllImport unable to load library 'libapi-ms-win-core-file-l1-2-0.dll: Kann die Shared-Object-Datei nicht öffnen: Datei oder Verzeichnis nicht gefunden'.
Mono: DllImport unable to load library 'libapi-ms-win-core-file-l1-2-0.dll: Kann die Shared-Object-Datei nicht öffnen: Datei oder Verzeichnis nicht gefunden'.
Stacktrace:
at <unknown> <0xffffffff>
at (wrapper managed-to-native) Microsoft.AspNet.Server.Kestrel.Networking.PlatformApis/LinuxApis.dlopen (string,int) <0xffffffff>
at Microsoft.AspNet.Server.Kestrel.Networking.PlatformApis/LinuxApis.LoadLibrary (string) <0x0001b>
at Microsoft.AspNet.Server.Kestrel.Networking.Libuv.Load (string) <0x0002e>
at Microsoft.AspNet.Server.Kestrel.KestrelEngine..ctor (Microsoft.Framework.Runtime.ILibraryManager) <0x0038f>
at Kestrel.ServerFactory.Start (Microsoft.AspNet.Builder.IServerInformation,System.Func`2<object, System.Threading.Tasks.Task>) <0x0011f>
at Microsoft.AspNet.Hosting.HostingEngine.Start (Microsoft.AspNet.Hosting.HostingContext) <0x001d4>
at Microsoft.AspNet.Hosting.Program.Main (string[]) <0x00312>
at (wrapper runtime-invoke) <Module>.runtime_invoke_void__this___object (object,intptr,intptr,intptr) <0xffffffff>
at <unknown> <0xffffffff>
at (wrapper managed-to-native) System.Reflection.MonoMethod.InternalInvoke (System.Reflection.MonoMethod,object,object[],System.Exception&) <0xffffffff>
at System.Reflection.MonoMethod.Invoke (object,System.Reflection.BindingFlags,System.Reflection.Binder,object[],System.Globalization.CultureInfo) <0x000af>
at System.Reflection.MethodBase.Invoke (object,object[]) <0x00046>
at Microsoft.Framework.Runtime.Common.EntryPointExecutor.Execute (System.Reflection.Assembly,string[],System.IServiceProvider) <0x000f7>
at Microsoft.Framework.ApplicationHost.Program.ExecuteMain (Microsoft.Framework.Runtime.DefaultHost,string,string[]) <0x001eb>
at Microsoft.Framework.ApplicationHost.Program.Main (string[]) <0x0035b>
at (wrapper runtime-invoke) <Module>.runtime_invoke_object__this___object (object,intptr,intptr,intptr) <0xffffffff>
at <unknown> <0xffffffff>
at (wrapper managed-to-native) System.Reflection.MonoMethod.InternalInvoke (System.Reflection.MonoMethod,object,object[],System.Exception&) <0xffffffff>
at System.Reflection.MonoMethod.Invoke (object,System.Reflection.BindingFlags,System.Reflection.Binder,object[],System.Globalization.CultureInfo) <0x000af>
at System.Reflection.MethodBase.Invoke (object,object[]) <0x00046>
at Microsoft.Framework.Runtime.Common.EntryPointExecutor.Execute (System.Reflection.Assembly,string[],System.IServiceProvider) <0x000f7>
at kre.host.Bootstrapper.Main (string[]) <0x002c3>
at (wrapper runtime-invoke) <Module>.runtime_invoke_object__this___object (object,intptr,intptr,intptr) <0xffffffff>
at <unknown> <0xffffffff>
at (wrapper managed-to-native) System.Reflection.MonoMethod.InternalInvoke (System.Reflection.MonoMethod,object,object[],System.Exception&) <0xffffffff>
at System.Reflection.MonoMethod.Invoke (object,System.Reflection.BindingFlags,System.Reflection.Binder,object[],System.Globalization.CultureInfo) <0x000af>
at System.Reflection.MethodBase.Invoke (object,object[]) <0x00046>
at kre.hosting.RuntimeBootstrapper.ExecuteAsync (string[]) <0x0139f>
at kre.hosting.RuntimeBootstrapper.Execute (string[]) <0x0004b>
at EntryPoint.Main (string[]) <0x00143>
at (wrapper runtime-invoke) <Module>.runtime_invoke_int_object (object,intptr,intptr,intptr) <0xffffffff>
Native stacktrace:
mono(mono_handle_native_sigsegv+0xf3) [0x811b9b3]
mono(mono_arch_handle_altstack_exception+0xb4) [0x8167054]
mono(mono_sigsegv_signal_handler+0x107) [0x8096627]
[0xb779b40c]
/lib/ld-linux.so.2(+0xe9f6) [0xb77aa9f6]
/lib/ld-linux.so.2(+0x11ba8) [0xb77adba8]
/usr/lib/i386-linux-gnu/libdl.so(+0xc2b) [0xb4963c2b]
/lib/ld-linux.so.2(+0xdde6) [0xb77a9de6]
/usr/lib/i386-linux-gnu/libdl.so(+0x10bc) [0xb49640bc]
/usr/lib/i386-linux-gnu/libdl.so(dlopen+0x41) [0xb4963b61]
[0xb4987a0c]
[0xb4987994]
[0xb4987607]
[0xb4986f50]
[0xb4986918]
[0xb5272275]
[0xb562bc5b]
[0xb562b8ed]
mono() [0x8096101]
Debug info from gdb:
System is Debian 7.8 (x86)
libuv v1.0.0-rc2 and mono 4.1 are installed.
I'm going crazy trying to get this running. I checked the permissions on the libuv library; it is readable by anyone.
I'm using the beta3 version of ASP.NET 5.
Does anybody have an idea what is going wrong here? I'm looking for a hint on where to look for the problem.
If you have Mono 4.1, install dnvm instead of k. Mono 3.12 works with k, and 4.1 works with dnvm, as far as I know.
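For reference, the dnvm install one-liner documented in the aspnet/Home README around that time looked roughly like this (please verify the exact URL and branch flag against the repo before running; they are quoted from memory):
# Install dnvm (the .NET Version Manager that replaced kvm) and load it into the current shell
curl -sSL https://raw.githubusercontent.com/aspnet/Home/master/dnvminstall.sh | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.sh
# Then pull the latest runtime and check what is installed
dnvm upgrade
dnvm list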

Error installing OpenLDAP on RedHat 6 (checksum error on "/etc/openldap/slapd.d/cn=config/olcDatabase={2}bdb.ldif")

I tried to install OpenLDAP on Linux RedHat 6, but I receive an error that looks like this:
5511c732 ldif_read_file: checksum error on "/etc/openldap/slapd.d/cn=config/olcDatabase={1}monitor.ldif"
Below is the file content:
# AUTO-GENERATED FILE - DO NOT EDIT!! Use ldapmodify.
# CRC32 03c4de5f
dn: olcDatabase={1}monitor
objectClass: olcDatabaseConfig
olcDatabase: {1}monitor
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=externa
l,cn=auth" read by dn.base="cn=Manager,dc=my-domain,dc=com" read by * none
olcAddContentAcl: FALSE
olcLastMod: TRUE
olcMaxDerefDepth: 15
olcReadOnly: FALSE
olcSyncUseSubentry: FALSE
olcMonitoring: FALSE
structuralObjectClass: olcDatabaseConfig
entryUUID: 7f788d0a-66a8-1034-968a-61cac64128b9
creatorsName: cn=config
createTimestamp: 20150324193414Z
entryCSN: 20150324193414.304614Z#000000#000#000000
modifiersName: cn=config
modifyTimestamp: 20150324193414Z
and
5511c732 ldif_read_file: checksum error on "/etc/openldap/slapd.d/cn=config/olcDatabase={2}bdb.ldif"
Below is the file content:
# AUTO-GENERATED FILE - DO NOT EDIT!! Use ldapmodify.
# CRC32 dd2c457a
dn: olcDatabase={2}bdb
objectClass: olcDatabaseConfig
objectClass: olcBdbConfig
olcDatabase: {2}bdb
olcSuffix: dc=example,dc=com
olcAddContentAcl: FALSE
olcLastMod: TRUE
olcMaxDerefDepth: 15
olcReadOnly: FALSE
olcRootDN: cn=Manager,dc=example,dc=com
olcSyncUseSubentry: FALSE
olcMonitoring: TRUE
olcDbDirectory: /var/lib/ldap
olcDbCacheSize: 1000
olcDbCheckpoint: 1024 15
olcDbNoSync: FALSE
olcDbDirtyRead: FALSE
olcDbIDLcacheSize: 0
olcDbIndex: objectClass pres,eq
olcDbIndex: cn pres,eq,sub
olcDbIndex: uid pres,eq,sub
olcDbIndex: uidNumber pres,eq
olcDbIndex: gidNumber pres,eq
olcDbIndex: ou pres,eq,sub
olcDbIndex: mail pres,eq,sub
olcDbIndex: sn pres,eq,sub
olcDbIndex: givenName pres,eq,sub
olcDbIndex: memberUid pres,eq,sub
olcDbIndex: loginShell pres,eq
olcDbIndex: nisMapName pres,eq,sub
olcDbIndex: nisMapEntry pres,eq,sub
olcDbLinearIndex: FALSE
olcDbMode: 0600
olcDbSearchStack: 16
olcDbShmKey: 0
olcDbCacheFree: 1
olcDbDNcacheSize: 0
structuralObjectClass: olcBdbConfig
entryUUID: 7f7892aa-66a8-1034-968b-61cac64128b9
creatorsName: cn=config
createTimestamp: 20150324193414Z
entryCSN: 20150324193414.304614Z#000000#000#000000
modifiersName: cn=config
modifyTimestamp: 20150324193414Z
olcRootPW: {SSHA}dGaM0fyxrjotXLEKz8Jjl5yoBhpNxLXX
olcTLSCertificateFile: /etc/pki/tls/certs/example.pem
olcTLSCertificateKeyFile: /etc/pki/tls/certs/examplekey.pem
For the first error, I had modified dn.base="cn=Manager,dc=my-domain,dc=com" (Manager was originally lowercase: dn.base="cn=manager,dc=my-domain,dc=com").
For the second error, I had made these changes:
- olcSuffix: dc=example,dc=com (was olcSuffix: dc=my-domain,dc=com)
- olcRootPW: {SSHA}dGaM0fyxrjotXLEKz8Jjl5yoBhpNxLXX (added)
- olcTLSCertificateFile: /etc/pki/tls/certs/example.pem (added)
- olcTLSCertificateKeyFile: /etc/pki/tls/certs/examplekey.pem (added)
Try the settings below:
vim /etc/profile
Press SHIFT + G to go to the end of the file and add export LC_ALL="en_US.UTF-8"
source /etc/profile
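A non-interactive equivalent of those vim steps, in case you prefer to script it:
# Append the locale override to /etc/profile and reload it in the current shell
echo 'export LC_ALL="en_US.UTF-8"' >> /etc/profile
source /etc/profile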
