Neo4j Server certificate is not trusted - encryption

I have just set up my Neo4j server on a VM on Google Cloud. I'm using Enterprise version 4.1.1, and I've finished following the great post (here) by David Allen about how to get a certificate with LetsEncrypt.
This has all worked perfectly, and I now have a fully secure Neo4j server that I can access through the browser (MYDOMAIN.COM:7473/browser) using my hostname. However, I am now having issues getting my application to connect to the server using the JavaScript driver.
I keep getting the following error:
Failed to connect to server. Please ensure that your database is
listening on the correct host and port and that you have compatible
encryption settings both on Neo4j server and driver. Note that the
default encryption setting has changed in Neo4j 4.0. Caused by: Server
certificate is not trusted. If you trust the database you are
connecting to, use TRUST_CUSTOM_CA_SIGNED_CERTIFICATES and add the
signing certificate, or the server certificate, to the list of
certificates trusted by this driver using neo4j.driver(.., {
trustedCertificates:['path/to/certificate.crt']}). This is a security
measure to protect against man-in-the-middle attacks. If you are just
trying Neo4j out and are not concerned about encryption, simply
disable it using encrypted="ENCRYPTION_OFF" in the driver options.
Socket responded with: ERR_TLS_CERT_ALTNAME_INVALID
I have read through the driver documentation (here) and I have added both the trust: "TRUST_CUSTOM_CA_SIGNED_CERTIFICATES" and trustedCertificates:[] settings. I downloaded all of the certificates from my server (cert.pem, chain.pem, fullchain.pem and privkey.pem) and linked to them in the trustedCertificates setting.
Unfortunately I'm still getting the same error. For reference, this is how my driver is currently configured:
// This module can be used to serve the GraphQL endpoint
// as a lambda function
const { ApolloServer } = require('apollo-server-lambda')
const { makeAugmentedSchema } = require('neo4j-graphql-js')
const neo4j = require('neo4j-driver')

// This module is copied during the build step
// Be sure to run `npm run build`
const { typeDefs } = require('./graphql-schema')

const driver = neo4j.driver(
  process.env.NEO4J_URI,
  neo4j.auth.basic(
    process.env.NEO4J_USER,
    process.env.NEO4J_PASSWORD
  ),
  {
    encrypted: process.env.NEO4J_ENCRYPTED ? 'ENCRYPTION_ON' : 'ENCRYPTION_OFF',
    trust: 'TRUST_CUSTOM_CA_SIGNED_CERTIFICATES',
    trustedCertificates: [
      '../../certificates/cert.pem',
      '../../certificates/chain.pem',
      '../../certificates/fullchain.pem',
      '../../certificates/privkey.pem'
    ],
    logging: {
      level: 'debug',
      logger: (level, message) => console.log(level + ' ' + message)
    }
  }
)

const server = new ApolloServer({
  schema: makeAugmentedSchema({ typeDefs }),
  context: { driver, neo4jDatabase: process.env.NEO4J_DATABASE },
  introspection: true,
  playground: true,
})

exports.handler = server.createHandler()
I'm using the latest build of the driver, v2.14.4, and have enabled full logging, but I'm not getting any more information than the above. I just can't figure out what I'm doing wrong - does anyone have any ideas?

I found a solution to this problem - I had a look at the documentation (here) and found that I needed to update my NEO4J_URI from bolt://SO.ME.IP.ADDRESS:7687 to neo4j://MYDOMAIN.COM:7687. Now that I've done this, all is working as expected.
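For reference, a minimal sketch of the working driver setup (placeholders throughout; this assumes the LetsEncrypt certificate matches the hostname used in the URI, in which case the default system CA trust is enough and no custom trust settings are needed):

// NEO4J_URI must use the hostname on the certificate, not the raw IP,
// otherwise the TLS hostname check fails with ERR_TLS_CERT_ALTNAME_INVALID,
// e.g. NEO4J_URI=neo4j://MYDOMAIN.COM:7687
const neo4j = require('neo4j-driver')

const driver = neo4j.driver(
  process.env.NEO4J_URI,
  neo4j.auth.basic(process.env.NEO4J_USER, process.env.NEO4J_PASSWORD),
  {
    // With a publicly trusted LetsEncrypt certificate, no custom
    // trustedCertificates entries are needed.
    encrypted: 'ENCRYPTION_ON',
  }
)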

Related

Ansible Hetzner Cloud - Create a server in private network

I am using Ansible to create a server in the Hetzner Cloud. The playbook reads:
- name: create the server at Hetzner
  hetzner.hcloud.hcloud_server:
    name: "{{server_hostname}}"
    enable_ipv4: false
    enable_ipv6: false
    server_type: cx11
    location: "{{server_location}}"
    image: ubuntu-22.04
    ssh_keys:
      - "mykey"
    state: present
    api_token: "{{hetzner_secret}}"
    private_networks: ipfire
  register: server
My aim is to integrate the new server into the private network named 'ipfire' that I have previously created. The server should not be accessible via the internet, so I have disabled IPv4 and IPv6. Instead, I'd like to reach the private network 'ipfire' via OpenVPN and connect to the server via SSH from there.
Unfortunately, I get an error message as follows:
PLAY [Order servers] ********************************************************************************************************
TASK [hetznerserver : create the server at Hetzner] *************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (hetzner.hcloud.hcloud_server) module: private_networks. Supported parameters include: rebuild_protection, api_token, location, enable_ipv6, upgrade_disk, ipv4, endpoint, ipv6, firewalls, server_type, state, force, labels, ssh_keys, delete_protection, image, id, name, enable_ipv4, placement_group, force_upgrade, user_data, datacenter, rescue_mode, allow_deprecated_image, volumes, backups."}
PLAY RECAP ******************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
The private_networks parameter does not seem to work like this?
Error messages like Unsupported parameters for (<moduleName>) module: <givenParameter>. Supported parameters include: <supportedParametersList> usually mean that the installed version of the module does not recognize the given parameter.
Therefore one may need to look up the respective documentation, in this case hcloud_server module – Create and manage cloud servers on the Hetzner Cloud.
If the documentation shows that the parameters in question are available, it indicates
either a version mismatch of the module used, meaning the installed version is too old and an update is necessary,
or a bug within the module code, in which case further debugging and investigation within the module code is necessary.
Code and documentation links:
Ansible documentation: Community Authors > hetzner > hcloud > hcloud_server
GitHub: ansible-collections / hetzner.hcloud
After further investigation it might turn out that the parameter in question was introduced only recently, for example:
Github hetzner.hcloud Issue #150 "Unable to create cloud server without public ipv4 and ipv6"
Github hetzner.hcloud Pull #160 "Add possibility to specify private network when creating or updating servers"
which indicates, in your example case, that you'll need to update the Ansible collection in question, since the parameter wasn't available in the version of the module you're using but was only introduced as of v1.9.0.
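A minimal sketch of checking and upgrading the collection (the version pin follows from the PR above):

# Show the installed version of the collection
ansible-galaxy collection list hetzner.hcloud

# Upgrade to a version that includes private_networks (v1.9.0 or later)
ansible-galaxy collection install 'hetzner.hcloud:>=1.9.0' --force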

Problem with Listening to gRPC Requests Over HTTP on Cloud Run

I was running my gRPC services on Cloud Run without any problem. But today, I realized they are no longer working over HTTP, including services that have not changed in a long time.
The exception is: "The request :scheme header 'https' does not match the transport scheme 'http'."
So, is there any change on Cloud Run side or is there anything that I am missing?
Update: If I change the code to receive requests over HTTPS, they will probably work (not tested yet). But that is not the point: they were running without any issue before.
Update 2: I implemented Program.cs and the Dockerfile as explained here https://cloud.google.com/run/docs/quickstarts/build-and-deploy/c-sharp and this is not working either.
Update 3: Same with this sample project: https://github.com/turgayozgur/dotnet-hello-world-grpc The sample application isn't expecting HTTPS in the :scheme header. Why does Cloud Run set that header to https even though the request between Cloud Run and the application is not an HTTPS request?
Similar issues:
https://github.com/linkerd/linkerd/issues/2401
https://www.gitmemory.com/issue/dotnet/aspnetcore/30532/787011248
Here's a workaround that worked for me:
Download the aspnetcore repo. I use v5, so: git clone https://github.com/dotnet/aspnetcore.git && cd aspnetcore && git checkout tags/v5.0.5
Find the file /src/Servers/Kestrel/Core/src/Internal/Http2/Http2Stream.cs and comment out lines 244-250. If not on version 5.0.5, it's the block of code that starts with if (!ReferenceEquals(headerScheme, Scheme) && - comment out that whole if block.
Build the project with ./build.sh --configuration Release in the root of the project. It had a few errors, but the file I needed was built.
Take the file aspnetcore/artifacts/bin/Microsoft.AspNetCore.Server.Kestrel.Core/Release/net5.0/Microsoft.AspNetCore.Server.Kestrel.Core.dll and copy it to a folder with the following Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:5.0
COPY ./Microsoft.AspNetCore.Server.Kestrel.Core.dll /usr/share/dotnet/shared/Microsoft.AspNetCore.App/5.0.5/
A bit of a pain, but we're back up on Cloud Run with this ...
Edit: check wlhe's comment; it seems there is a bug in aspnetcore that broke this. Reverting the version can help.
Applications serving on Cloud Run have to serve unencrypted HTTP traffic (and in the gRPC case, since it uses HTTP/2, this is called "h2c"). Do not try to terminate TLS in your application, because you won't be getting any encrypted requests.
Encryption is terminated/added by the infrastructure and your service does not see it.
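To illustrate, a minimal Kestrel setup for an ASP.NET Core 5 gRPC service on Cloud Run could look like this sketch (Startup is assumed to be the one generated by the standard gRPC template):

// Program.cs - listen for unencrypted HTTP/2 (h2c) on the port Cloud Run provides
using System;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Server.Kestrel.Core;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.ConfigureKestrel(options =>
                {
                    // Cloud Run injects the listening port via $PORT;
                    // TLS is terminated by the infrastructure, so no HTTPS here.
                    var port = int.Parse(Environment.GetEnvironmentVariable("PORT") ?? "8080");
                    options.ListenAnyIP(port, listenOptions =>
                        listenOptions.Protocols = HttpProtocols.Http2);
                });
                webBuilder.UseStartup<Startup>();
            });
}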
To get a sample C# gRPC application working on Cloud Run:
clone https://github.com/grpc/grpc repository
go to grpc/examples/csharp/RouteGuide
update RouteGuide.csproj like this:
- <Protobuf Include="..\..\..\protos\route_guide.proto" Link="protos\route_guide.proto" />
+ <Protobuf Include="protos\route_guide.proto" />
go into RouteGuideServer
change the program to listen on the PORT env var or 8080, on all interfaces (e.g. * or 0.0.0.0), and delete that interactive read-from-terminal prompt:
diff --git a/examples/csharp/RouteGuide/RouteGuideServer/Program.cs b/examples/csharp/RouteGuide/RouteGuideServer/Program.cs
index dff6486e59..a32aff7222 100644
--- a/examples/csharp/RouteGuide/RouteGuideServer/Program.cs
+++ b/examples/csharp/RouteGuide/RouteGuideServer/Program.cs
@@ -17,6 +17,7 @@ using System;
 using System.Collections.Generic;
 using System.Linq;
 using System.Text;
+using System.Threading;
 using System.Threading.Tasks;

 namespace Routeguide
@@ -25,22 +26,18 @@ namespace Routeguide
     {
         static void Main(string[] args)
         {
-            const int Port = 30052;
+            const int Port = 8080;
             var features = RouteGuideUtil.LoadFeatures();
             Server server = new Server
             {
                 Services = { RouteGuide.BindService(new RouteGuideImpl(features)) },
-                Ports = { new ServerPort("localhost", Port, ServerCredentials.Insecure) }
+                Ports = { new ServerPort("*", Port, ServerCredentials.Insecure) }
             };
             server.Start();
-            Console.WriteLine("RouteGuide server listening on port " + Port);
-            Console.WriteLine("Press any key to stop the server...");
-            Console.ReadKey();
-
-            server.ShutdownAsync().Wait();
+            Thread.Sleep(Timeout.Infinite);
         }
     }
 }
gcloud beta run deploy routeguide --source=. --allow-unauthenticated
Your gRPC service is now publicly available, listening on its URL at port 443.

Sequelize-cli returns "Unknown Database" when doing migrations

I have been using sequelize migrations all this while with no issue.
For example, in our development server:
"development": {
"username": "root",
"password": "password",
"database": "db",
"host": "127.0.0.1",
"dialect": "mysql"
}
using sequelize-cli works fine:
npx sequelize db:migrate
results:
Sequelize CLI [Node: 12.16.1, CLI: 6.2.0, ORM: 6.3.5]
Loaded configuration file "config\config.json".
Using environment "development".
No migrations were executed, database schema was already up to date.
Same goes for our production server, where the db is on a different server than the app:
"production": {
"username": "root",
"password": "password",
"database": "db",
"host": "172.xx.xx.11",
"dialect": "mysql"
}
So recently we upgraded our production setup to have 3 db servers running MariaDB, managed by a load balancer (MaxScale) as a Galera cluster, using the same setup as before. So now it's something like:
server a: 172.xx.xx.11,
server b: 172.xx.xx.12,
server c: 172.xx.xx.13,
load balancer: 172.xx.xx.10
our new config is like:
"production": {
"username": "root",
"password": "password",
"database": "db",
"host": "172.xx.xx.10",
"dialect": "mysql"
}
There is no firewall opened between the app server and the db servers directly, only from the app server to the load balancer.
Testing the connection between the app server and the load balancer with sequelize seems to show no issue:
it can pass through if the username and password are correct;
a wrong username or wrong password gives
ERROR: Access denied for user 'root'@'172.xx.xx.10' (using password: YES)
No issue there - just saying that there is a connection.
Then there is also no issue using:
npx sequelize db:drop
or
npx sequelize db:create
resulting in
Sequelize CLI [Node: 12.16.1, CLI: 6.2.0, ORM: 6.3.5]
Loaded configuration file "config\config.json".
Using environment "production".
Database db created.
I verified in all our db servers that the database was indeed dropped and created.
But when I tried doing migrations, this happens:
Sequelize CLI [Node: 12.16.1, CLI: 6.2.0, ORM: 6.3.5]
Loaded configuration file "config\config.json".
Using environment "production".
ERROR: Unknown database 'db'
I have verified that all our db servers do have that 'db' database; it was even created by sequelize based on the config, but somehow sequelize can't seem to recognize or identify that 'db' database.
Please help if you have any experience like this before, and do let me know if you need more info.
Thanks.
You can enable the verbose log level in MaxScale by adding log_info=true under the [maxscale] section. This should help explain what is going on and why it is failing.
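For example, in the MaxScale configuration file (a minimal sketch; /etc/maxscale.cnf is the usual default path, but it may vary by installation):

# /etc/maxscale.cnf
[maxscale]
log_info=true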
It is possible that Sequelize does something that assumes it's working with the same database server. For example, doing an INSERT and immediately reading the inserted value will always work on a single server but with a distributed setup, it's possible the values haven't replicated to all nodes.
If you can't find an explanation as to why it behaves like this or you think MaxScale is doing something wrong, please open a bug report on the MariaDB Jira under the MaxScale project.
Turns out the maxscale user didn't have enough privileges. Granting the SHOW DATABASES privilege to the maxscale user fixed my issue.
More info:
https://mariadb.com/kb/en/mariadb-maxscale-14/maxscale-configuration-usage-scenarios/#service
Related issue on MariaDB Jira
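For reference, the grant might look something like this (the '%' host pattern is an assumption; in practice, restrict it to the host your MaxScale user connects from):

-- grant the missing privilege to the MaxScale service user
GRANT SHOW DATABASES ON *.* TO 'maxscale'@'%';
FLUSH PRIVILEGES;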

Redis - ERR unknown command 'EVAL'

I am trying to use the Redis cache (Microsoft.Extensions.Caching.Redis) with .NET Core 2.1, and for that purpose I followed this tutorial: https://dotnetcoretutorials.com/2017/01/06/using-redis-cache-net-core/ Now the issue is that when I try to get data using _distributedCache.GetStringAsync(key), I get the error "ERR unknown command 'EVAL'". I have searched for this kind of error and found that it could happen because of an older version of Redis, but I am using the latest version of Microsoft.Extensions.Caching.Redis (version 2.1.1).
Here is my code:
public async Task<string> RetrieveCache(string key)
{
    var data = await _distributedCache.GetStringAsync(key);
    if (string.IsNullOrWhiteSpace(data))
        return "";
    return data;
}
appsettings.json:
"RedisServer": {
"Server": "12.66.909.61:6379,password=pwd",
"InstanceName": "Store.Toys"
}
and Startup.cs:
services.AddDistributedRedisCache(option =>
{
    option.Configuration = Configuration["RedisServer:Server"];
    option.InstanceName = Configuration["RedisServer:InstanceName"];
});
Any help?
The server needs to support the feature; it sounds like the server you're targeting lacks support for it.
The EVAL command, according to the documentation, has been supported since Redis server version 2.6.
You can find out what version you currently have on your remote server with:
$ telnet 12.66.909.61 6379
# and then type:
info
or using the Redis client: redis-cli -h 12.66.909.61 -p 6379 -a pwd info
You will get
# Server
redis_version:2.8.24
...
Then you will need to upgrade the redis-server package on your server to at least 2.6.
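For instance, on a Debian/Ubuntu server the upgrade could look like this sketch (distribution and package name are assumptions; use your distro's equivalent):

sudo apt-get update
sudo apt-get install --only-upgrade redis-server
redis-server --version   # confirm the version is now >= 2.6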

Why does Meteor Up (MUP) fail on authentication?

I am currently trying to deploy a Meteor project to an external server for the first time. The server is hosted by DigitalOcean, running Ubuntu 16.04, and has an SSH key set up for password-free access.
The error I am getting from MUP is:
[159.203.165.13] - Setup Docker
events.js:165
throw er; // Unhandled 'error' event
^
Error: All configured authentication methods failed
at tryNextAuth (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:290:17)
at SSH2Stream.onUSERAUTH_FAILURE (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:469:5)
at SSH2Stream.emit (events.js:180:13)
at parsePacket (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:3647:10)
at SSH2Stream._transform (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:551:13)
at SSH2Stream.Transform._read (_stream_transform.js:185:10)
at SSH2Stream._read (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:212:15)
at SSH2Stream.Transform._write (_stream_transform.js:173:12)
at doWrite (_stream_writable.js:410:12)
at writeOrBuffer (_stream_writable.js:396:5)
at SSH2Stream.Writable.write (_stream_writable.js:294:11)
at Socket.ondata (_stream_readable.js:651:20)
at Socket.emit (events.js:180:13)
at addChunk (_stream_readable.js:274:12)
at readableAddChunk (_stream_readable.js:261:11)
at Socket.Readable.push (_stream_readable.js:218:10)
Emitted 'error' event at:
at tryNextAuth (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:292:12)
at SSH2Stream.onUSERAUTH_FAILURE (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:469:5)
[... lines matching original stack trace ...]
at Socket.Readable.push (_stream_readable.js:218:10)
At this point I have tried several solutions involving the mup file as per other recommendations, such as:
1) Adding in a password - gives the exact same error as though the change didn't occur.
2) Adding in the same SSH key that I use for authentication to the server as per DigitalOcean - says 'privateKey value does not contain a (valid) private key'. I have tried both the key that is used for authentication to the server and every other key I could find, short of generating a new one just for Meteor's use.
3) Leaving both blank and allowing it to 'try' ssh-agent - it pretends it doesn't know what ssh-agent is and throws an error saying the same thing as when I use a password.
I have looked through and followed the same instructions in the following article: http://meteortips.com/deployment-tutorial/digitalocean-part-1/
This article assumes that there are only two possible states: one where an SSH key has NOT been used or set up, so it needs to be generated, and one where an SSH key exists and is set up exactly where they expect it. Unfortunately I seem to be in a different situation. I generated a key using PuTTY prior to setting up the D.O. server and created the droplet using that. After creation, the file did not exist. The only thing in the ~/.ssh/ directory was a single file named "authorized_keys" that held the key I would use to connect to the server. This file cannot be used, nor can any file on the server in the other SSH key locations. I also tried copying the file directly onto the server, to no avail.
In some vain hope of finding a solution I also tried running these same commands in both the Meteor build bundle and the source code folder. Neither worked. I should mention that although this is the only article I still have open to try for a solution, I have tried every one I could find using MUP.
If anyone can point me in the right direction with this so I can stop flailing wildly in the dark I would be incredibly grateful.
Edit: As requested, below is the current mup.js file with removed credentials
module.exports = {
  servers: {
    one: {
      // TODO: set host address, username, and authentication method
      host: '111.111.111.11',
      username: 'root',
      // ssh-agent: '/home/Meteor/MeteorKey.pem'
      pem: '~/.ssh/id_rsa.pub'
      // password: 'password1'
      // or neither for authenticate from ssh-agent
    }
  },
  app: {
    // TODO: change app name and path
    name: 'app-name',
    path: '../',
    servers: {
      one: {},
    },
    buildOptions: {
      serverOnly: true,
    },
    env: {
      // TODO: Change to your app's url
      // If you are using ssl, it needs to start with https://
      ROOT_URL: 'http://www.app-name.com',
      MONGO_URL: 'mongodb://mongodb/meteor',
      MONGO_OPLOG_URL: 'mongodb://mongodb/local',
    },
    docker: {
      // change to 'abernix/meteord:base' if your app is using Meteor 1.4 - 1.5
      image: 'abernix/meteord:node-8.4.0-base',
    },
    // Show progress bar while uploading bundle to server
    // You might need to disable it on CI servers
    enableUploadProgressBar: true
  },
  mongo: {
    version: '3.4.1',
    servers: {
      one: {}
    }
  },
  // (Optional)
  // Use the proxy to setup ssl or to route requests to the correct
  // app when there are several apps
  // proxy: {
  //   domains: 'mywebsite.com,www.mywebsite.com',
The error message you are receiving:
Error: All configured authentication methods failed
means that the SSH connection is failing. So the credentials you are using (pity you removed them from the config) are not working. Try a command-line ssh using these same credentials, and then troubleshoot that - once you can ssh into the server, mup should be able to do its work.
You can get more information out of ssh by specifying one or more -v parameters, e.g.:
ssh -v -v my_user@remote.com
and it will give you information about the authentication methods it is trying as it goes through them. This will help you narrow down the problem.
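One thing worth checking against the config above: mup's pem option (like ssh -i) expects the private key file, not the .pub public key. A hypothetical verification using the private key directly (adjust path, user and host to your setup):

# point ssh at the private key explicitly
ssh -v -i ~/.ssh/id_rsa root@111.111.111.11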
