Redis - ERR unknown command 'EVAL' - asp.net

I am trying to use the Redis cache (Microsoft.Extensions.Caching.Redis) with .NET Core 2.1, and for that purpose I followed this tutorial: https://dotnetcoretutorials.com/2017/01/06/using-redis-cache-net-core/ The issue is that when I try to get data using _distributedCache.GetStringAsync(key), I get the error "ERR unknown command 'EVAL'". I have searched for this kind of error and found that it can happen with an older version of Redis, but I am using the latest version of Microsoft.Extensions.Caching.Redis (version 2.1.1).
Here is my code:
public async Task<string> RetrieveCache(string key)
{
    var data = await _distributedCache.GetStringAsync(key);
    if (string.IsNullOrWhiteSpace(data))
        return "";
    return data;
}
appsettings.json:
"RedisServer": {
    "Server": "12.66.909.61:6379,password=pwd",
    "InstanceName": "Store.Toys"
}
and startup.cs
services.AddDistributedRedisCache(option =>
{
    option.Configuration = Configuration["RedisServer:Server"];
    option.InstanceName = Configuration["RedisServer:InstanceName"];
});
Any help?

The server needs to support the feature; it sounds like the Redis server you're targeting doesn't support EVAL.

According to the documentation, the EVAL command is supported since Redis server version 2.6.
You can find out which version your remote server is currently running with:
$ telnet 12.66.909.61 6379
#and type
info
or using the Redis client: redis-cli -h 12.66.909.61 -p 6379 -a pwd info
You will get something like:
# Server
redis_version:2.8.24
...
If the version is older than 2.6, you will need to upgrade the redis-server package on your server.
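If you prefer to check the version from .NET rather than telnet or redis-cli, here is a minimal sketch using StackExchange.Redis (the client library that Microsoft.Extensions.Caching.Redis uses under the hood); the connection string below is just the placeholder from the question, so substitute your own host, port and password:
// Minimal sketch: print the Redis server version for each endpoint.
using System;
using StackExchange.Redis;

class CheckRedisVersion
{
    static void Main()
    {
        // Placeholder connection string from the question.
        var options = ConfigurationOptions.Parse("12.66.909.61:6379,password=pwd");

        using (var connection = ConnectionMultiplexer.Connect(options))
        {
            foreach (var endpoint in connection.GetEndPoints())
            {
                var server = connection.GetServer(endpoint);
                // server.Version reports the Redis version of that endpoint.
                Console.WriteLine($"{endpoint}: Redis {server.Version}");
            }
        }
    }
}
If this prints a version below 2.6, the "ERR unknown command 'EVAL'" error is expected until the server is upgraded.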

Related

Problem with Listening gRPC Requests Over HTTP on Cloud Run

I was running my gRPC services on Cloud Run without any problem, but today I realized they are no longer working over HTTP, including services that have not changed in a long time.
The exception is: "The request :scheme header 'https' does not match the transport scheme 'http'."
So, is there any change on the Cloud Run side, or is there anything that I am missing?
Update: If I change the code to receive requests over HTTPS, they will probably work (not tested yet). But that is not the point; they were running without any issue before.
Update 2: I implemented Program.cs and the Dockerfile as explained here https://cloud.google.com/run/docs/quickstarts/build-and-deploy/c-sharp and that is not working either.
Update 3: Same with this sample project: https://github.com/turgayozgur/dotnet-hello-world-grpc The sample application isn't expecting HTTPS in the :scheme header. Why does Cloud Run set that header to https even though the request between Cloud Run and the application is not an HTTPS request?
Similar issues:
https://github.com/linkerd/linkerd/issues/2401
https://www.gitmemory.com/issue/dotnet/aspnetcore/30532/787011248
Here's a workaround that worked for me:
Download the aspnetcore repo; I use v5, so: git clone https://github.com/dotnet/aspnetcore.git && cd aspnetcore && git checkout tags/v5.0.5
Find the file /src/Servers/Kestrel/Core/src/Internal/Http2/Http2Stream.cs and comment out lines 244-250. If you are not on version 5.0.5, it's the block of code that starts with if (!ReferenceEquals(headerScheme, Scheme) &&; comment out that whole if block.
Build the project with ./build.sh --configuration Release in the root of the repo. It had a few errors, but the file I needed was built.
Take the file aspnetcore/artifacts/bin/Microsoft.AspNetCore.Server.Kestrel.Core/Release/net5.0/Microsoft.AspNetCore.Server.Kestrel.Core.dll and copy it into a folder with the following Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:5.0
COPY ./Microsoft.AspNetCore.Server.Kestrel.Core.dll /usr/share/dotnet/shared/Microsoft.AspNetCore.App/5.0.5/
A bit of a pain, but we're back up on Cloud Run with this.
Edit: check wlhe's comment; it seems there is a bug in aspnetcore that broke this. Reverting the version can help.
Applications serving traffic on Cloud Run have to serve unencrypted HTTP traffic (in the gRPC case, since it uses HTTP/2, this is called "h2c"). Do not try to terminate TLS in your application, because you won't be receiving any encrypted requests.
Encryption is terminated/added by the infrastructure, and your service never sees it.
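For an ASP.NET Core gRPC service like the one in the question, that means configuring Kestrel to listen for plaintext HTTP/2 (h2c) on the port Cloud Run provides. A minimal sketch of such a Program.cs, assuming a standard ASP.NET Core 5.0 gRPC project with the usual Startup class (illustrative only, not the exact quickstart code):
// Minimal sketch: Kestrel listening for plaintext HTTP/2 (h2c) on Cloud Run's PORT.
using System;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Server.Kestrel.Core;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        // Cloud Run tells the container which port to listen on via the PORT env var.
        var port = int.Parse(Environment.GetEnvironmentVariable("PORT") ?? "8080");

        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.ConfigureKestrel(options =>
                {
                    // No TLS here: Cloud Run terminates encryption at its edge,
                    // so the container must speak plaintext HTTP/2 (h2c).
                    options.ListenAnyIP(port, listenOptions =>
                        listenOptions.Protocols = HttpProtocols.Http2);
                });
                // Startup is the usual gRPC Startup class (AddGrpc / MapGrpcService),
                // assumed to exist in the project.
                webBuilder.UseStartup<Startup>();
            })
            .Build()
            .Run();
    }
}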
To get a sample C# gRPC application working on Cloud Run:
Clone the https://github.com/grpc/grpc repository.
Go to grpc/examples/csharp/RouteGuide.
Update RouteGuide.csproj like this:
- <Protobuf Include="..\..\..\protos\route_guide.proto" Link="protos\route_guide.proto" />
+ <Protobuf Include="protos\route_guide.proto" />
Go into RouteGuideServer.
Change the program to listen on the PORT env var or 8080, on all interfaces (e.g. * or 0.0.0.0), and delete the interactive read-from-terminal prompt:
diff --git a/examples/csharp/RouteGuide/RouteGuideServer/Program.cs b/examples/csharp/RouteGuide/RouteGuideServer/Program.cs
index dff6486e59..a32aff7222 100644
--- a/examples/csharp/RouteGuide/RouteGuideServer/Program.cs
+++ b/examples/csharp/RouteGuide/RouteGuideServer/Program.cs
@@ -17,6 +17,7 @@ using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
+using System.Threading;
using System.Threading.Tasks;
namespace Routeguide
@@ -25,22 +26,18 @@ namespace Routeguide
{
static void Main(string[] args)
{
- const int Port = 30052;
+ const int Port = 8080;
var features = RouteGuideUtil.LoadFeatures();
Server server = new Server
{
Services = { RouteGuide.BindService(new RouteGuideImpl(features)) },
- Ports = { new ServerPort("localhost", Port, ServerCredentials.Insecure) }
+ Ports = { new ServerPort("*", Port, ServerCredentials.Insecure) }
};
server.Start();
- Console.WriteLine("RouteGuide server listening on port " + Port);
- Console.WriteLine("Press any key to stop the server...");
- Console.ReadKey();
-
- server.ShutdownAsync().Wait();
+ Thread.Sleep(Timeout.Infinite);
}
}
}
gcloud beta run deploy routeguide --source=. --allow-unauthenticated
Your gRPC service is now publicly available, listening on its URL on port 443.
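Once deployed, a client connects to the service's run.app hostname over TLS on port 443. A minimal sketch of a Grpc.Core client for the RouteGuide example (the hostname below is a placeholder; use the URL printed by gcloud run deploy):
// Minimal sketch: call the deployed RouteGuide service over TLS on port 443.
using System;
using Grpc.Core;
using Routeguide;

class RouteGuideCloudRunClient
{
    static void Main()
    {
        // Placeholder hostname; replace with your service's run.app URL.
        var channel = new Channel("routeguide-xxxxx-uc.a.run.app", 443, new SslCredentials());
        var client = new RouteGuide.RouteGuideClient(channel);

        // Simple unary call from the RouteGuide example proto.
        var feature = client.GetFeature(new Point { Latitude = 409146138, Longitude = -746188906 });
        Console.WriteLine(feature.Name);

        channel.ShutdownAsync().Wait();
    }
}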

Neo4j Server certificate is not trusted

I have just set up my Neo4j server on a VM on Google Cloud. I'm using Enterprise version 4.1.1, and I've finished following the great post (here) by David Allen about how to get a certificate with LetsEncrypt.
This has all worked perfectly, and I now have a fully secure Neo4j server that I can access through the browser (MYDOMAIN.COM:7473/browser) using my hostname. However, I am now having issues getting my application to connect to the server using the JavaScript driver.
I keep getting the following error:
Failed to connect to server. Please ensure that your database is
listening on the correct host and port and that you have compatible
encryption settings both on Neo4j server and driver. Note that the
default encryption setting has changed in Neo4j 4.0. Caused by: Server
certificate is not trusted. If you trust the database you are
connecting to, use TRUST_CUSTOM_CA_SIGNED_CERTIFICATES and add the
signing certificate, or the server certificate, to the list of
certificates trusted by this driver using neo4j.driver(.., {
trustedCertificates:['path/to/certificate.crt']}). This is a security
measure to protect against man-in-the-middle attacks. If you are just
trying Neo4j out and are not concerned about encryption, simply
disable it using encrypted="ENCRYPTION_OFF" in the driver options.
Socket responded with: ERR_TLS_CERT_ALTNAME_INVALID
I have read through the driver documentation (here) and I have added both the trust: "TRUST_CUSTOM_CA_SIGNED_CERTIFICATES" and trustedCertificates:[] settings. I downloaded all of the certificates from my server (cert.pem, chain.pem, fullchain.pem and privkey.pem) and linked to them in the trustedCertificates setting.
Unfortunately I'm still getting the same error. For reference, this is how my driver is currently configured:
// This module can be used to serve the GraphQL endpoint
// as a lambda function
const { ApolloServer } = require('apollo-server-lambda')
const { makeAugmentedSchema } = require('neo4j-graphql-js')
const neo4j = require('neo4j-driver')

// This module is copied during the build step
// Be sure to run `npm run build`
const { typeDefs } = require('./graphql-schema')

const driver = neo4j.driver(
  process.env.NEO4J_URI,
  neo4j.auth.basic(
    process.env.NEO4J_USER,
    process.env.NEO4J_PASSWORD
  ),
  {
    encrypted: process.env.NEO4J_ENCRYPTED ? 'ENCRYPTION_ON' : 'ENCRYPTION_OFF',
    trust: "TRUST_CUSTOM_CA_SIGNED_CERTIFICATES",
    trustedCertificates: ['../../certificates/cert.pem', '../../certificates/chain.pem', '../../certificates/fullchain.pem', '../../certificates/privkey.pem'],
    logging: {
      level: 'debug',
      logger: (level, message) => console.log(level + ' ' + message)
    },
  }
)

const server = new ApolloServer({
  schema: makeAugmentedSchema({ typeDefs }),
  context: { driver, neo4jDatabase: process.env.NEO4J_DATABASE },
  introspection: true,
  playground: true,
})

exports.handler = server.createHandler()
I'm using the latest build of the driver, v2.14.4 and have enabled full logging but I'm not getting any more information than the above. I just can't figure out what I'm doing wrong - does anyone have any ideas?
I found a solution to this problem - I had a look at the documentation (here) and found that I needed to update my NEO4J_URI from bolt://SO.ME.IP.ADDRESS:7687 to neo4j://MYDOMAIN.COM:7687. Now that I've done this, everything works as expected.

Application is unable to connect to localstack SQS

As a test developer, I am trying to use localstack to mock SQS for an integration test.
Docker-compose:
localstack:
  image: localstack/localstack:0.8.7
  ports:
    - "4567-4583:4567-4583"
    - "9898:${PORT_WEB_UI-8080}"
  environment:
    - SERVICES=sqs
    - DOCKER_HOST=unix:///var/run/docker.sock
    - HOSTNAME=localstack
    - HOSTNAME_EXTERNAL=192.168.99.101
    - DEFAULT_REGION=us-east-1
  volumes:
    - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
    - "/var/run/docker.sock:/var/run/docker.sock"
After spinning up localstack, I am able to connect, create a queue, and retrieve it via the AWS CLI. The Localstack dashboard also displays the created queue.
$ aws --endpoint-url=http://192.168.99.101:4576 --region=us-west-1 sqs create-queue --queue-name myqueue
{
"QueueUrl": "http://192.168.99.101:4576/queue/myqueue"
}
The application uses com.amazon.sqs.javamessaging.SQSConnectionFactory to connect to SQS. It also uses com.amazonaws.auth.DefaultAWSCredentialsProviderChain for the AWS credentials.
1) If I give "-e AWS_REGION=us-east-1 -e AWS_ACCESS_KEY_ID=foobar -e AWS_SECRET_ACCESS_KEY=foobar" while bringing up the application, I am getting
HTTPStatusCode: 403 AmazonErrorCode: InvalidClientTokenId
com.amazonaws.services.sqs.model.AmazonSQSException: The security token included in the request is invalid
2) If I give the ACCESS_KEY and SECRET_KEY of the AWS SQS, I am getting
HTTPStatusCode: 400 AmazonErrorCode: AWS.SimpleQueueService.NonExistentQueue
com.amazonaws.services.sqs.model.QueueDoesNotExistException: The specified queue does not exist for this wsdl version.
Below is the application code. The first two log messages print the connection and session obtained; the error comes from "Queue publisherQueue = sqsSession.createQueue(sqsName);".
sqsConnection = (Connection) context.getBean("outputSQSConnection");
LOGGER.info("SQS connection Obtained " + sqsConnection);
sqsSession = sqsConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
LOGGER.info("SQS Session Created " + sqsSession);
Queue publisherQueue = sqsSession.createQueue(sqsName);
I tried both "http://localstack:4576/queue/myqueue" and "http://192.168.99.101:4576/queue/myqueue"; the results are the same.
Can you please help me resolve the issue?
I ran into a similar issue a couple of weeks back. Looking at your config, I think you should just be able to use localhost. In my case, we had services calling localstack that were also running in Docker, and we ended up creating a Docker network to communicate between containers.
I was able to figure out a solution by looking at the Localstack tests. The important thing to note here is that you need to set the endpoint configuration correctly.
private AmazonSQSClient getLocalStackConfiguredClient() {
    AmazonSQSClientBuilder clientBuilder = AmazonSQSClientBuilder.standard();
    clientBuilder.withEndpointConfiguration(getEndpointConfiguration(
        configuration.getLocalStackConfiguration().getSqsEndpoint(), awsRegion));
    return (AmazonSQSClient) clientBuilder.build();
}

private AwsClientBuilder.EndpointConfiguration getEndpointConfiguration(String endpoint, Regions awsRegion) {
    return new AwsClientBuilder.EndpointConfiguration(endpoint, awsRegion.getName());
}
Hopefully this helps you to resolve the issues.

Why does Meteor Up (MUP) fail on authentication?

I am currently trying to deploy a Meteor project to an external server for the first time. The server is hosted by DigitalOcean, running ubuntu 16.04, and has an SSH key set up for password-free access.
The error I am getting from MUP is:
[159.203.165.13] - Setup Docker
events.js:165
throw er; // Unhandled 'error' event
^
Error: All configured authentication methods failed
at tryNextAuth (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:290:17)
at SSH2Stream.onUSERAUTH_FAILURE (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:469:5)
at SSH2Stream.emit (events.js:180:13)
at parsePacket (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:3647:10)
at SSH2Stream._transform (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:551:13)
at SSH2Stream.Transform._read (_stream_transform.js:185:10)
at SSH2Stream._read (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:212:15)
at SSH2Stream.Transform._write (_stream_transform.js:173:12)
at doWrite (_stream_writable.js:410:12)
at writeOrBuffer (_stream_writable.js:396:5)
at SSH2Stream.Writable.write (_stream_writable.js:294:11)
at Socket.ondata (_stream_readable.js:651:20)
at Socket.emit (events.js:180:13)
at addChunk (_stream_readable.js:274:12)
at readableAddChunk (_stream_readable.js:261:11)
at Socket.Readable.push (_stream_readable.js:218:10)
Emitted 'error' event at:
at tryNextAuth (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:292:12)
at SSH2Stream.onUSERAUTH_FAILURE (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:469:5)
[... lines matching original stack trace ...]
at Socket.Readable.push (_stream_readable.js:218:10)
At this point I have tried several solutions involving the mup file as per other recommendations such as:
1) Adding in a password - Gives the exact same error as though the change didn't occur.
2) Adding in the same SSH key that I use for authentication to the server as per digital ocean - Says 'privateKey value does not contain a (valid) private key'. I have tried both the key that is used for authentication to the server and every other key I could find short of generating a new one just for Meteor's use.
3) Leaving both blank and allowing it to 'try' ssh-agent - pretends it doesn't know what ssh-agent is and throws an error saying the same thing as when I use a password.
I have looked through and followed the same instructions in the following article: http://meteortips.com/deployment-tutorial/digitalocean-part-1/
This article assumes that there are only two possible states: one where an SSH key has NOT been used or set up, so it needs to be generated, and a second where an SSH key exists and is set up exactly where they expect it. Unfortunately I seem to be in a different situation. I generated a key using PuTTY prior to setting up the D.O. server and created the droplet using that. After creation, the file did not exist; the only thing in the ~/.ssh/ directory was a single file named "authorized_keys" that held the key I would use to connect to the server. That file cannot be used, nor can any file on the server in the other SSH key locations. I also tried copying the file directly onto the server, to no avail.
In some vain hope of finding a solution, I also tried running these same commands in both the Meteor build bundle and the source code folder. Neither worked. I should mention that although this is the only article I still have open to try for a solution, I have tried every one I could find using MUP.
If anyone can point me in the right direction with this so I can stop flailing wildly in the dark I would be incredibly grateful.
Edit: As requested, below is the current mup.js file with the credentials removed:
module.exports = {
  servers: {
    one: {
      // TODO: set host address, username, and authentication method
      host: '111.111.111.11',
      username: 'root',
      // ssh-agent: '/home/Meteor/MeteorKey.pem'
      pem: '~/.ssh/id_rsa.pub'
      // password: 'password1'
      // or neither for authenticate from ssh-agent
    }
  },
  app: {
    // TODO: change app name and path
    name: 'app-name',
    path: '../',
    servers: {
      one: {},
    },
    buildOptions: {
      serverOnly: true,
    },
    env: {
      // TODO: Change to your app's url
      // If you are using ssl, it needs to start with https://
      ROOT_URL: 'http://www.app-name.com',
      MONGO_URL: 'mongodb://mongodb/meteor',
      MONGO_OPLOG_URL: 'mongodb://mongodb/local',
    },
    docker: {
      // change to 'abernix/meteord:base' if your app is using Meteor 1.4 - 1.5
      image: 'abernix/meteord:node-8.4.0-base',
    },
    // Show progress bar while uploading bundle to server
    // You might need to disable it on CI servers
    enableUploadProgressBar: true
  },
  mongo: {
    version: '3.4.1',
    servers: {
      one: {}
    }
  },
  // (Optional)
  // Use the proxy to setup ssl or to route requests to the correct
  // app when there are several apps
  // proxy: {
  //   domains: 'mywebsite.com,www.mywebsite.com',
The error message you are receiving:
Error: All configured authentication methods failed
Means that the SSH connection is failing, so the credentials you are using (pity you removed them from the config) are not working. Try a command-line ssh with these same credentials and troubleshoot that - once you can ssh into the server, mup should be able to do its work.
You can get more information out of ssh by specifying one or more -v parameters, eg:
ssh -v -v my_user@remote.com
and it will give you information about the authentication methods it is trying as it goes through them. This will help you narrow down the problem.

Intern target QT webdriver on remote machine

I have installed Intern on my local machine (192.168.1.50) and want to use the QT Browser webdriver on a remote machine (192.168.1.76). I've changed intern.js and added the correct hostname, as shown below:
tunnelOptions: {
    hostname: '192.168.1.207:9517'
},
The QT browser is configured as well:
environments: [
    { browserName: 'QTBrowser', version: '5.4', platform: [ 'LINUX' ] }
],
Tunnel is set to NullTunnel.
When executing the tests, the following error is shown:
C:\intern-tutorial>intern-runner config=tests/intern.js Listening on 0.0.0.0:9000 Tunnel started Suite QTBrowser 5.4 on LINUX FAILED Error: [POST http://192.168.1.207:9517/wd/hub/session] connect ETIMEDOUT
192.168.1.207:4444 at Server.createSession at
at retry
at
at
runCallbacks
at at run
at
at
nextTickCallbackWith0Args at process._tickCallback
TOTAL: tested 0 platforms, 0/0 tests failed; fatal error occurred
Error: Run failed due to one or more suite errors at
emitLocalCoverage
at
finishSuite
at at
at
runCallbacks
at at run
at
at
nextTickCallbackWith0Args at process._tickCallback
I am able to access the remote webdriver myself via the browser using url http://192.168.1.76:9517/status
So the connection is correct, but Intern adds /wd/hub/session to the URL, which actually isn't needed.
How can I stop Intern from doing this?
You can get past the 'wd/hub' issue by setting pathname in the tunnel options:
tunnelOptions: {
    pathname: '/',
    hostname: '192.168.1.207',
    port: 9517
}
However, there are currently a couple of incompatibilities between Intern and QtWebDriver. One is that QtWebDriver requires that headers use a specific capitalization scheme, like 'Content-Type'. However, the library Intern uses to handle its requests currently normalizes header names to lowercase. This should be fine, because headers are supposed to be case insensitive, but not everything follows the standard.
Another problem is that, unlike most other WebDriver implementations, QtWebDriver responds to a session creation call with a 303 response rather than a 200, and the redirect address is relative. While that should be fine, the version of the Leadfoot library used by Intern doesn't properly follow relative redirect addresses.
These issues should be fixed in a future version of Intern, but for the moment Intern doesn't work out-of-the-box with QtWebDriver.
