I'm trying to get Logstash input from Kinesis, but I'm getting DynamoDB errors.
https://github.com/logstash-plugins/logstash-input-kinesis#authentication
Auth is defined in ~/.aws/credentials and the user has full DynamoDB access. Logstash version 7.16.2, running inside a Docker container. This is the error:
logstash | at java.lang.Thread.run(Thread.java:829) [?:?]
logstash | Caused by: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: User: arn:aws:sts::xxx:assumed-role/AmazonSSMRoleForInstancesQuickSetup/i-01xxxxxxx is not authorized to perform: dynamodb:DescribeTable on resource: arn:aws:dynamodb:us-west-1:xxxxxxxxxx:table/logstash-kinesis because no identity-based policy allows the dynamodb:DescribeTable action (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException; Request ID: A814QENJD5FKI67BFCMQN9J7B7VV4KQNSO5AEMVJF66Q9ASUAAJG; Proxy: null)
logstash | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1819) ~[aws-java-sdk-core-1.11.1034.jar:?]
===================================================
input {
  kinesis {
    kinesis_stream_name => "logstash-kinesis"
    application_name => "logstash-kinesis"
    region => "us-west-2"
    codec => cloudwatch_logs
  }
}
I was able to resolve this by adding role_arn to the input section, since the plugin was otherwise using the IAM role AmazonSSMRoleForInstancesQuickSetup attached to the instance:
role_arn => "arn:aws:iam::xxx"
I also updated this role's trust relationship:
"Principal": {
"Service": "kinesis.amazonaws.com",
"AWS":
"arn:aws:iam::xxxx:role/AmazonSSMRoleForInstancesQuickSetup"
}
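For context, the kinesis input is built on the Kinesis Client Library, which keeps its checkpoint/lease state in a DynamoDB table named after application_name; that is why a Kinesis consumer needs dynamodb:DescribeTable and related permissions. A minimal sketch of the combined input block, using the plugin's documented role_arn option (the ARN below is a placeholder, not the real role):

input {
  kinesis {
    kinesis_stream_name => "logstash-kinesis"
    application_name => "logstash-kinesis"
    region => "us-west-2"
    # placeholder ARN; point this at a role with Kinesis and DynamoDB access
    role_arn => "arn:aws:iam::123456789012:role/logstash-kinesis-role"
    codec => cloudwatch_logs
  }
}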
===================================================
I deployed a simple NFT smart contract on the Polygon Mumbai testnet, but when I try to verify it, it shows an error. Please guide me on how to verify it.
This is the error I am getting:
PS C:\Users\Sumits\Desktop\truffle> truffle run verify MyNFT --network matic --debug
DEBUG logging is turned ON
Running truffle-plugin-verify v0.5.20
Retrieving network's chain ID
Verifying MyNFT
Reading artifact file at C:\Users\Sumits\Desktop\truffle\build\contracts\MyNFT.json
Failed to verify 1 contract(s): MyNFT
PS C:\Users\Sumits\Desktop\truffle>
This is my truffle-config.js:
const HDWalletProvider = require('@truffle/hdwallet-provider');
const fs = require('fs');
const mnemonic = fs.readFileSync(".secret").toString().trim();
module.exports = {
  networks: {
    development: {
      host: "127.0.0.1",   // Localhost (default: none)
      port: 8545,          // Standard Ethereum port (default: none)
      network_id: "*",     // Any network (default: none)
    },
    matic: {
      provider: () => new HDWalletProvider(mnemonic, `https://rpc-mumbai.maticvigil.com`),
      network_id: 80001,
      confirmations: 2,
      timeoutBlocks: 200,
      skipDryRun: true
    },
  },
  // Set default mocha options here, use special reporters etc.
  mocha: {
    // timeout: 100000
  },
  // Configure your compilers
  compilers: {
    solc: {
      version: "^0.8.0",
    }
  },
  plugins: ['truffle-plugin-verify'],
  api_keys: {
    polygonscan: 'BTWY55K812M*******WM9NAAQP1H3'
  }
}
First deploy the contract:
truffle migrate --network matic --reset
I am not sure you successfully deployed it to the Matic network, because your configuration does not seem to be correct:
matic: {
  // make sure you set up the provider correctly
  provider: () => new HDWalletProvider(mnemonic, `https://rpc-mumbai.maticvigil.com/v1/YOURPROJECTID`),
  network_id: 80001,
  confirmations: 2,
  timeoutBlocks: 200,
  skipDryRun: true
},
Then verify:
truffle run verify ContractName --network matic
ContractName should be the name of the contract, not the name of the file.
Also make sure the polygonscan key in api_keys is written in lowercase.
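Putting those corrections together, a minimal sketch of the relevant parts of truffle-config.js, assuming mnemonic is defined as in the question (the RPC project ID and API key are placeholders):

const HDWalletProvider = require('@truffle/hdwallet-provider');

module.exports = {
  networks: {
    matic: {
      // placeholder project ID in the RPC URL
      provider: () => new HDWalletProvider(mnemonic, `https://rpc-mumbai.maticvigil.com/v1/YOURPROJECTID`),
      network_id: 80001,
      confirmations: 2,
      timeoutBlocks: 200,
      skipDryRun: true
    }
  },
  plugins: ['truffle-plugin-verify'],
  api_keys: {
    polygonscan: 'YOUR_POLYGONSCAN_API_KEY' // key name must be lowercase
  }
};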
===================================================
I've implemented the Amplify JS library in a Vue project and have had success with all of its features except for this issue: when I query a model with Elasticsearch, it returns the appropriate results, but also a "ResolverExecutionLimitReached" error.
This is the request:
let destinations = await API.graphql(graphqlOperation(queries.searchDestinations, {filter: { deviceId: { eq: params.id }}}))
This is the schema:
type Destination
  @model
  @searchable
  @auth(rules: [{ allow: public }, { allow: private }])
  @key(name: "byXpoint", fields: ["xpoint"])
  @key(name: "byDevice", fields: ["deviceId"])
{
  id: ID!
  index: Int!
  levels: [String]
  name: String!
  xpoint: String
  sourceId: ID
  Source: Source @connection
  lock: Boolean
  breakaway: Boolean
  breakaways: String
  probeId: ID!
  probe: Probe @connection(fields: ["probeId"])
  deviceId: ID!
  device: Device @connection(fields: ["deviceId"])
  orgId: ID!
  org: Org @connection(fields: ["orgId"])
}
And this returns:
{
  data: {
    searchDestinations: { items: Array(100), nextToken: "ba1dc119-2266-4567-9b83-f7eee4961e63", total: 384 }
  },
  errors: [
    {
      data: null,
      errorInfo: null,
      errorType: "ResolverExecutionLimitReached",
      locations: [],
      message: "Resolver invocation limit reached.",
      path: []
    }
  ]
}
My understanding is that the AppSync API has a hard limit of 1,000 entries returned per request, but this query is on a table with only ~600 entries and is only returning 384. Executing the same query against AppSync directly from a Node.js application works without issue.
I'm not sure where to investigate further to determine what is triggering this error. Any help or direction is greatly appreciated.
Connections in the schema were causing the single request to exceed the 1,000-resolver-invocation limit (exactly as Mickers stated in the comments). I updated the schema with fewer connections on fetch and the issue was resolved.
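As a hypothetical illustration (not the exact schema used): each requested @connection field resolves once per returned item, so 100 items with several connections can quickly exceed the limit. Trimming the nested connections and keeping only the scalar ID fields avoids that fan-out:

type Destination
  @model
  @searchable
  @auth(rules: [{ allow: public }, { allow: private }])
{
  id: ID!
  name: String!
  probeId: ID!   # keep the foreign-key scalars...
  deviceId: ID!  # ...and fetch related records in a separate query when needed
  orgId: ID!
}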
===================================================
I have a .NET Core Worker Service that uses AWS SQS to read messages off a queue. For local development I'm using a default profile with the access/secret key stored in it. My appSettings.json is set up as follows:
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    },
    "TargetCloudWatchGroup": "/aws/insite/workers"
  },
  "App": {
    "TaskProcessDelay": 10000,
    "Environment": "NA",
    "WorkerType": "INCOMING"
  },
  "AWS": {
    "Region": "ap-southeast-2",
    "Profile": "default",
    "ProfilesLocation": "C:\\Users\\JMatson\\.aws\\credentials",
    "AwsQueueLongPollTimeSeconds": 5,
    "QueueUrl": "https://sqs.ap-southeast-2.amazonaws.com/712510509017/insite-incoming-dev"
  }
}
I'm using DI to set up the services:
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureServices((hostContext, services) =>
        {
            //var options = hostContext.Configuration.GetAWSOptions();
            services.AddDefaultAWSOptions(hostContext.Configuration.GetAWSOptions());
            services.AddHostedService<Worker>();
            services.AddSingleton<ILogger, Logger>(); // Using my own basic wrapper around NLog for the moment, pumped to CloudWatch.
            services.AddAWSService<IAmazonSQS>();
        });
But when I run the program in debug, it fails to read a message off the queue within Worker.cs with the following error:
An exception of type 'Amazon.Runtime.AmazonServiceException' occurred in System.Private.CoreLib.dll but was not handled in user code: 'Unable to get IAM security credentials from EC2 Instance Metadata Service.'
On startup it seems like it finds my credentials after a couple of tries along the credentials chain:
info: AWSSDK[0]
      Failed to find AWS credentials for the profile default
AWSSDK: Information: Failed to find AWS credentials for the profile default
info: AWSSDK[0]
      Found credentials using the AWS SDK's default credential search
AWSSDK: Information: Found credentials using the AWS SDK's default credential search
So why is it failing? If I check the Immediate window I can see it's picking up my settings:
?hostContext.Configuration.GetAWSOptions().Profile
"default"
?hostContext.Configuration.GetAWSOptions().ProfilesLocation
"C:\\Users\\JMatson\\.aws\\credentials"
===================================================
When I call the Watson Language Translator service over a public network it responds with no error; meanwhile, it is not able to get a response body over my private network.
I am using NGINX as my load balancer and have configured proxy_http for it in the configuration.
The error is:
{ Error: Response not received. Body of error is HTTP ClientRequest object
at formatError (root\node_modules\ibm-cloud-sdk-core\lib\requestwrapper.js:115:17)
at D:\Rafiki Project\production build\Rafiki Production Files 1\ecobot-orchestrator-master_23_9-orch_persistency_fixes\node_modules\ibm-cloud-sdk-core\lib\requestwrapper.js:265:19
at process._tickCallback (internal/process/next_tick.js:68:7)
var languageTranslator = new LanguageTranslatorV2({
  username: '8******************',
  password: '*************',
  url: 'https://gateway.watsonplatform.net/language-translator/api/',
  version: '2017-05-26'
});
function translateToWSPLan(req, res, callback) {
  console.log("the request for translation is::");
  console.log(JSON.stringify(req));
  console.log("======================");
  languageTranslator.identify(req.body.identifyParams, function(err, data) {
    if (err) {
      console.log('=================error==========');
      console.log(err);
      console.log('=================================');
      var errorLog = { name: err.name, message: err.message };
      callback(errorLog);
    } else {
      // success path omitted in the original snippet
    }
  });
}
See this issue raised on the Node.js SDK for Watson - https://github.com/watson-developer-cloud/node-sdk/issues/900#issuecomment-509257669
To enable proxy routing, add proxy settings to the constructor:
var languageTranslator = new LanguageTranslatorV2({
  username: '8******************',
  password: '*************',
  url: 'https://gateway.watsonplatform.net/language-translator/api/',
  version: '2017-05-26',
  // other params...
  proxy: '<some proxy config>',
  httpsAgent: '<or some https agent config>'
});
If you take a look at the issue, there is a problem with accessing IAM tokens that does not work when there is a proxy, but as you appear to be using a userid/password combination, you should be OK. That is, until Cloud Foundry style credentials are suspended and superseded by IAM credentials for all existing Watson services.
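For illustration, since the ibm-cloud-sdk-core request wrapper passes these options through to axios, a concrete version of those placeholder settings might look like this (the proxy host and port are hypothetical; substitute your NGINX address):

var languageTranslator = new LanguageTranslatorV2({
  username: '8******************',
  password: '*************',
  url: 'https://gateway.watsonplatform.net/language-translator/api/',
  version: '2017-05-26',
  // hypothetical axios-style proxy settings
  proxy: {
    host: 'nginx.internal.example.com',
    port: 8080
  }
});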
===================================================
I have a script that connects to CosmosDB to perform some operations. I am using CosmosDB as a graph DB through a Node module called gremlin-secure, which connects to CosmosDB over web sockets. Recently I have not been able to connect to the database, with the error below:
events.js:160
throw er; // Unhandled 'error' event
^
Error: unexpected server response (200)
at ClientRequest._req.on (/Users/abshahin/dev/azure-cosmos-db-graph-nodejs-getting-started/node_modules/ws/lib/WebSocket.js:656:26)
at emitOne (events.js:96:13)
at ClientRequest.emit (events.js:188:7)
at HTTPParser.parserOnIncomingClient [as onIncoming] (_http_client.js:473:21)
at HTTPParser.parserOnHeadersComplete (_http_common.js:99:23)
at TLSSocket.socketOnData (_http_client.js:362:20)
at emitOne (events.js:96:13)
at TLSSocket.emit (events.js:188:7)
at readableAddChunk (_stream_readable.js:176:18)
at TLSSocket.Readable.push (_stream_readable.js:134:10)
My code looks like this:
"use strict";
var Gremlin = require('gremlin-secure');
var config = require("./config");
const client = Gremlin.createClient(
443,
config.endpoint,
{
"session": false,
"ssl": true,
"user": `/dbs/${config.database}/colls/${config.collection}`,
"password": config.primaryKey
});
client.execute("g.addV('employee').property('id', 'abshahin')", { }, (err, results) => {
if (err) return console.error(err);
console.log(JSON.stringify(results));
});
And this is my config:
var config = {};
config.endpoint = "xxxxxxxx.graphs.azure.com";
config.primaryKey = "super secret key";
config.database = "dbname";
config.collection = "collectionName";
module.exports = config;
I contacted Microsoft and they advised me to post here. Any help is appreciated.
Check to make sure that the URL of the DB looks like xxx.graphs.azure.com; the URL displayed in the Azure portal was not correct in my case.
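For example (the account name is a placeholder), the portal may show a fully qualified URI such as https://xxxxxxxx.documents.azure.com:443/, while the host this driver expects is just the bare graphs form:

config.endpoint = "xxxxxxxx.graphs.azure.com"; // host only: no protocol, port, or trailing slash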
This looks a bit similar to a problem I faced recently. Make sure you have the latest OpenSSL version:
openssl version -a
Azure CosmosDB enforces SSL/TLS 1.2, which is not supported by older versions of OpenSSL.
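As a quick check (the hostname is a placeholder), you can confirm from the machine running the script that a TLS 1.2 handshake succeeds:

openssl s_client -connect xxxxxxxx.graphs.azure.com:443 -tls1_2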