Error when Creating New Item in datasource with relationship - google-app-maker

I have a ONE-to-MANY relationship set up between two datasources (longer explanation is here).
I am intermittently getting errors when trying to create or edit an item in the non-owner datasource.
Thu Oct 19 11:59:20 GMT-700 2017
Drive Table internal error. Please try again. Caused by: Execution Failed. More information: Internal error encountered.. (HTTP status code: unknown) Error: Drive Table internal error. Please try again.
Thu Oct 19 11:59:20 GMT-700 2017
Creating new record: (Error) : Drive Table internal error. Please try again.
at PORequests.Panel13.Panel18.Add_Item.onClick:1:19
Thu Oct 19 11:59:20 GMT-700 2017
Creating new record failed.
When I look at the server logs, I see:
{
  insertId: "13ovnchfxwxsqq"
  labels: {
    script.googleapis.com/process_id: "EAEA1GOwsdCh8YjD5KVVeBwT21xln0G3WFk4ULblcC0zX8_PML2UzR0o6WSPEhKBCgXa9SY_g6dxTqLJKMqQXk0UrIJW6en9rymo8OEBcw0X4tv4pLZeSxFNgRbInWHBJiegZM4WyjaPBqw1q"
    script.googleapis.com/project_key: "MBTigw5qnbrHScnjNatBP5Rs3AwaeoTZn"
    script.googleapis.com/user_key: "AEiMU4fv5flw4TblvkAxoOl6zKB5363j63bCyxicf/bJJXwtBFLZYGEBRLOkH+Z68eLnXiHPnR98"
  }
  logName: "projects/project-id-8108427532903699662/logs/script.googleapis.com%2Fconsole_logs"
  receiveTimestamp: "2017-10-19T18:59:21.737166790Z"
  resource: {
    labels: {
      function_name: "__appmakerGlobalScriptWrapper"
      invocation_type: "web app"
      project_id: "project-id-8108427532903699662"
    }
    type: "app_script_function"
  }
  severity: "ERROR"
  textPayload: "Drive Table internal error. Please try again.
Caused by: Execution Failed. More information: Internal error encountered.. (HTTP status code: unknown)
Error: Drive Table internal error. Please try again. "
  timestamp: "2017-10-19T18:59:20.729Z"
}
Has anyone else seen this issue? Is it a bug, or am I doing something wrong?

Related

Empty S3 remote log files in Airflow 2.3.2

I configured remote S3 logging with the following variables:
- name: AIRFLOW__LOGGING__REMOTE_LOGGING
value: 'True'
- name: AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER
value: 's3://my-airflow/airflow/logs'
- name: AIRFLOW__LOGGING__REMOTE_LOG_CONN_ID
value: 'my_s3'
- name: AIRFLOW__LOGGING__LOGGING_LEVEL
value: 'ERROR'
- name: AIRFLOW__LOGGING__ENCRYPT_S3_LOGS
value: 'False'
The log files are created under the DAG and task path, with names like attempt=1.log, but they are always 0 bytes (empty). When I try to view the logs from the Airflow UI I get this message (I'm using the KubernetesExecutor):
*** Falling back to local log
*** Trying to get logs (last 100 lines) from worker pod ***
*** Unable to fetch logs from worker pod ***
(400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'f3e0dd67-c8f4-42fc-945f-95dc42e8c2b5', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Mon, 01 Aug 2022 13:07:07 GMT', 'Content-Length': '136'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"name must be provided","reason":"BadRequest","code":400}\n'
Why are my log files empty?

Connecting to Snowflake using Okta from R Workbench

Hi, I am trying to connect to Snowflake from R Workbench. This is the error received when connecting with Okta.
con <- dbConnect(jdbcDriver, "jdbc:snowflake://company.snowflakecomputing.com/?authenticator=https://company.okta.com/", 'name#company.com', 'pass')
Sep 16, 2021 10:07:42 PM net.snowflake.client.core.SessionUtil handleFederatedFlowError
SEVERE: IOException when authenticating with https://company.okta.com/
java.net.MalformedURLException: no protocol: /login/cert
at java.net.URL.<init>(URL.java:611)
at java.net.URL.<init>(URL.java:508)
at java.net.URL.<init>(URL.java:457)
at net.snowflake.client.core.SessionUtil.isPrefixEqual(SessionUtil.java:1218)
at net.snowflake.client.core.SessionUtil.federatedFlowStep4(SessionUtil.java:999)
at net.snowflake.client.core.SessionUtil.getSamlResponseUsingOkta(SessionUtil.java:1206)
at net.snowflake.client.core.SessionUtil.newSession(SessionUtil.java:378)
at net.snowflake.client.core.SessionUtil.openSession(SessionUtil.java:284)
at net.snowflake.client.core.SFSession.open(SFSession.java:446)
at net.snowflake.client.jdbc.DefaultSFConnectionHandler.initialize(DefaultSFConnectionHandler.java:104)
at net.snowflake.client.jdbc.DefaultSFConnectionHandler.initializeConnection(DefaultSFConnectionHandler.java:79)
at net.snowflake.client.jdbc.SnowflakeConnectionV1.initConnectionWithImpl(SnowflakeConnectionV1.java:116)
at net.snowflake.client.jdbc.SnowflakeConnectionV1.<init>(SnowflakeConnectionV1.java:96)
at net.snowflake.client.jdbc.SnowflakeDriver.connect(SnowflakeDriver.java:164)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
Sep 16, 2021 10:07:43 PM net.snowflake.client.core.SessionUtil handleFederatedFlowError
SEVERE: IOException when authenticating with https://company.okta.com/
java.net.MalformedURLException: no protocol: /login/cert
at java.net.URL.<init>(URL.java:611)
at java.net.URL.<init>(URL.java:508)
at java.net.URL.<init>(URL.java:457)
at net.snowflake.client.core.SessionUtil.isPrefixEqual(SessionUtil.java:1218)
at net.snowflake.client.core.SessionUtil.federatedFlowStep4(SessionUtil.java:999)
at net.snowflake.client.core.SessionUtil.getSamlResponseUsingOkta(SessionUtil.java:1206)
at net.snowflake.client.core.SessionUtil.newSession(SessionUtil.java:378)
at net.snowflake.client.core.SessionUtil.openSession(SessionUtil.java:284)
at net.snowflake.client.core.SFSession.open(SFSession.java:446)
at net.snowflake.client.jdbc.DefaultSFConnectionHandler.initialize(DefaultSFConnectionHandler.java:104)
at net.snowflake.client.jdbc.DefaultSFConnectionHandler.initializeConnection(DefaultSFConnectionHandler.java:79)
at net.snowflake.client.jdbc.SnowflakeConnectionV1.initConnectionWithImpl(SnowflakeConnectionV1.java:116)
at net.snowflake.client.jdbc.SnowflakeConnectionV1.<init>(SnowflakeConnectionV1.java:96)
at net.snowflake.client.jdbc.SnowflakeDriver.connect(SnowflakeDriver.java:164)
Error in .jcall(drv@jdrv, "Ljava/sql/Connection;", "connect", as.character(url)[1], :
net.snowflake.client.jdbc.SnowflakeSQLException: JDBC driver encountered communication error. Message: Exception encountered when opening connection: no protocol: /login/cert.
I presume that Okta is set up with MFA. If so, that is the cause of the error, since the Snowflake drivers do not support native Okta authentication when MFA is enabled.
You need to use externalbrowser as the authenticator option if the requirement is to use Okta + MFA.
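For reference, here is a minimal Java sketch of the same connection through the Snowflake JDBC driver (which is what the R dbConnect call above uses under the hood), with externalbrowser as the authenticator; the account URL and username are placeholders taken from the question:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class SnowflakeExternalBrowserExample {
    public static void main(String[] args) throws Exception {
        // "externalbrowser" makes the driver open a browser window for the
        // Okta/SAML login, which is where any MFA prompt is handled.
        Properties props = new Properties();
        props.put("user", "name@company.com");          // placeholder username
        props.put("authenticator", "externalbrowser");  // instead of the Okta URL

        String url = "jdbc:snowflake://company.snowflakecomputing.com/"; // placeholder account

        try (Connection conn = DriverManager.getConnection(url, props)) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}

Note that externalbrowser needs a browser available on the machine where the driver runs, which may not be the case in a hosted R Workbench session.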

How to connect to Neptune using Version 4 Signing dependency

I have an EC2 instance that can connect to Gremlin using the Gremlin Console, or by pulling in this repository and running the Maven command.
However, when I use the recommended Version 4 signing dependency:
dependencies {
    compile(
        ...
        // neptune sigv4
        [group: "com.amazonaws", name: "aws-java-sdk-core", version: "1.11.307"],
        [group: "com.amazonaws", name: "amazon-neptune-sigv4-signer", version: "1.0"],
        [group: "com.amazonaws", name: "amazon-neptune-gremlin-java-sigv4", version: "1.0"],
        ...
    )
}
On a very similar hello world program:
package com.test.neptune;

import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.Result;
import org.apache.tinkerpop.gremlin.driver.ResultSet;
import org.apache.tinkerpop.gremlin.driver.SigV4WebSocketChannelizer;

public class NeptuneExampleCopy {

    private static final String NEPTUNE_ENDPOINT = "my.endpoint.url";
    private static final int NEPTUNE_PORT = 0;

    public static void main(String[] args) {
        // connect to the neptune cluster
        final Cluster cluster = Cluster.build()
                .addContactPoint(NEPTUNE_ENDPOINT)
                .port(NEPTUNE_PORT)
                .channelizer(SigV4WebSocketChannelizer.class)
                .create();

        // run a traversal, print the results
        final Client client = cluster.connect();
        final ResultSet rs = client.submit("g.V().count()");
        for (Result r : rs) {
            System.out.println(r);
        }

        // close the cluster
        cluster.close();
    }
}
Gradle throws the following exception:
Apr 25, 2019 5:24:21 PM io.netty.channel.ChannelInitializer exceptionCaught
WARNING: Failed to initialize a channel. Closing: [id: 0xd894eb28]
com.amazon.neptune.gremlin.driver.exception.SigV4PropertiesNotFoundException: Unable to load SigV4 properties from any of the providers
at com.amazon.neptune.gremlin.driver.sigv4.ChainedSigV4PropertiesProvider.getSigV4Properties(ChainedSigV4PropertiesProvider.java:74)
at com.amazon.neptune.gremlin.driver.sigv4.AwsSigV4ClientHandshaker.loadProperties(AwsSigV4ClientHandshaker.java:102)
at com.amazon.neptune.gremlin.driver.sigv4.AwsSigV4ClientHandshaker.<init>(AwsSigV4ClientHandshaker.java:64)
at org.apache.tinkerpop.gremlin.driver.SigV4WebSocketChannelizer.createHandler(SigV4WebSocketChannelizer.java:210)
at org.apache.tinkerpop.gremlin.driver.SigV4WebSocketChannelizer.configure(SigV4WebSocketChannelizer.java:176)
at org.apache.tinkerpop.gremlin.driver.Channelizer$AbstractChannelizer.initChannel(Channelizer.java:140)
at org.apache.tinkerpop.gremlin.driver.Channelizer$AbstractChannelizer.initChannel(Channelizer.java:92)
at io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:113)
at io.netty.channel.ChannelInitializer.handlerAdded(ChannelInitializer.java:105)
at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:617)
at io.netty.channel.DefaultChannelPipeline.access$000(DefaultChannelPipeline.java:46)
at io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute(DefaultChannelPipeline.java:1467)
at io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers(DefaultChannelPipeline.java:1141)
at io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded(DefaultChannelPipeline.java:666)
at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:510)
at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:423)
at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:482)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at java.lang.Thread.run(Thread.java:748)
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: java.util.concurrent.TimeoutException: Timed out while waiting for an available host - check the client configuration and connectivity to the server if this message persists
at org.apache.tinkerpop.gremlin.driver.Client.submit(Client.java:214)
at org.apache.tinkerpop.gremlin.driver.Client.submit(Client.java:198)
at com.test.neptune.NeptuneExampleCopy.main(NeptuneExampleCopy.java:25)
Caused by: java.lang.RuntimeException: java.util.concurrent.TimeoutException: Timed out while waiting for an available host - check the client configuration and connectivity to the server if this message persists
at org.apache.tinkerpop.gremlin.driver.Client.submitAsync(Client.java:310)
at org.apache.tinkerpop.gremlin.driver.Client.submitAsync(Client.java:242)
at org.apache.tinkerpop.gremlin.driver.Client.submit(Client.java:212)
... 2 more
Caused by: java.util.concurrent.TimeoutException: Timed out while waiting for an available host - check the client configuration and connectivity to the server if this message persists
at org.apache.tinkerpop.gremlin.driver.Client$ClusteredClient.chooseConnection(Client.java:499)
at org.apache.tinkerpop.gremlin.driver.Client.submitAsync(Client.java:305)
... 4 more
Apr 25, 2019 5:24:22 PM io.netty.channel.ChannelInitializer exceptionCaught
WARNING: Failed to initialize a channel. Closing: [id: 0xc3ff34e0]
com.amazon.neptune.gremlin.driver.exception.SigV4PropertiesNotFoundException: Unable to load SigV4 properties from any of the providers
at com.amazon.neptune.gremlin.driver.sigv4.ChainedSigV4PropertiesProvider.getSigV4Properties(ChainedSigV4PropertiesProvider.java:74)
at com.amazon.neptune.gremlin.driver.sigv4.AwsSigV4ClientHandshaker.loadProperties(AwsSigV4ClientHandshaker.java:102)
at com.amazon.neptune.gremlin.driver.sigv4.AwsSigV4ClientHandshaker.<init>(AwsSigV4ClientHandshaker.java:64)
at org.apache.tinkerpop.gremlin.driver.SigV4WebSocketChannelizer.createHandler(SigV4WebSocketChannelizer.java:210)
at org.apache.tinkerpop.gremlin.driver.SigV4WebSocketChannelizer.configure(SigV4WebSocketChannelizer.java:176)
at org.apache.tinkerpop.gremlin.driver.Channelizer$AbstractChannelizer.initChannel(Channelizer.java:140)
at org.apache.tinkerpop.gremlin.driver.Channelizer$AbstractChannelizer.initChannel(Channelizer.java:92)
at io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:113)
How could this code be fixed? Is there a better Version 4 signing dependency?
The SigV4 handler tries to fetch your AWS credentials through multiple credential providers. If no credential provider has been initialized, you are bound to see this exception. How have you initialized your AWS credentials? You can use any of the standard sources, such as environment variables or JVM system properties. See the documentation below for more details:
https://docs.aws.amazon.com/neptune/latest/userguide/iam-auth-connecting-gremlin-java.html
Update: Do make sure you are using the latest versions of all the packages and dependencies.
For example:
// neptune sigv4
[group: "com.amazonaws", name: "aws-java-sdk-core", version: "1.11.542"],
[group: "com.amazonaws", name: "amazon-neptune-sigv4-signer", version: "1.0.4"],
[group: "com.amazonaws", name: "amazon-neptune-gremlin-java-sigv4", version: "1.0.5"],
// for neptune
[group: "org.apache.tinkerpop", name: "gremlin-driver", version: "3.4.1"]

webdriver.io crashes inside a Jade template onRendered function

I am testing my Meteor app's UI with some browser tests. I use http://webdriver.io and a Selenium Chrome node (https://hub.docker.com/r/selenium/standalone-chrome/).
I use the webdriver.io test runner for the tests and Mocha as the test framework.
When I enter this block inside a Jade template (by opening the corresponding page):
Template.boardBody.onRendered(function() {
  let imagePath = new ReactiveVar('');
  this.autorun(() => {
    imagePath.set(Meteor.settings.public.backgroundPath[1]);
    //document.getElementsByClassName('board-wrapper')[0].style.backgroundImage = "url('" + imagePath.get() + "')";
    $('.board-wrapper').css('background-image', "url('" + imagePath.get() + "')");
  });
});
The headless Chrome crashes with this error:
{ Error: An unknown server-side error occurred while processing the command.
at BoardPage.open (tests/board.page.js:20:5)
at Context.<anonymous> (tests/board.test.js:22:17)
at Promise.F (node_modules/core-js/library/modules/_export.js:35:28)
at execute(<Function>) - at BoardPage.open (tests/page.js:11:13)
message: 'unknown error: session deleted because of page crash\nfrom tab crashed',
type: 'RuntimeError',
screenshot: 'Just a black page',
seleniumStack:
{ status: 13,
type: 'UnknownError',
message: 'An unknown server-side error occurred while processing the command.',
orgStatusMessage: 'unknown error: session deleted because of page crash\nfrom tab crashed\n (Session info: chrome=59.0.3071.115)\n (Driver info: chromedriver=2.30.477691 (6ee44a7247c639c0703f291d320bdf05c1531b57),platform=Linux 4.4.4-200.fc22.x86_64 x86_64) (WARNING: The server did not provide any stacktrace information)\nCommand duration or timeout: 4.13 seconds\nBuild info: version: \'3.4.0\', revision: \'unknown\', time: \'unknown\'\nSystem info: host: \'f362d8ab8951\', ip: \'172.17.0.1\', os.name: \'Linux\', os.arch: \'amd64\', os.version: \'4.4.4-200.fc22.x86_64\', java.version: \'1.8.0_131\'\nDriver info: org.openqa.selenium.chrome.ChromeDriver\nCapabilities [{applicationCacheEnabled=false, rotatable=false, mobileEmulationEnabled=false, networkConnectionEnabled=false, chrome={chromedriverVersion=2.30.477691 (6ee44a7247c639c0703f291d320bdf05c1531b57), userDataDir=/tmp/.org.chromium.Chromium.NUsUeZ}, takesHeapSnapshot=true, pageLoadStrategy=normal, databaseEnabled=false, handlesAlerts=true, hasTouchScreen=false, version=59.0.3071.115, platform=LINUX, browserConnectionEnabled=false, nativeEvents=true, acceptSslCerts=true, locationContextEnabled=true, webStorageEnabled=true, browserName=chrome, takesScreenshot=true, javascriptEnabled=true, cssSelectorsEnabled=true, unexpectedAlertBehaviour=}]\nSession ID: f1e261ec57fde3697e98945af051d236' },
shotTaken: true }
I use chai.expect for my assertion statements, and I have a feeling that the promises are somehow messing up the headless Chrome.
Does anyone know why this is happening?

Unable to connect to DynamoDB table - UnknownEndpoint: Inaccessible host

I'm new to DynamoDB. I have created a table and am trying to insert data into it. It works fine when I connect from my home internet connection, but when I try from my office network I get the error below.
I suspect this is due to proxy issues. Can you please help me resolve it? Thank you.
[UnknownEndpoint: Inaccessible host: `dynamodb.ap-southeast-2.amazonaws.com'. This service may not be available in the `ap-southeast-2' region.]
message: 'Inaccessible host: `dynamodb.ap-southeast-2.amazonaws.com\'. This service may not be available in the `ap-southeast-2\' region.',
code: 'UnknownEndpoint',
region: 'ap-southeast-2',
hostname: 'dynamodb.ap-southeast-2.amazonaws.com',
retryable: true,
originalError:
{ [NetworkingError: getaddrinfo ENOTFOUND dynamodb.ap-southeast-2.amazonaws.com dynamodb.ap-southeast-2.amazonaws.com:443]
message: 'getaddrinfo ENOTFOUND dynamodb.ap-southeast-2.amazonaws.com dynamodb.ap-southeast-2.amazonaws.com:443',
code: 'NetworkingError',
errno: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'dynamodb.ap-southeast-2.amazonaws.com',
host: 'dynamodb.ap-southeast-2.amazonaws.com',
port: 443,
region: 'ap-southeast-2',
retryable: true,
time: Mon Sep 21 2015 11:19:58 GMT+1000 (AUS Eastern Standard Time) },
time: Mon Sep 21 2015 11:19:58 GMT+1000 (AUS Eastern Standard Time) }
Thank you for the pointers. I managed to solve the issue using the code snippet below.
var AWS = require('aws-sdk');
var proxy = require('proxy-agent');

// Route all AWS SDK requests through the corporate proxy.
AWS.config.update({
  httpOptions: {
    agent: proxy('http://{user_name}:{password}@<proxy>:<port>')
  }
});
This is documented on Amazon's aws-sdk configuration page: http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/node-configuring.html
