using "project" with "select" in gremlin-javascript is throwing error - gremlin

I have a simple query that gives me the expected result when I run it in the console, but fails when I run it against an AWS Neptune DB using the Gremlin Node.js driver (gremlin-javascript).
The query runs successfully in the console:
g.V().hasLabel('item').project('id').by(id).select(values)
==>[item1]
==>[item2]
==>[item3]
I tried to run the same query in gremlin-javascript, referencing gremlin.process.t:
g.V().hasLabel('item').project('id').by(gremlin.process.t.id).select(gremlin.process.t.values)
But I get the following error:
error Error: Server error: {"requestId":"0521e945-04fb-4173-b4fe-0426809500fc","code":"InternalFailureException","detailedMessage":"null:select([null])"} (599)
What is the correct way to use project with select in gremlin-javascript?

Note that values is not on T; it's on Column:
gremlin> values.class
==>class org.apache.tinkerpop.gremlin.structure.Column$2
Therefore, you need to reference that enum in JavaScript:
const t = gremlin.process.t
const c = gremlin.process.column
g.V().hasLabel('item').
  project('id').
    by(t.id).
  select(c.values)
You can read about common imports for gremlin-javascript in the TinkerPop reference documentation.
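For completeness, here is a minimal end-to-end sketch. The connection setup follows the common gremlin-javascript pattern rather than anything stated in the question, and the endpoint URL is a placeholder:
const gremlin = require('gremlin');
const { t, column } = gremlin.process;
const traversal = gremlin.process.AnonymousTraversalSource.traversal;
const DriverRemoteConnection = gremlin.driver.DriverRemoteConnection;

// Placeholder endpoint; for Neptune this would be the cluster's wss:// URL
const g = traversal().withRemote(
  new DriverRemoteConnection('wss://your-neptune-endpoint:8182/gremlin'));

async function itemIds() {
  // project() builds a one-key map per vertex; select(column.values)
  // extracts the map values, mirroring select(values) in the console
  return g.V().hasLabel('item').
    project('id').
      by(t.id).
    select(column.values).
    toList();
}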

Related

Airflow PostgresOperator SQL parameter syntax error

I have the following task in my DAG:
create_features_table = PostgresOperator(
    task_id="create_features_table",
    postgres_conn_id="featuredb",
    sql="src/test.sql "
)
But when I test the task, I get this error:
psycopg2.errors.SyntaxError: syntax error at or near "src"
LINE 1: src/test.sql
The content of the test.sql script is:
CREATE TABLE test(
    C1 int,
    C2 int,
);
I can't spot the error in the syntax, but that's because this is my first DAG. Any help would be greatly appreciated.
If I run the script directly from the Postgres container's psql using "\i src/test.sql", it works fine.
I have tested the connection from the Airflow web server and the connection works.
I found that I had to put a space before the closing quote to avoid a jinja2.exceptions.TemplateNotFound error, but I haven't been able to find the syntax error.
According to the documentation (https://airflow.apache.org/docs/apache-airflow-providers-postgres/stable/_api/airflow/providers/postgres/operators/postgres/index.html#airflow.providers.postgres.operators.postgres.PostgresOperator), if you are passing the path of a SQL script, it must end with .sql.
You have a whitespace at the end of your path, so the operator thinks it is a query to be executed against your Postgres instance, not a file containing the query. You can see this in the response from Postgres: run the query src/test.sql on your Postgres instance and you will get the same syntax error.
You can fix it easily by removing that whitespace:
sql="src/test.sql"

R bigrquery - how to catch error messages from executed SQL?

Say I have some SQL code that refreshes a table of data, and I would like to schedule an R script to run this code daily. Is there a way to capture any error message the SQL code may throw and save it to an R variable, instead of having the error message displayed in the R console log?
For example, assume I have a stored procedure sp_causing_error() in BigQuery that takes data from a source table source_table and refreshes a target table table_to_refresh.
CREATE OR REPLACE PROCEDURE sp_causing_error()
BEGIN
  CREATE OR REPLACE TABLE table_to_refresh AS (
    SELECT non_existent_column, x, y, z
    FROM source_table
  );
END;
Assume the schema of source_table has changed and the column non_existent_column no longer exists. When attempting to call sp_causing_error() in RStudio via:
library(bigrquery)
query <- "CALL sp_causing_error()"
bq_project_query(my_project, query)
We get an error message printed to the console (which masks the actual error message we would encounter if running in BigQuery):
Error in UseMethod("as_bq_table") : no applicable method for 'as_bq_table' applied to an object of class "NULL"
If we were to run sp_causing_error() in BigQuery, it throws an error message stating:
Query error: Unrecognized name: non_existent_column at [sp_throw_error:3:8]
Are the query error messages displayed in BigQuery ever captured anywhere in bigrquery when executing SQL? My goal is to have some sort of try/catch block in the R script that catches an error message, which can then be written to an output file if the SQL code did not run successfully. I'm hoping there is a way to capture the descriptive error message from BigQuery and assign it to an R variable for further processing.
UPDATE
R's tryCatch() function comes in handy here to catch the R error message:
query <- "CALL sp_causing_error()"
result <- tryCatch(
bq_project_query("research-01-217611", query),
error = function(err) {
return(err)
}
)
result now contains the error message from the R console:
<simpleError in UseMethod("as_bq_table"): no applicable method for 'as_bq_table' applied to an object of class "NULL">
However, this is still not the descriptive error message we see if we execute the same SQL code in BigQuery (quoted above), which references the unrecognized column name. Can we catch that error message instead of the more generic R one?
UPDATE/ANSWER
Wrapping the stored procedure call within R using BigQuery's Begin...Exception...End syntax lets us get at the actual error message. Example code snippet:
query <- '
BEGIN
  CALL sp_causing_error();
EXCEPTION WHEN ERROR THEN
  SELECT 1 AS error_flag, @@error.message AS error_message, @@error.statement_text AS error_statement_text, @@error.formatted_stack_trace AS stack_trace;
END;
'
query_result <- bq_table_download(bq_project_query(<project>, query))
error_flag <- query_result["error_flag"][[1]]
if (error_flag == 0) {
  print("Job ran successfully")
} else {
  print("Job failed")
  # Access error message variables here and take additional action as desired
}
Warning: this solution can cause an R error if the stored procedure completes successfully, because error_flag will not exist unless it is explicitly returned at the end of the stored procedure. This can be worked around by adding one line at the end of your stored procedure in BigQuery to set the flag appropriately, so that bq_table_download() gets a value when the procedure runs successfully:
BEGIN
  -- BigQuery stored procedure code
  -- ...
  -- ...
  SELECT 0 AS error_flag;
END;
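Tying the pieces together, a reusable wrapper might look like the sketch below; run_bq_call() is a hypothetical helper name, and the sketch assumes (as the answer above does) that BigQuery returns the result of the script's last SELECT as the job result:
library(bigrquery)

run_bq_call <- function(project, proc_name) {
  query <- sprintf("
BEGIN
  CALL %s();
  SELECT 0 AS error_flag, '' AS error_message;
EXCEPTION WHEN ERROR THEN
  SELECT 1 AS error_flag, @@error.message AS error_message;
END;", proc_name)
  result <- bq_table_download(bq_project_query(project, query))
  if (result$error_flag[[1]] == 1) {
    # Persist the descriptive BigQuery error for later processing
    writeLines(result$error_message[[1]], "bq_error.log")
  }
  result
}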

Reticulate AWS Cognito

This is my Python code (which I've checked, and it works):
import boto3
from warrant.aws_srp import AWSSRP

def auth(USERNAME, PASSWORD):
    client = boto3.client('cognito-idp', region_name=region_name)
    aws = AWSSRP(username=USERNAME, password=PASSWORD, pool_id=POOL_ID,
                 client_id=CLIENT_ID, client=client)
    try:
        tokens = aws.authenticate_user()
        return tokens
    except Exception as e:
        return e
I'm working in R to create a visual interface for some operations (including this one), and using R is a requirement.
I use the reticulate R package to execute Python code. I tested it with some dummy code to check that it works correctly (it does).
When I execute the above function by running:
reticulate::source_python(FILE_PATH)
py$auth(USERNAME,PASSWORD)
I get the following error:
An error occurred (InvalidParameterException) when calling the RespondToAuthChallenge operation: TIMESTAMP format should be EEE MMM d HH:mm:ss z yyyy in english.
I searched a lot but found nothing; I suppose there may be some sort of wrapper or formatter. Maybe someone has already faced this problem...
Thanks a lot for any help.
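One avenue worth checking (an assumption based on the wording of the error, not a confirmed fix): warrant builds the SRP timestamp with locale-dependent strftime codes (%a, %b), so a non-English LC_TIME locale can produce day and month names that Cognito rejects. Forcing an English/C locale before authenticating may help:
import locale

# Use English day/month abbreviations regardless of the host locale
locale.setlocale(locale.LC_TIME, 'C')

import boto3
from warrant.aws_srp import AWSSRP
# ... define auth() as above and call it through reticulate as before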

Error while searching against the Scopus Search API and saving the results in batches into an XML file

I'm trying to extract abstracts from Scopus, using rscopus together with the functions from https://github.com/christopherBelter/scopusAPI
I have an API key from my university account, but when trying to save the data as XML with:
theXML <- searchByString(string = query, outfile = "testdata.xml")
I get an error:
Error in searchByString(string = query, outfile = "testdata.xml") : Unauthorized (HTTP 401).
3. stop(http_condition(x, "error", task = task, call = call))
2. httr::stop_for_status(theURL) at scopusAPI.R#12
1. searchByString(string = query, outfile = "testdata.xml")
Is something wrong with my API key, given the "Unauthorized (HTTP 401)" message?
Have you tried validating your query string outside of the script?
The Elsevier Developer Portal would be a good place to try out your string, to confirm it works or to refine it until it does:
https://dev.elsevier.com/scopus.html

Gremlin - TypeError: Object of type GraphTraversal is not JSON serializable

The piece of Gremlin code below runs perfectly well in the Gremlin console (it finds the unique start and end points of a k-step ego network, along with the minimum distance to each endpoint):
g.V(42062000).as("from")
.repeat(both().as("to")).emit().times(3).path()
.count(local).as("pathlen")
.select("from", "to", "pathlen")
.dedup("from", "to").toList()
And gives an output similar to the following, which is as expected:
==>{from=v[42062000], to=v[83607800], pathlen=2}
==>{from=v[42062000], to=v[23683248], pathlen=3}
==>{from=v[42062000], to=v[41762840], pathlen=3}
==>{from=v[42062000], to=v[42062000], pathlen=3}
==>{from=v[42062000], to=v[83599456], pathlen=3}
However, when converting the code to conform to the gremlinpython wrapper
(i.e. after substituting as_ for as), I get the error TypeError: Object of type GraphTraversal is not JSON serializable, even though it's the same query.
Has anyone faced similar issues?
I am using gremlinpython 3.4.2, but was originally using 3.3.3. My version of Python is 3.7.3.
Import the statics using:
from gremlin_python import statics
statics.load_statics(globals())
or import what you need explicitly, e.g.:
from gremlin_python.process.graph_traversal import elementMap, range_, count
from gremlin_python.process.traversal import Scope  # local is Scope.local
Also, common reserved words must end with _. For example:
as_, range_
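For reference, here is how the query from the question might look in gremlinpython; a sketch assuming a local Gremlin Server (the ws:// URL is a placeholder), with anonymous steps written against __ so that no statics loading is needed:
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __
from gremlin_python.process.traversal import Scope
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

g = traversal().withRemote(
    DriverRemoteConnection('ws://localhost:8182/gremlin', 'g'))

# Reserved words take a trailing underscore (as_); local becomes Scope.local
result = (g.V(42062000).as_('from').
          repeat(__.both().as_('to')).emit().times(3).
          path().count(Scope.local).as_('pathlen').
          select('from', 'to', 'pathlen').
          dedup('from', 'to').toList())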
