ERROR: Attribute '' cannot be parsed: Cannot read property 'dataType' of undefined - sequelize-cli

I was creating a document_types table using the following CLI command:
sequelize model:create --name User --attributes name:string, username:string, email:string, password:string

Solution: remove the spaces after the commas between the attributes to avoid the error. The correct command would be:
sequelize model:create --name User --attributes name:string,username:string,email:string,password:string


S3CopyObjectOperator - An error occurred (NoSuchKey) when calling the CopyObject operation: The specified key does not exist

If I run this locally from the CLI, it runs successfully and copies the files from the other bucket/key into the correct location in my bucket:
aws s3 sync s3://client_export/ref/commissions/snapshot_date=2022-01-01/ s3://bi-dev/KSM/refinery29/commissions/snapshot_date=2022-01-01/
When I try with the S3CopyObjectOperator I see the NoSuchKey error:
copy_commissions_data = S3CopyObjectOperator(
    task_id='copy_commissions_data',
    aws_conn_id='aws_default',
    source_bucket_name='client_export',
    dest_bucket_name='bi-dev',
    source_bucket_key='ref/commissions/snapshot_date=2022-01-01,
    dest_bucket_key='KSM/refix/commissions/snapshot_date=2022-01-01',
    dag=dag
)
I've also tried adding a / before and after the key names, and both at once, but I get the same error.
You are missing a closing quote at the end of line 6; it should be:
source_bucket_key='ref/commissions/snapshot_date=2022-01-01',

Using Beeline as an example (vs hive cli)?

I have a Sqoop job run via an Oozie coordinator. After a major upgrade we can no longer use the hive cli and were told to use beeline, but I'm not sure how to do this. Here is the current process:
I have a hive file: hive_ddl.hql
use schema_name;
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.exec.max.dynamic.partitions=100000;
SET hive.exec.max.dynamic.partitions.pernode=100000;
SET mapreduce.map.memory.mb=16384;
SET mapreduce.map.java.opts=-Xmx16G;
SET hive.exec.compress.output=true;
SET mapreduce.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
drop table if exists table_name_stg purge;
create external table if not exists table_name_stg
(
col1 string,
col2 string,
...
)
row format delimited
fields terminated by '\001'
stored as textfile
location 'my/location/table_name_stg';
drop table if exists table_name purge;
create table if not exists table_name
stored as parquet
tblproperties('parquet.compress'='snappy') as
select * from schema_name.table_name_stg;
drop table if exists table_name_stg purge;
This is pretty straightforward: make a staging table, then use it to build the final table.
it's then called in a .sh file as such:
hive cli -f $HOME/my/path/hive_ddl.hql
I'm new to most of this and not sure what beeline is, and I couldn't find any examples of how to use it to accomplish the same thing my hive cli call does. I'm hoping it's as simple as calling the hive_ddl.hql file differently, rather than having to rewrite everything.
Any help is greatly appreciated.
Beeline is a command-line shell supported in Hive. In your case you can replace the hive cli invocation with a beeline command in the same .sh file. It would look roughly like the one given below.
beeline -u <hiveJDBCUrl> -f test.hql
You can explore more of the beeline command options at the link below:
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-Beeline%E2%80%93CommandLineShell
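Applied to the script above, the replacement line might look like the sketch below. The JDBC URL, user, and password are placeholders (assumptions, not values from the question); the actual connection details depend on your HiveServer2 setup.

```shell
#!/bin/sh
# Hypothetical connection details: replace host, port, and credentials
# with those of your HiveServer2 instance.
JDBC_URL="jdbc:hive2://hiveserver2-host:10000/schema_name"

# Same .hql file as before, now submitted through beeline instead of hive cli.
beeline -u "$JDBC_URL" -n my_user -p my_password -f "$HOME/my/path/hive_ddl.hql"
```

Because beeline runs the script through HiveServer2 rather than a local Hive session, the .hql file itself should not need to change.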

Interact with the sample CorDapp via the interactive shell

When I checked the output of run vaultQuery contractStateType: com.example.state.IOUState
it showed:
Could not parse as a command: Cannot construct instance of java.lang.Class, problem: com.example.state.IOUState
at [Source: UNKNOWN; line: -1, column: -1]
I typed net.corda.samples.example.states.IOUState and it worked. You can see the package name in ExampleFlow.kt.
Nice, @shok1122.
Paste & run: run vaultTrack contractStateType: net.corda.samples.example.states.IOUState

Using "project" with "select" in gremlin-javascript throws an error

I have a simple query which gives me the expected result when I run it in the console, but it fails against an AWS Neptune DB when run through the gremlin-javascript (Node.js) driver.
Query running successfully in the console:
g.V().hasLabel('item').project('id').by(id).select(values)
==>[item1]
==>[item2]
==>[item3]
I tried to run the same query in gremlin-javascript using gremlin.process.t:
g.V().hasLabel('item').project('id').by(gremlin.process.t.id).select(gremlin.process.t.values)
But I get the following error:
error Error: Server error: {"requestId":"0521e945-04fb-4173-b4fe-0426809500fc","code":"InternalFailureException","detailedMessage":"null:select([null])"} (599)
What is the correct way to use project with select in gremlin-javascript?
Note that values is not on T; it's on Column:
gremlin> values.class
==>class org.apache.tinkerpop.gremlin.structure.Column$2
Therefore, you need to reference that enum in Javascript:
const t = gremlin.process.traversal.t
const c = gremlin.process.traversal.column
g.V().hasLabel('item').
  project('id').
  by(t.id).
  select(c.values)
You can read about common imports for gremlin-javascript here.

Sybase select query in Unix

//bin/isql -b -SF2_PR_LV_SL -UAA -bb -DFH2 -w200 -Jroman9 -b
fre.sh[4]: //bin/isql: not found.
SET NOCOUNT ON
fre.sh[5]: SET: not found.
GO
fre.sh[6]: GO: not found.
date +%Y%m%d
dat=20180801
uSE FLASH2
fre.sh[8]: uSE: not found.
GO
fre.sh[9]: GO: not found.
CREATE TABLE
fre.sh[10]: CREATE: not found.
fre.sh[11]: Syntax error at line 12 : `(' is not expected.
Can someone explain how this can be solved?
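The "not found" errors show the shell itself is trying to execute the SQL statements (SET, GO, USE, CREATE) as commands, and that //bin/isql is not a valid path to the isql binary. A minimal sketch of a fix, assuming isql is on the PATH and reusing the flags from the original command (the SQL body and any password flag are placeholders), is to feed the statements to isql through a here-document:

```shell
#!/bin/sh
# Sketch of a fix: the SQL must go to isql's standard input via a
# here-document, so the shell does not try to run SET/GO/USE itself.
dat=$(date +%Y%m%d)

isql -b -SF2_PR_LV_SL -UAA -DFH2 -w200 -Jroman9 <<EOF
SET NOCOUNT ON
GO
USE FLASH2
GO
CREATE TABLE ...
GO
EOF
```

Everything between <<EOF and the closing EOF is passed to isql as its input, which matches how the script was presumably meant to behave.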
