Can Cloudera Impala work with a Group Separator delimiter?

We have files that use the Group Separator character (ASCII 29) as the delimiter. I am trying to create a table (using the "create table from file" option) in the Cloudera metastore to be used by Impala, but it does not seem to recognize the delimiter. Options I have tried include:
"\029", "29", "029"
Is this even possible, or does the Cloudera metastore not recognize such characters?

OK, so with some professional help I was able to solve it. The way to solve it is to use the octal value without the quotes (even though the tool tells you to use quotes): using \035 solved the problem.
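For reference, here is roughly what the underlying DDL looks like when you create the table by hand instead of through the wizard; the table name, columns, and location below are only a hypothetical sketch, and the key detail is the octal escape for the Group Separator (ASCII 29 = octal 035) in FIELDS TERMINATED BY. Note that in hand-written DDL the escape does go inside quotes; the "no quotes" advice above applies to the wizard's delimiter field.
-- Hypothetical table, columns and location; only the delimiter clause matters here.
CREATE EXTERNAL TABLE gs_delimited_data (
  id INT,
  name STRING,
  payload STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\035'
STORED AS TEXTFILE
LOCATION '/user/hive/warehouse/gs_delimited_data';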

Related

How do I rename a database in influxdb?

Looking through the InfluxDB website, the documentation doesn't seem to cover some functionality that typical database commands would. Does anyone know how to move a database to a new name, or outright rename a database, inside of InfluxDB?
Does anyone have a recommendation for which R package to use with InfluxDB?
Is there a way to launch influx with -precision rfc3339 (human-readable timestamps) automatically set within the configuration?
Currently, renaming databases is not supported in InfluxDB. Discussion is going on in InfluxDB's GitHub repo about the feature's need and complexity:
https://github.com/influxdata/influxdb/issues/4154
The feature was implemented and then reverted due to bugs.
Renaming is only possible by copying data into a new database:
SELECT * INTO "copy_NOAA_water_database"."autogen".:MEASUREMENT FROM "NOAA_water_database"."autogen"./.*/ GROUP BY *
Documentation: https://archive.docs.influxdata.com/influxdb/v1.2//query_language/data_exploration/#example-1-rename-a-database
The GROUP BY * clause is important as it preserves the tags.
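For completeness, a sketch of the whole copy-based rename, using hypothetical database names and the default autogen retention policy; SELECT INTO does not create the target database for you, so it has to exist first:
-- 1. Create the target database (SELECT INTO will not create it).
CREATE DATABASE "copy_NOAA_water_database"
-- 2. Copy every measurement; GROUP BY * keeps tags as tags instead of turning them into fields.
SELECT * INTO "copy_NOAA_water_database"."autogen".:MEASUREMENT FROM "NOAA_water_database"."autogen"./.*/ GROUP BY *
-- 3. Drop the old database only after verifying the copy.
DROP DATABASE "NOAA_water_database"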
I have not tried it myself, but there are some answers in this thread: https://github.com/influxdata/influxdb.com/issues/384
ALTER DATABASE <db> RENAME TO <new_db>

While doing a migrate, how is the description in the schema_version table created?

I am just getting into Flyway for our SQL Server database. So far, everything is straightforward.
However, I can't seem to find a setting for the migrate command that will put a useful comment in the schema_version.description column. Granted, all I have done is a few minor migrations just to test it out, but I can't seem to find any setting for this. Is there one? Is it pulled from the comments of a script?
The migration description is taken from the name of your migration file, e.g.
V1__this_will_be_your_description.sql
The resulting description has the version prefix, version, and file suffix trimmed, and underscores replaced with spaces. So the resulting description for the above would be
this will be your description
See MigrationInfoHelper if you are interested in the details.
To the best of my knowledge there is no other way to influence the description.
See the documentation.
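As a concrete illustration (the file name and its contents below are hypothetical), a SQL Server migration named V2__add_customer_table.sql would be recorded with version "2" and description "add customer table":
-- File: V2__add_customer_table.sql (hypothetical example)
-- The description comes from the file name only, not from comments inside the script.
CREATE TABLE customer (
    id   INT IDENTITY(1,1) PRIMARY KEY,
    name NVARCHAR(100) NOT NULL
);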

Small custom build of SQLite3

We are using sqlite3 in an application and we really need a super small build of sqlite3, achieved by removing unnecessary functions. We are already using the -Os flag.
Our application only uses a single table with a couple of indexes and simple select, update, insert, and delete queries. All the columns are either integer, text, or blob.
I tried to generate a custom build of sqlite3.c from the canonical source using the various SQLITE_OMIT_* flags below, but it seems to have only a marginal impact on the binary size.
Any suggestions on other OMIT options? Also, do any of the OMIT options have side effects for the limited use described above?
-DSQLITE_OMIT_ALTERTABLE
-DSQLITE_OMIT_ANALYZE
-DSQLITE_OMIT_ATTACH
-DSQLITE_OMIT_AUTHORIZATION
-DSQLITE_OMIT_BUILTIN_TEST
-DSQLITE_OMIT_CAST
-DSQLITE_OMIT_CHECK
-DSQLITE_OMIT_COMPILEOPTION_DIAGS
-DSQLITE_OMIT_COMPLETE
-DSQLITE_OMIT_COMPOUND_SELECT
-DSQLITE_OMIT_CTE
-DSQLITE_OMIT_DATETIME_FUNCS
-DSQLITE_OMIT_DECLTYPE
-DSQLITE_OMIT_DEPRECATED
-DSQLITE_OMIT_EXPLAIN
-DSQLITE_OMIT_FLAG_PRAGMAS
-DSQLITE_OMIT_FLOATING_POINT
-DSQLITE_OMIT_FOREIGN_KEY
-DSQLITE_OMIT_UTF16
You can use all the SQLITE_OMIT_xxx options because you did not mention using any of those features.
Don't use SQLITE_OMIT_WSD, which does not actually remove code and would make the library bigger.
Using SQLITE_OMIT_AUTOINIT would not make sense.
If you forgot to mention that the data is stored on disk, you should avoid SQLITE_OMIT_DISKIO.
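For concreteness, here is a sketch of the kind of schema and statements that stay inside the reduced feature set (table and index names are hypothetical); note that with -DSQLITE_OMIT_FLOATING_POINT there can be no REAL values, and with -DSQLITE_OMIT_DATETIME_FUNCS no date()/datetime() calls:
-- Hypothetical single-table workload: INTEGER, TEXT and BLOB columns only.
CREATE TABLE items (
    id      INTEGER PRIMARY KEY,
    name    TEXT,
    payload BLOB
);
CREATE INDEX idx_items_name ON items(name);
INSERT INTO items (name, payload) VALUES ('example', x'DEADBEEF');
SELECT id, name FROM items WHERE name = 'example';
UPDATE items SET name = 'renamed' WHERE id = 1;
DELETE FROM items WHERE id = 1;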

Convert PL/SQL to Hive QL

I want a tool through which I can get the corresponding Hive query by giving it a PL/SQL query. There are tools available that work with SQL and HQL, e.g. Toad for Cloud Databases, but it does not show me the corresponding Hive query.
Is there any such tool that converts a given SQL query to HQL? Please help me.
Thanks and Regards,
Ratan
Please take a look at the open-source project PL/HQL at http://www.hplsql.org/, which is now part of Hive 2.x and later. It allows you to run existing SQL Server, Oracle, Teradata, MySQL, etc. stored procedures in Hive.
Ratan, I did not know how to start responding, so let's start like this. I think you checked Toad and are thinking that there is a tool to convert SQL to Hive QL. I do not think there is such a tool.
Let me clarify: Hive QL is very similar to SQL. Check these links before you try to write some queries:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual,
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF.
It is simple to understand if you know SQL and simple to write (once you have checked the Hive QL manual).
However, Hive does not have many of the operators that SQL supports. For example:
select * from sales where country like 'EU~%';   -- Hive supports LIKE
But try the negative query as we would write it in SQL:
select * from sales where country not like 'EU~%';   -- Hive does not support NOT LIKE
This is just one example that I remember; there are more like it. To deal with these, Hive has alternatives such as WHERE NOT (...), as sketched below.
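A quick sketch of that workaround, using the same hypothetical sales table and an illustrative pattern (depending on your Hive version, NOT LIKE may also be accepted directly, but the rewrite below is the safe form):
-- Rewrite "country not like 'EU%'" by wrapping the LIKE in NOT (...).
SELECT *
FROM sales
WHERE NOT (country LIKE 'EU%');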
If your question is whether Hive has any PL/SQL support, the straight answer is no. But you can check the UDFs in Hive and also Pig on Hadoop.

How does one use SQLite in Perl 6?

I want to start dabbling in Perl 6. A large percentage of my programming involves SQLite databases. It looks like work has been put into using SQLite in Perl 6, but most of the info I can find is old and vague.
I see a "perl6-sqlite" module here, but it's marked as [old] and has very little to it. I've also seen references to a new DBI based on something to do with Java, but most of that talk is from last year and it's unclear whether there's something that works.
So is there currently an accepted way to use SQLite within Perl 6?
(Updated 2015-01): DBIish from https://github.com/perl6/DBIish/ has decent support for SQLite and PostgreSQL, and limited support for mysql.
The README shows how to use it with an SQLite backend.
