Cassandra COPY table TO csv doesn't produce output - cassandra-2.1

I want to copy the data of my table into a CSV file, but it never writes a single row. It just creates a 0 B file, and the process appears to hang in the cqlsh terminal.
If I end it with ^C, it just prints:
copy emp(empid,empname) to 'emp1.csv';
Using 1 child processes
Starting copy of ea_sc_ww_elf.emp with columns [empid, empname].
IOError:
(the IOError: line repeats several more times)
Is there any configuration that could cause this?
Please note: for testing, I created a simple employee table with 2 rows of data and only 2 columns, empid and empname.

I reinstalled Cassandra on a different server and tried to export the data to CSV; it worked fine.
All I can guess is that this might be related to some setting in cassandra.yaml, involving one of the below properties:
rpc_address
broadcast_rpc_address
I haven't had free time to experiment with this; I will leave a confirmation when I do.
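For reference, a minimal excerpt of the cassandra.yaml settings in question might look like the sketch below (the address is a placeholder). Note that when rpc_address is set to 0.0.0.0, Cassandra requires broadcast_rpc_address to be set to a concrete address, which is one way these two settings can interact badly with client tools:
# address to bind the RPC / native transport interface to
rpc_address: 0.0.0.0
# address advertised to clients; required when rpc_address is 0.0.0.0
broadcast_rpc_address: 192.168.1.10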

Try running cqlsh with the --debug flag. You will get more detailed information, which helps in troubleshooting.
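For example, assuming the node's address is 192.168.1.10 (a placeholder), something like:
$ cqlsh --debug 192.168.1.10
cqlsh> COPY ea_sc_ww_elf.emp (empid, empname) TO 'emp1.csv';
With --debug, the output typically includes the full traceback behind each IOError instead of just the bare message.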

Related

Push/Export large dataframe from R to Vertica database

I have a dataframe of 10M rows which needs to be uploaded from R back to a Vertica database.
The dbWriteTable() function from DBI runs into memory issues, and I have tried increasing memory to 16 GB via
options(java.parameters = c("-XX:+UseConcMarkSweepGC", "-Xmx16g"))
The process still runs into memory issues. I am planning to use Vertica's bulk COPY option to load the CSV file into the table.
I have created an empty table on Vertica.
When I execute the query
dbSendQuery(vertica, "COPY hpcom_usr.VM_test FROM LOCAL \'/opt/mount1/musoumit/MarketBasketAnalysis/Code/test.csv\' enclosed by \'\"\' DELIMITER \',\' direct REJECTED DATA \'./code/temp/rejected.txt\' EXCEPTIONS \'./code/temp/exceptions.txt\'")
I am running into this error.
Error in .verify.JDBC.result(r, "Unable to retrieve JDBC result set", :
Unable to retrieve JDBC result set
JDBC ERROR: [Vertica]JDBC A ResultSet was expected but not generated from query "COPY hpcom_usr.VM_test FROM LOCAL '/opt/mount1/musoumit/MarketBasketAnalysis/Code/test.csv' enclosed by '"' DELIMITER ',' direct REJECTED DATA './code/temp/rejected.txt' EXCEPTIONS './code/temp/exceptions.txt'". Query not executed.
Please help with what I'm doing wrong here.
Vertica also provides a STDIN option. Link
Please help me with how I can execute this.
My environment:
CentOS 7
R 3.6.3 (no RStudio here; I have to execute this from the CLI)
Tidyverse 1.0.x
Vertica driver 9.x
128 GB memory, 28-core system
Your problem is that you fire dbSendQuery(), which goes together with a following dbFetch() and a final dbClearResult(), but only for SQL query statements, those that actually return a result set.
Vertica's COPY <table> FROM [LOCAL] 'file.ext' ... command is treated as a DML command. And for those, as this documentation says ...
https://www.rdocumentation.org/packages/DBI/versions/0.5-1/topics/dbSendQuery
... you need to use dbSendStatement() for data manipulation statements.
Have a go at it that way. Good luck ...
dbSendUpdate(vertica, "COPY hpcom_usr.VM_test FROM LOCAL \'/opt/mount1/musoumit/MarketBasketAnalysis/Code/test.csv\' enclosed by \'\"\' DELIMITER \',\' direct REJECTED DATA \'./code/temp/rejected.txt\' EXCEPTIONS \'./code/temp/exceptions.txt\'")
instead of dbSendQuery did the trick for me.
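For completeness, a minimal sketch of the full DBI statement lifecycle with dbSendStatement(), assuming vertica is an open connection whose driver implements the modern DBI interface (with RJDBC, which the JDBC error above suggests, dbSendUpdate() as shown above is the equivalent):
library(DBI)

# Send the COPY as a statement, not a query: no result set is expected.
res <- dbSendStatement(vertica, "COPY hpcom_usr.VM_test
  FROM LOCAL '/opt/mount1/musoumit/MarketBasketAnalysis/Code/test.csv'
  ENCLOSED BY '\"' DELIMITER ',' DIRECT
  REJECTED DATA './code/temp/rejected.txt'
  EXCEPTIONS './code/temp/exceptions.txt'")
dbGetRowsAffected(res)  # number of rows the COPY loaded
dbClearResult(res)      # always release the statement handle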

What is the filepath that a "Read CSV" operator needs to read a file from RapidMiner Server?

I have a RapidMiner Server running on a VM (Ubuntu) on top of my Win10 machine.
I have a process that reads a .csv file and writes its contents to a MySQL database on a MySQL server, which also runs on the same VM.
The problem is that the Read CSV operator does not seem to be able to find the file.
Scenario 1.
When I use ../data/myFile.csv as the location name in the Read CSV operator
and run the process on the Server, I get: Failed to execute initialization process: Error executing process /apps/myApp/process/task_read_csv_to_db: The file 'java.io.FileNotFoundException: /root/../data/myFile.csv (No such file or directory)' does not exist.
Scenario 2.
When I use /apps/myApp/data/myFile.csv as the location name in the Read CSV operator
and run the process on the Server, I get: Failed to execute initialization process: Error executing process /apps/myApp/process/task_read_csv_to_db: The file 'java.io.FileNotFoundException: /apps/myApp/data/myFile.csv (No such file or directory)' does not exist.
What is the right filepath that I should give to the Read CSV operator?
Just to update with the answer: following David's suggestion, I ended up storing the .csv file outside of /rapidminer-server-home/data/repository, since every remote repository seems to be represented by an integer instead of its original name, which makes the actual full path of the file unusable.
I would say the issue is that the relative path may vary depending on the location of the Job Agent that is executing your process.
Is /apps/myApp/data/myFile.csv the correct path to the file? If not, I would suggest using the absolute path to the file. Hope this helps.
Best,
David

Importing database to 000webhost and receiving the following error: #1064

Error: - You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '#', 1) LIMIT 1' at line 2
Thanks in advance.
I had the same problem. When you are importing the whole DB, it's quite difficult to find the error.
Check your 'Option' column; the problem could be in dumping the data there.
But I solved it my own way: instead of importing the whole DB, I uploaded it in pieces. What you have to do is:
- Import each column manually.
- Then dump it manually.
There you go.

fast export unexplained failure

I have roughly 14 million records that I am attempting to export from a Teradata table to a file using a FastExport connection object.
There is no size limit for fast export files on our Linux system, and there is 1.2 TB of available space in the target directory.
The session fails, and gives the following errors:
READER_2_1_1 FEXP_87011 Process [16022] exited with status [12]
SDKS_38200 Partition-level [SOURCE_TABLE_NAME]: Plug-in #305400 failed in deinit()
I googled the error message and found this post:
Here
I followed the recommendations in the post to delete the .out file in the temp directory, delete the files that were partially written in the target directory, drop the error table, and delete the log file. This did not fix the issue, and the session still fails with the same error messages.
Try using the TPT Export plug-in instead. Also, you can try running this FastExport as a standalone script directly in your Unix environment.
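If you go the standalone route, a minimal FastExport script might look like the sketch below (the logon credentials, database, table, and file names are placeholders, and SESSIONS should be tuned for your system); it is run with the fexp utility:
.LOGTABLE workdb.fexp_restart_log;          /* restart log table */
.LOGON tdpid/your_user,your_password;
.BEGIN EXPORT SESSIONS 8;
.EXPORT OUTFILE /data/export/source_table.out MODE RECORD FORMAT TEXT;
SELECT * FROM source_db.SOURCE_TABLE_NAME;  /* the 14M-row table */
.END EXPORT;
.LOGOFF;
Invoke it with, for example: fexp < export_job.fx > export_job.log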

Creating a Dump File for Oracle table

I intend to export one table from my database as a .dmp file. This is what I am doing:
expdp SYSTEM/manager@UATDB FILE=F:\LLT.dmp log=F:\llt.log tables=TBAADM.LLT
The error I am getting is:
LRM-00101: unknown parameter name 'FILE'
What is my mistake? Please help.
OK guys, I found the answer. You first need to create a directory object pointing to where the dmp file is going to be. Then export like this:
expdp USERNAME@server_ip/SERVICE_NAME
DIRECTORY=DIR_NAME DUMPFILE=FILE.dmp TABLES=SCHEMA.TABLE_NAME
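For reference, creating the directory object and granting access to it might look like this, run as a privileged user (DIR_NAME, the path, and USERNAME are placeholders):
sqlplus / as sysdba
SQL> CREATE OR REPLACE DIRECTORY DIR_NAME AS 'F:\dumps';
SQL> GRANT READ, WRITE ON DIRECTORY DIR_NAME TO USERNAME;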
And you can also use the following. It is a shell command, so execute it in the shell. Once the data export is complete, search the dpdump folder in the Oracle installation directory, where you will find your exported file along with the log.
expdp userid/pwd schemas=dbschema dumpfile=file.dmp logfile=file.log
