Using a SQL-style approach to Liquibase changesets (which is our code style; we don't use XML), I am trying to load a CSV file with the following SQL changeset:
-- changeset user:insert-prices-data-temp-table
LOAD DATA LOCAL INFILE 'foo/src/main/resources/liquibase/changelogs/2021/prices.csv'
INTO TABLE prices_temp
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
When deploying the WAR file to WildFly, the following exception is thrown:
java.io.FileNotFoundException: foo/src/resources/liquibase/changelogs/2021/prices.csv (No such file or directory)
    at java.io.FileInputStream.open0(Native Method)
    at java.io.FileInputStream.open(FileInputStream.java:195)
    at java.io.FileInputStream.<init>(FileInputStream.java:138)
    at java.io.FileInputStream.<init>(FileInputStream.java:93)
    at com.mysql.jdbc.MysqlIO.sendFileToServer(MysqlIO.java:3772)
The master changelog file references the 2021 directory; the SQL and CSV files live in the same directory.
<includeAll path="liquibase/changelogs/2021/" filter="xml" errorIfMissingOrEmpty="true" />
I have tried the following other paths, but they all still yield a FileNotFoundException:
prices.csv
liquibase/changelogs/2021/prices.csv
WEB-INF/classes/liquibase/changelogs/2021/CCC-220-marking-time-prices.csv
(Absolute path) /home/xxxx/Work/foo-service/foo/src/main/resources/liquibase/changelogs/2021/prices.csv
Liquibase Version: 3.5.1
Wildfly Jboss Version : 21
I have checked that the CSV file is present in the WAR file.
Any ideas how to fix this?
The problem is that you are trying to use the MySQL statement LOAD DATA LOCAL INFILE, which knows nothing about your classpath. It looks at the filesystem, and that path does not exist there. Even if you provide something like yourapp.jar!liquibase/changelogs/2021/prices.csv, it won't be able to read that file. You will need to pull prices.csv out of your application onto the filesystem and point the MySQL statement at that location.
Or you can use Liquibase's loadData change type, if that helps.
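For example, a loadData changeset might look roughly like the sketch below. It has to live in an XML (or YAML/JSON) changelog, since loadData is not available in formatted SQL changelogs, and the column names here are only illustrative; adjust them to match prices_temp. The file path is resolved through Liquibase's resource accessor (i.e. the classpath inside the WAR), not by the MySQL server, which is what avoids the FileNotFoundException.
<changeSet id="insert-prices-data-temp-table" author="user">
    <loadData tableName="prices_temp"
              file="liquibase/changelogs/2021/prices.csv"
              separator=","
              quotchar='"'>
        <!-- illustrative columns; match these to the real prices_temp schema -->
        <column name="price" type="NUMERIC"/>
        <column name="valid_from" type="DATE"/>
    </loadData>
</changeSet>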
Related
I have a RapidMiner Server running on a VM (Ubuntu) on top of my Win10 machine.
I have a process to read a .csv file and write its contents on a MySQL database on a MySQL Server which also runs on the same VM.
The problem is that the read file operator does not seem to be able to find the file.
Scenario 1:
When I use ../data/myFile.csv as the location name in the Read CSV operator and run the process on the Server, I get: Failed to execute initialization process: Error executing process /apps/myApp/process/task_read_csv_to_db: The file 'java.io.FileNotFoundException: /root/../data/myFile.csv (No such file or directory)' does not exist.
Scenario 2:
When I use /apps/myApp/data/myFile.csv as the location name in the Read CSV operator and run the process on the Server, I get: Failed to execute initialization process: Error executing process /apps/myApp/process/task_read_csv_to_db: The file 'java.io.FileNotFoundException: /apps/myApp/data/myFile.csv (No such file or directory)' does not exist.
What is the right filepath that I should give to the Read CSV operator?
Just to update with the answer: following David's suggestion, I ended up storing the .csv file outside of /rapidminer-server-home/data/repository, since every remote repository seems to be represented by an integer instead of its original name, which makes the actual full path of the file unusable.
I would say the issue is that the relative path may vary depending on the location of the Job Agent that is executing your process.
Is /apps/myApp/data/myFile.csv the correct path to the file? If not, I would suggest using the absolute path to the file. Hope this helps.
Best,
David
I am trying to use Lua to access Redis values from nginx. When I execute the Lua files on the command line everything is OK; I am able to read and write values to Redis. But when I try to execute the same files from nginx, by requesting a location in which the access_by_lua directive is configured, the following error is logged in the error log file:
no field package.preload['socket']
no file '/home/sivag/redis/redis-lua/src/socket.lua'
no file 'src/socket.lua'
no file '/home/sivag/lua/socket.lua'
no file '/opt/openresty/lualib/socket.so'
no file './socket.so'
no file '/usr/local/lib/lua/5.1/socket.so'
no file '/opt/openresty/luajit/lib/lua/5.1/socket.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
What is the reason for this, and how can I resolve it?
In my case I just needed to install the lua-socket package, as the socket library is not built into the default Lua installation like it is in some other languages.
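For example (the package names below are typical, but vary by system):
# Debian/Ubuntu package
sudo apt-get install lua-socket
# or via LuaRocks
luarocks install luasocket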
You get this error because your code executes the call require("socket").
This call searches for a file with that name in several directories. If successful, the content is executed as Lua code. If it is not successful, you end up with your error message.
In order to fix this you have to add the path containing the file either to the environment variable LUA_PATH or to the global table package.path before you require the file.
Lua will replace ? with the name you give to require().
For example:
-- thisPathContainsTheLuaFile is assumed to end with a path separator, e.g. "/home/sivag/lua/"
package.path = package.path .. ";" .. thisPathContainsTheLuaFile .. "?.lua"
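A minimal sketch of what this can look like for the LuaSocket case (the install directories below are assumptions; note that socket also has a compiled part, so package.cpath usually needs the same treatment):
-- assumed locations of a lua-socket install; adjust to your system
local lua_dir = "/usr/local/share/lua/5.1/"
local so_dir  = "/usr/local/lib/lua/5.1/"

package.path  = package.path  .. ";" .. lua_dir .. "?.lua"
package.cpath = package.cpath .. ";" .. so_dir  .. "?.so"

local socket = require("socket")  -- now found via the extended paths
print(socket._VERSION)
In an nginx/OpenResty setup the same paths can also be set with the lua_package_path and lua_package_cpath directives in nginx.conf.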
Please read:
http://www.lua.org/manual/5.3/manual.html#pdf-require
https://www.lua.org/pil/8.1.html
I have roughly 14 million records that I am attempting to export from a Teradata table to a file using a FastExport connection object.
There is no size limit for fast export files on our Linux system, and there is 1.2 TB of available space in the target directory.
The session fails, and gives the following errors:
READER_2_1_1 FEXP_87011 Process [16022] exited with status [12]
SDKS_38200 Partition-level [SOURCE_TABLE_NAME]: Plug-in #305400 failed in deinit()
I googled the error message, and found this post:
Here
I followed the recommendations in the post: deleting the .out file in the temp directory, deleting the files that were partially written in the target directory, dropping the error table, and deleting the log file. This did not fix the issue, and the session still fails with the same error messages.
Try using the TPT Export plug-in instead. You can also try executing this FastExport directly in your Unix environment as a standalone FastExport (fexp) script.
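A rough sketch of such a standalone script (the database, log table, output path and credentials are placeholders; it is run with the fexp utility):
.LOGTABLE mydb.fexp_export_log;
.LOGON tdpid/username,password;
.BEGIN EXPORT SESSIONS 8;
.EXPORT OUTFILE /target/dir/source_table.dat
    MODE RECORD FORMAT TEXT;
SELECT * FROM mydb.SOURCE_TABLE_NAME;
.END EXPORT;
.LOGOFF;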
I am trying to import an Oracle dump into Oracle 11g XE using the command below:
imp system/manager#localhost file=/home/madhu/test_data/oracle/schema_only.sql full=y
I am getting the following:
IMP-00037: Character set marker unknown
IMP-00000: Import terminated unsuccessfully
Can anyone please help me?
You received the IMP-00037 error because of a problem with the export file: I'd suspect either your dump file is corrupted or it was not created by the exp utility.
If the issue occurred because of a corrupted dump file, then there is no choice other than obtaining an uncorrupted dump file. Use the impdp utility to import if you used the expdp utility to prepare the dump file.
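For instance, if the dump was produced by expdp, the import would look roughly like this (the directory object and dump file name are assumptions; DATA_PUMP_DIR exists by default in 11g XE):
impdp system/manager@localhost directory=DATA_PUMP_DIR dumpfile=schema_only.dmp full=y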
The following links may be helpful for trying other options:
https://community.oracle.com/thread/870104?start=0&tstart=0
https://community.oracle.com/message/734478
If you are not sure which command (exp or expdp) was used, you could check the log file that was created during the dump export. It contains the exact command that was executed to prepare the dump file.
We are migrating from CDH3 to CDH4, and as part of this migration we are moving all the jobs that we have on CDH3. We have noticed one critical issue: when a workflow is executed through Oozie to run a Python script that internally invokes a Hive query (hive -e {query}), the query adds a custom jar using add jar {LOCAL PATH FOR JAR} and creates a temporary function for the custom UDF. Everything looks OK up to that point. But when the query starts executing with the custom UDF function, it fails with a Distributed Cache FileNotFoundException: it looks for the jar in the HDFS path instead of the local path.
I am not sure if I am missing some configuration here.
Exception trace:
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated.
Please use org.apache.hadoop.log.metrics.EventCounter in all the
log4j.properties files. Execution log at:
/tmp/yarn/yarn_20131107020505_79b41443-b9f4-4d36-a0eb-4f0d79cd3ce9.log
java.io.FileNotFoundException: File does not exist:
hdfs://aa.bb.com:8020/opt/nfsmount/mypath/custom.jar
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:824)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
..... .....
Any help on this is highly appreciated.
Regards,
GHK.
There are a few options. All the required jars should be on the classpath before you run the Hive query.
Option 1: Add your custom jar with <file>/hdfs/path/to/your/jar</file> in the Oozie workflow action, as in the sketch below.
Option 2: Use the --auxpath option when calling your Hive script from Python, e.g. hive --auxpath /local/path/to/your.jar -e {query}
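A minimal sketch of option 1, assuming the Python script is launched from an Oozie shell action (the action, script and jar names are illustrative; the <file> elements ship the script and the jar into the job's working directory through the distributed cache, and the #custom.jar suffix sets the local symlink name):
<action name="run-hive-query">
    <shell xmlns="uri:oozie:shell-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <exec>run_query.py</exec>
        <file>scripts/run_query.py</file>
        <file>/hdfs/path/to/your/custom.jar#custom.jar</file>
    </shell>
    <ok to="end"/>
    <error to="fail"/>
</action>
Inside the script, add jar custom.jar then resolves against the job's working directory instead of the HDFS path shown in the stack trace.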