I need to read a 10GB fixed-width file into a data frame. How can I do it using Spark in R?
Suppose my text data is the following:
text <- c("0001BRAjonh ",
"0002USAmarina ",
"0003GBPcharles")
I want the first 4 characters to go into a column "ID"; characters 5-7 into a column "Country"; and characters 8-14 into a column "Name".
I would use the read.fwf function if the dataset were small, but that is not the case here.
I can read the file as a text file using the sparklyr::spark_read_text function, but I don't know how to map the values of the file into the columns of a data frame properly.
EDIT: Forgot to say that substring starts at 1 and the arrays start at 0, because reasons.
Going through and adding the code I talked about in the comment above.
The process is dynamic and is driven by a Hive table called Input_Table. The table has 5 columns: Table_Name, Column_Name, Column_Ordinal_Position, Column_Start_Position, and Column_Length. It is an external table, so any user can change it or drop and add files in the folder location. I quickly rebuilt this from scratch rather than copying the actual code; does everything make sense?
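For reference, a metadata table like the one described could be created with Hive DDL along these lines; this is only a sketch, and the column types, delimiter, and location are assumptions rather than the original definition:
// Assumed DDL for the metadata table; adjust types, format, and location to your setup
spark.sql("""
  CREATE EXTERNAL TABLE IF NOT EXISTS Input_Table (
    Table_Name              STRING,
    Column_Name             STRING,
    Column_Ordinal_Position INT,
    Column_Start_Position   INT,
    Column_Length           INT
  )
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  LOCATION '/path/to/input_table'
""")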
//Load the input DataFrame and the Hive table. For the Hive table we make sure to take only the correct columns, in the correct order.
val inputDF = spark.read.format(recordFormat).option("header","false").load(folderLocation + "/" + tableName + "." + tableFormat).toDF("Odd_Long_Name")
val inputSchemaDF = spark.sql("select * from Input_Table where Table_Name = '" + tableName + "'").sort($"Column_Ordinal_Position")
//Build all the arrays from the columns; .rdd.map(...).collect turns a DataFrame column into an array of strings, which I can then iterate through.
val columnNameArray = inputSchemaDF.selectExpr("Column_Name").rdd.map(x=>x.mkString).collect
val columnStartArray = inputSchemaDF.selectExpr("Column_Start_Position").rdd.map(x=>x.mkString).collect
val columnLengthArray = inputSchemaDF.selectExpr("Column_Length").rdd.map(x=>x.mkString).collect
//Declare the iterator as well as the other variables that are meant to be overwritten
var columnAllocationIterator = 1
var localCommand = ""
var commandArray = Array("")
//Loop as many times as there are columns in the input table
while (columnAllocationIterator <= columnNameArray.length) {
//Overwrite the string command with the new command; I thought Odd_Long_Name was too accurate not to put in the code
localCommand = "substring(Odd_Long_Name, " + columnStartArray(columnAllocationIterator-1) + ", " + columnLengthArray(columnAllocationIterator-1) + ") as " + columnNameArray(columnAllocationIterator-1)
//The first time through the loop this overwrites the command array; otherwise it just appends
if (columnAllocationIterator==1) {
commandArray = Array(localCommand)
} else {
commandArray = commandArray ++ Array(localCommand)
}
//I really like iterating my iterators like this
columnAllocationIterator = columnAllocationIterator + 1
}
//Run all elements of the string array independently against the table
val finalDF = inputDF.selectExpr(commandArray:_*)
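For the concrete layout in the original question (ID = characters 1-4, Country = 5-7, Name = 8-14), the loop above generates the equivalent of this static version. A minimal sketch, assuming the file is read with spark.read.text, which exposes a single string column named value; the path is hypothetical, and the same Spark SQL substring expressions apply if you drive Spark from sparklyr:
// Read each fixed-width record as one line of text
val rawDF = spark.read.text("/path/to/fixed_width_file.txt")
// substring(col, pos, len) is 1-based, matching the layout in the question
val parsedDF = rawDF.selectExpr(
  "substring(value, 1, 4) as ID",      // characters 1-4
  "substring(value, 5, 3) as Country", // characters 5-7
  "substring(value, 8, 7) as Name")    // characters 8-14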
I am new to Progress 4GL. Below is the query I use to add all fields from a table to a dynamic temp-table except a few fields, but I am not sure how to add only the required fields to the dynamic temp-table. Please help me modify the query I shared.
/* p-ttdyn2.p - a join of 2 tables */
DEFINE VARIABLE tth4 AS HANDLE.
DEFINE VARIABLE btth4 AS HANDLE.
DEFINE VARIABLE qh4 AS HANDLE.
DEFINE VARIABLE bCust AS HANDLE.
DEFINE VARIABLE bOrder AS HANDLE.
DEFINE VARIABLE i AS INTEGER.
DEFINE VARIABLE fldh AS HANDLE EXTENT 15.
bCust = BUFFER customer:HANDLE.
bOrder = BUFFER order:HANDLE.
CREATE TEMP-TABLE tth4.
tth4:ADD-FIELDS-FROM(bCust,"address,address2,phone,city,comments").
tth4:ADD-FIELDS-FROM(bOrder,"cust-num,carrier,instructions,PO,terms").
tth4:TEMP-TABLE-PREPARE("CustOrdJoinTT").
btth4 = tth4:DEFAULT-BUFFER-HANDLE.
FOR EACH customer WHERE cust.cust-num < 6, EACH order OF customer:
btth4:BUFFER-CREATE.
btth4:BUFFER-COPY(bCust).
btth4:BUFFER-COPY(bOrder).
END.
/* Create Query */
CREATE QUERY qh4.
qh4:SET-BUFFERS(btth4).
qh4:QUERY-PREPARE("for each CustOrdJoinTT").
qh4:QUERY-OPEN.
REPEAT WITH FRAME zz DOWN:
qh4:GET-NEXT.
IF qh4:QUERY-OFF-END THEN LEAVE.
REPEAT i = 1 TO 15:
fldh[i] = btth4:BUFFER-FIELD(i).
DISPLAY fldh[i]:NAME FORMAT "x(15)"
fldh[i]:BUFFER-VALUE FORMAT "x(20)".
END.
END.
btth4:BUFFER-RELEASE.
DELETE OBJECT tth4.
DELETE OBJECT qh4.
ADD-FIELDS-FROM only supports excluding fields that are not needed. Instead you can use ADD-LIKE-FIELD multiple times:
CREATE TEMP-TABLE tth4.
tth4:ADD-LIKE-FIELD("address", "customer.address").
tth4:ADD-LIKE-FIELD("address2", "customer.address2").
tth4:ADD-LIKE-FIELD("phone", customer.phone").
...
tth4:ADD-LIKE-FIELD("cust-num", "Order.cust-num").
...
tth4:TEMP-TABLE-PREPARE("CustOrdJoinTT").
btth4 = tth4:DEFAULT-BUFFER-HANDLE.
Depending on your use case, you can also invert the required field list to an except field list:
var handle ht,hb.
var longchar lcjson.
function invertFields returns character (
i_hb as handle,
i_crequired as char
):
var char cexcept,cfield.
var int ic.
do ic = 1 to i_hb:num-fields:
cfield = i_hb:buffer-field( ic ):name.
if lookup( cfield, i_crequired ) = 0 then
cexcept = cexcept + ',' + cfield.
end.
return substring( cexcept, 2 ).
end function.
create temp-table ht.
ht:add-fields-from(
buffer customer:handle,
invertFields( buffer customer:handle, "CustNum,Name" )
).
ht:temp-table-prepare( 'tt' ).
hb = ht:default-buffer-handle.
hb:buffer-create().
assign
hb::CustNum = 1
hb::Name = 'test'
.
hb:write-json( 'longchar', lcjson, true ).
message string( lcjson ).
https://abldojo.services.progress.com/?shareId=624993253fb02369b25437c4
I want to insert a DataFrame into a table using ODBC.jl. The table already exists, and it seems that I can't use the ODBC.load function with it (even with append=true).
Usually I insert DataFrames with copyIn by loading the DataFrame as a CSV, but with just ODBC it seems I can't do that.
The last thing I found is:
stmt = ODBC.prepare(dsn, "INSERT INTO cool_table VALUES(?, ?, ?)")
for row = 1:size(df, 1)
ODBC.execute!(stmt, [df[row, x] for x = 1:size(df, 2)])
end
But this is row by row, and it takes incredibly long to insert everything.
I also tried to do it myself like this:
_prepare_field(x::Any) = x
_prepare_field(x::Missing) = ""
_prepare_field(x::AbstractString) = string(''', x, ''')
row_names = join(string.(Tables.columnnames(table)), ",")
row_strings = imap(Tables.eachrow(table)) do row
join((_prepare_field(x) for x in row), ",") * "\n"
end
query = "INSERT INTO $tableName($row_names) VALUES $row_strings"
#debug "Inserting $(nrow(df)) in $tableName)"
DBInterface.execute(db.conn, query)
But it throws an error because of some "," that is not at the right place, at something like column 9232123,
which I can't find because I have too many lines. I guess the _prepare_field function doesn't cover all the possible string escapes, but I can't find another way.
Did I miss something? Is there an easier way to do it?
Thank you
In a U-SQL script I must extract a value from a file into a rowset and then use it to form the name of the output file. How can I get the value from the rowset?
In detail:
I have 2 input files: a CSV file with a set of fields, and a dictionary file. The first file has a name like ****ClientCode****.csv. The second file, the dictionary, has 2 fields with the mapping ClientCode - ClientCode2. My task is to extract the ClientCode value from the file name, get ClientCode2 from the dictionary, insert it as a field in the output file (implemented), and, moreover, form the name of the output file as ****ClientCode2****.csv.
Dictionary csv file has the content:
OldCode NewCode
6HAA Alfa
CCVV Beta
CVXX gamma
? Davis
The question is: how do I get ClientCode2 into a scalar variable so that I can write an expression for the output file name?
DECLARE @inputFile string = "D:/DFS_SSC_Automation/Tasks/FundInfo/ESP_FAD_GL_6HAA_20170930.txt"; // '6HAA' is the ClientCode here, which is mapped to another code in ClientCode_KVP.csv
DECLARE @outputFile string = "D:/DFS_SSC_Automation/Tasks/FundInfo/ClientCode_sftp_" + // 'ClientCode' should be replaced with the code from the mapping in ClientCode_KVP.csv
DateTime.Now.ToString("yyyyMMdd") + "_" +
DateTime.Now.ToString("HHmmss") + ".csv";
DECLARE @dictionaryFile string = "D:/DFS_SSC_Automation/ClientCode_KVP.csv";
@dict =
EXTRACT [OldCode] string,
[NewCode] string
FROM @dictionaryFile
USING Extractors.Text(skipFirstNRows : 1, delimiter : ',');
@theCode =
SELECT Path.GetFileNameWithoutExtension(@inputFile).IndexOf([OldCode]) >= 0 ? 1 : 3 AS [CodeExists],
[NewCode]
FROM @dict
UNION
SELECT *
FROM(
VALUES
(
2,
""
)) AS t([CodeExists],[NewCode]);
@code =
SELECT [NewCode]
FROM @theCode
ORDER BY [CodeExists]
FETCH 1 ROWS;
@GLdata =
EXTRACT [ASAT] string,
[ASOF] string,
[BASIS_INDICATOR] string,
[CALENDAR_DATE] string,
[CR_EOP_AMOUNT] string,
[DR_EOP_AMOUNT] string,
[FUND_ID] string,
[GL_ACCT_TYPE_IND] string,
[TRANS_CLIENT_FUND_NUM] string
FROM @inputFile
USING Extractors.Text(delimiter : '|', skipFirstNRows : 1);
// Prepare output dataset
@FundInfoGL =
SELECT "" AS [AccountPeriodEnd],
"" AS [ClientCode],
[FUND_ID] AS [FundCode],
SUM(GL_ACCT_TYPE_IND == "A"? System.Convert.ToDecimal(DR_EOP_AMOUNT) : 0) AS [NetValueOtherAssets],
SUM(GL_ACCT_TYPE_IND == "L"? System.Convert.ToDecimal(CR_EOP_AMOUNT) : 0) AS [NetValueOtherLiabilities],
0.0000 AS [NetAssetsOfSeries]
FROM @GLdata
GROUP BY FUND_ID;
// NetAssetsOfSeries calculation
@FundInfoGLOut =
SELECT [AccountPeriodEnd],
[NewCode] AS [ClientCode],
[FundCode],
Convert.ToString([NetValueOtherAssets]) AS [NetValueOtherAssets],
Convert.ToString([NetValueOtherLiabilities]) AS [NetValueOtherLiabilities],
Convert.ToString([NetValueOtherAssets] - [NetValueOtherLiabilities]) AS [NetAssetsOfSeries]
FROM @FundInfoGL
CROSS JOIN @code;
// Output
OUTPUT @FundInfoGLOut
TO @outputFile
USING Outputters.Text(outputHeader : true, delimiter : '|', quoting : false);
As David points out: You cannot assign query results to scalar variables.
However, we have a dynamic partitioned output feature in private preview right now that will give you the ability to generate file names based on column values. Please contact me if you want to try it out.
You can't. Please see Convert Rowset variables to scalar value.
You may still be able to achieve your ultimate goal in a different manner. Please consider re-writing your post with clear & concise language, small dataset, expected output, and a very minimal amount of code needed to repro - remove all details and nuances that aren't necessary to create a test case.
I am quite new to U-SQL and am trying to solve the following:
str1=\global\europe\Moscow\12345\File1.txt
str2=\global.bee.com\europe\Moscow\12345\File1.txt
str3=\global\europe\amsterdam\54321\File1.Rvt
str4=\global.bee.com\europe\amsterdam\12345\File1.Rvt
Case 1:
How do I get just "\europe\Moscow\12345\File1.txt" from the string variables str1 and str2? I want to take "\europe\Moscow\12345\File1.txt" from str1 and str2, then group by "\global\europe\Moscow\12345" and take the count of distinct files under the path "\europe\Moscow\12345\".
So the output would be something like this:
distinct_filesby_Location_Date
To solve the above case I tried the U-SQL code below, but I am not quite sure whether I am writing the right script or not:
@inArray = SELECT new SQL.ARRAY<string>(
filepath.Contains("\\europe")) AS path
FROM @t;
@filesbyloc =
SELECT [ID],
path.Trim() AS path1
FROM @inArray
CROSS APPLY
EXPLODE(path1) AS r(location);
OUTPUT @filesbyloc
TO "/Outputs/distinctfilesbylocation.tsv"
USING Outputters.Tsv();
Any help would be greatly appreciated.
One approach to this is to put all the strings you want to work with in a file, e.g. strings.txt, and save it in your U-SQL input folder. Also have a file with the cities you want to match, e.g. cities.txt. Then try the following U-SQL script:
@input =
EXTRACT filepath string
FROM "/input/strings.txt"
USING Extractors.Tsv();
// Give the strings a row-number
@input =
SELECT ROW_NUMBER() OVER() AS rn,
filepath
FROM @input;
// Get the cities
@cities =
EXTRACT city string
FROM "/input/cities.txt"
USING Extractors.Tsv();
// Ensure there is a lower-case version of city for matching / joining
@cities =
SELECT city,
city.ToLower() AS lowercase_city
FROM @cities;
// Explode the filepath into separate rows
@working =
SELECT rn,
new SQL.ARRAY<string>(filepath.Split('\\')) AS pathElement
FROM @input AS i;
// Explode the filepath string, also changing to lower case
@working =
SELECT rn,
x.pathElement.ToLower() AS pathElement
FROM @working AS i
CROSS APPLY
EXPLODE(pathElement) AS x(pathElement);
// Create the output query, joining on lower case city name, display, normal case name
@output =
SELECT c.city,
COUNT( * ) AS records
FROM @working AS w
INNER JOIN
@cities AS c
ON w.pathElement == c.lowercase_city
GROUP BY c.city;
// Output the result
OUTPUT @output TO "/output/output.txt"
USING Outputters.Tsv();
//OUTPUT @working TO "/output/output2.txt"
//USING Outputters.Tsv();
My results:
HTH
Taking the liberty of formatting your input file as a TSV file, and not knowing all the column semantics, here is one way to write your query. Please note that I made the assumptions described in the comments.
@d =
EXTRACT path string,
user string,
num1 int,
num2 int,
start_date string,
end_date string,
flag string,
year int,
s string,
another_date string
FROM @"\users\temp\citypaths.txt"
USING Extractors.Tsv(encoding: Encoding.Unicode);
// I assume that you have only one DateTime format culture in your file.
// If it becomes dependent on the region or city as expressed in the path, you need to add a lookup.
@d =
SELECT new SqlArray<string>(path.Split('\\')) AS steps,
DateTime.Parse(end_date, new CultureInfo("fr-FR", false)).Date.ToString("yyyy-MM-dd") AS end_date
FROM @d;
// This assumes your paths have a fixed formatting/mapping into the city
@d =
SELECT steps[4].ToLowerInvariant() AS city,
end_date
FROM @d;
@res =
SELECT city,
end_date,
COUNT( * ) AS count
FROM @d
GROUP BY city,
end_date;
OUTPUT @res
TO "/output/result.csv"
USING Outputters.Csv();
// Now let's pivot the date and count.
@res2 =
SELECT city, MAP_AGG(end_date, count) AS date_count
FROM @res
GROUP BY city;
// This assumes you know exactly which dates you are looking for. Otherwise keep it in the first file representation.
@res2 =
SELECT city,
date_count["2016-11-21"] AS [2016-11-21],
date_count["2016-11-22"] AS [2016-11-22]
FROM @res2;
OUTPUT @res2
TO "/output/res2.csv"
USING Outputters.Csv();
UPDATE AFTER RECEIVING SOME EXAMPLE DATA IN PRIVATE EMAIL:
Based on the data you sent me (after the extraction and counting of the cities, which you can do either with the join outlined in Bob's answer, where you need to know your cities in advance, or by taking the string from the city position in the path as in my example, where you do not need to know the cities in advance), you want to pivot the rowset city, count, date into the rowset date, city1, city2, ..., where each row contains the date and the counts for each city.
You could easily adjust my example above by changing the calculation of @res2 in the following way:
// Now let's pivot the city and count.
@res2 = SELECT end_date, MAP_AGG(city, count) AS city_count
FROM @res
GROUP BY end_date;
// This assumes you know exactly which cities you are looking for. Otherwise keep it in the first file representation or use script generation (see below).
@res2 =
SELECT end_date,
city_count["istanbul"] AS istanbul,
city_count["midlands"] AS midlands,
city_count["belfast"] AS belfast,
city_count["acoustics"] AS acoustics,
city_count["amsterdam"] AS amsterdam
FROM @res2;
Note that, as in my example, you will need to enumerate all cities in the pivot statement by looking them up in the SQL.MAP column. If they are not known a priori, you will have to first submit a script that creates the script for you. For example, assuming your city, count, date rowset is in a file (or you could just duplicate the statements that generate the rowset in both the generation script and the generated script), you could write it as the following script. Then take the result and submit it as the actual processing script.
// Get the rowset (could also be the actual calculation from the original file)
@in = EXTRACT city string, count int?, date string
FROM "/users/temp/Revit_Last2Months_Results.tsv"
USING Extractors.Tsv();
// Generate the statements for the preparation of the data before the pivot
@stmts = SELECT * FROM (VALUES
( "@s1", "EXTRACT city string, count int?, date string FROM \"/users/temp/Revit_Last2Months_Results.tsv\" USING Extractors.Tsv();"),
( "@s2", "SELECT date, MAP_AGG(city, count) AS city_count FROM @s1 GROUP BY date;" )
) AS T( stmt_name, stmt);
// Now generate the statement doing the pivot
@cities = SELECT DISTINCT city FROM @in;
@pivots =
SELECT "@s3" AS stmt_name, "SELECT date, "+String.Join(", ", ARRAY_AGG("city_count[\""+city+"\"] AS ["+city+"]"))+ " FROM @s2;" AS stmt
FROM @cities;
// Now generate the OUTPUT statement after the pivot. Note that the OUTPUT does not have a statement name.
@output =
SELECT "OUTPUT @s3 TO \"/output/pivot_gen.tsv\" USING Outputters.Tsv();" AS stmt
FROM (VALUES(1)) AS T(x);
// Now put the statements into one rowset. Note that nulls order high in U-SQL
@result =
SELECT stmt_name, "=" AS assign, stmt FROM @stmts
UNION ALL SELECT stmt_name, "=" AS assign, stmt FROM @pivots
UNION ALL SELECT (string) null AS stmt_name, (string) null AS assign, stmt FROM @output;
// Now output the statements in order of the stmt_name
OUTPUT @result
TO "/pivot.usql"
ORDER BY stmt_name
USING Outputters.Text(delimiter:' ', quoting:false);
Now download the file and submit it.
How can I get a single-row result (e.g. in the form of a table/array) back from a SQL statement, using Lua SQLite (LuaSQLite3)? For example this one:
SELECT * FROM sqlite_master WHERE name ='myTable';
So far I have noted:
using "nrows"/"rows" gives an iterator back
using "exec" doesn't seem to give a result back (?)
Specific questions are then:
Q1 - How do I get a single row (say, the first row) back?
Q2 - How do I get the row count? (e.g. num_rows_returned = db:XXXX(sql))
In order to get a single row use the db:first_row method. Like so.
row = db:first_row("SELECT `id` FROM `table`")
print(row.id)
In order to get the row count use the SQL COUNT statement. Like so.
row = db:first_row("SELECT COUNT(`id`) AS count FROM `table`")
print(row.count)
EDIT: Ah, sorry for that. Here are some methods that should work.
You can also use db:nrows. Like so.
rows = db:nrows("SELECT `id` FROM `table`")
row = rows[1]
print(row.id)
We can also modify this to get the number of rows.
rows = db:nrows("SELECT COUNT(`id`) AS count FROM `table`")
row = rows[1]
print(row.count)
Here is a demo of getting the returned count:
> require "lsqlite3"
> db = sqlite3.open":memory:"
> db:exec "create table foo (x,y,z);"
> for x in db:urows "select count(*) from foo" do print(x) end
0
> db:exec "insert into foo values (10,11,12);"
> for x in db:urows "select count(*) from foo" do print(x) end
1
>
Just loop over the iterator you get back from rows (or whichever function you use), but put a break at the end of the loop body so you only iterate once.
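A minimal sketch of that pattern (assuming lsqlite3's db:nrows, which yields each row as a table keyed by column name):
local first_row
for row in db:nrows("SELECT * FROM sqlite_master WHERE name = 'myTable'") do
  first_row = row
  break  -- only take the first row
end
print(first_row and first_row.name)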
Getting the count is all about using SQL. You compute it with the SELECT statement:
SELECT count(*) FROM ...
This will return one row containing a single value: the number of rows in the query.
This is similar to what I'm using in my project and works well for me.
local query = "SELECT content FROM playerData WHERE name = 'myTable' LIMIT 1"
local queryResultTable = {}
local queryFunction = function(userData, numberOfColumns, columnValues, columnTitles)
for i = 1, numberOfColumns do
queryResultTable[columnTitles[i]] = columnValues[i]
end
end
db:exec(query, queryFunction)
for k,v in pairs(queryResultTable) do
print(k,v)
end
You can even concatenate values into the query so it can be placed inside a generic method/function.
local query = "SELECT * FROM ZQuestionTable WHERE ConceptNumber = "..conceptNumber.." AND QuestionNumber = "..questionNumber.." LIMIT 1"