I have a psv file with some config for my program, and it looks really bad. When I save this config from kdb, all the whitespace is removed and it looks like this:
column1|column2|column3
somevalue1|somevalue2|somevalue3
somevalue4|somevalue5|somevalue6
and I want it to look like this when I open it with a text editor like Notepad or Visual Studio Code:
column1 |column2 |column3
somevalue1 |somevalue2 |somevalue3
somevalue4 |somevalue5 |somevalue6
This psv file has 20000 rows so please don't tell me I have to do this manually.
The reason I need this spacing is that sometimes my colleagues have to open the file to modify just one thing, and it is much more readable with the spacing.
I work on Linux and I know kdb, Python and R, so is there anything in those languages that could help me?
This is quite crude but this q function will take an existing psv file and pad it out:
pad:{m:max each count each'a:flip"|"vs/:read0 x;x 0:"|"sv/:flip m$a}
It works by taking the max string length of each column and padding every value in that column to the same width using $ (pad). The columns are then stitched back together and saved.
Taking this example file contents:
column1|column2|column3
somevalue1|somevalue2|somevalue3
somevalue4|somevalue5|somevalue6
somevalue7|somevalue8|somevalue9
Passing through pad in an open q session:
pad `:sample.psv
Gives the result:
column1 |column2 |column3
somevalue1|somevalue2|somevalue3
somevalue4|somevalue5|somevalue6
somevalue7|somevalue8|somevalue9
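Since the question mentions knowing Python, here is a rough equivalent of the same idea in Python, as a sketch only: it assumes the whole file fits in memory, that every row has the same number of columns, and uses sample.psv as a placeholder file name.
# Pad every column of a pipe-separated file to the width of its widest value.
path = "sample.psv"  # placeholder name
with open(path, encoding="utf-8") as f:
    rows = [line.rstrip("\n").split("|") for line in f]
# Maximum width of each column across all rows (header included).
widths = [max(len(row[i]) for row in rows) for i in range(len(rows[0]))]
with open(path, "w", encoding="utf-8") as f:
    for row in rows:
        f.write("|".join(value.ljust(width) for value, width in zip(row, widths)) + "\n")
This left-justifies each value, so short values gain trailing spaces before the next | just like the q output above.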
I am new to Julia so sorry if this question is obvious.
I am trying to use Julia to help me run a series of finite element models, which use a text input file to give instructions to the finite element solver. Basically, I would like to use Julia to read in the base input file, edit some parameters on some lines of the file and then write it as a new file. I am getting hung up on a couple things though.
Currently, I am reading in the file like this
mdl = "fullmodelSVTV"; #name of input file
A = readlines(mdl*".inp")
This reads each line of the file in as a separate string in a vector, which I like because it makes it easier to edit the sections I want, but it also makes things more difficult when I try to write to a new file.
I am writing the file like this.
io = open("name.inp","w")
print(io,A)
close(io)
When I try to write to a new file, the output ends up looking like this:
["string at index 1","string at index 2","string at index 3"...]
What I would like to do is output this the exact same way it is read in, with the string at each index of the vector on its own line. I would also like to remove the brackets and quotation marks from the file, as they might interfere with the finite element solver.
I think I have found a way to concatenate all of the strings at each index, separated by a newline, as shown below.
conc = ""
for i in 1:length(A)
    conc = conc * "\n" * A[i]
end
The issue with this is that it takes a long time given the size of the input files I am working with, and I feel like there has to be a better way to achieve my goal.
I also cannot find a way to remove the brackets or quotation marks when writing the file.
So, I'm wondering if anyone has any advice for a better way to write these text files in terms of both concatenating all of the strings from the vector when outputting as well as outputting without the brackets and quotation marks.
Thanks, any advice is appreciated.
The issue with print(io,A) is that it is printing a representation of the vector, but in fact you want to print each element of the vector. To do so, you can simply print each line in a loop:
open("name.inp", "w") do io
for line in A
println(io, line)
end
end
This avoids the overhead of string concatenation.
The appearance of "textparcali" in the RStudio source editor was as follows.
In textparcali (a tbl_df), I ran the following code to delete single-character words:
textparcali$word<-gsub("\\W*\\b\\w\\b\\W*",'', textparcali$word)
But the deletion behaved strangely. You can see it in the picture below; please note rows 67 and 50.
Everything was fine for row 50 and rows like it. However, this was not the case for row 67 (and I think there are others like it).
I focused on one row (67) to understand why it was handled incorrectly. I had already seen what this row contains in the editor, but I also wanted to look at the console, so I wrote the following at the console:
textparcali$word[67]
The word in row 67 looks different in the console. There is a character that does not survive copy and paste but, surprisingly, does appear in the console:
The reason I show it as a picture is that this character disappears after copy-paste.
You can download the file containing this character from the link below. However, you should open it with Notepad++.
Character.txt
gsub did its job correctly, so how is this possible? What is the name of this character? When I try to write code to remove this character, the " sign changes and the character is not deleted.
The textparcali$word<-gsub('[[:punct:]]+',' ',textparcali$word) command also does not work.
What is the explanation for this? I do not know. Is there a way to remove this character? What caused it? I know I am asking a lot.
Thank you all.
(I apologize for the bad scribbles in the pictures.)
I found the surprise character.
It is Combining Dot Above Right ( ͘ ), U+0358.
The following is the code required to eliminate this character.
c<-"surprise character"
c
[1] "\u0358"
textparcali$word<-gsub("\u0358","",textparcali$word,ignore.case = FALSE)
textparcali$word<-gsub("\u307","",textparcali$word,ignore.case = FALSE)
Code 0307 did the job for me. However, you should determine what the actual code point is in your own text; otherwise the character code you remove may be wrong.
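As a side note, a quick way to determine the exact code points hiding in a word (sketched here in Python rather than R, with a hypothetical sample string) is to print each character together with its code point and Unicode name:
import unicodedata
# Hypothetical sample: "bir" with an invisible COMBINING DOT ABOVE attached.
word = "bir\u0307"
# Print every character with its code point and official Unicode name,
# so invisible combining characters become visible.
for ch in word:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch, 'UNKNOWN')}  {ch!r}")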
More detailed information can be found in the links below.
https://gist.github.com/ngs/2782436
https://www.charbase.com/0358-unicode-combining-dot-above-right
Thanks a lot!
I am trying to run Teradata fexp (FastExport) with a simple SQL script.
The select output column is a string expression and as such results in 2 extra length indicator bytes at the start of each row output.
I have searched for solutions online to the problem. I would like to avoid having to post-process if possible.
There is a thread suggesting the possibility of using an OUTMOD. I don't know what that is.
https://forums.teradata.com/forum/tools/fastexport-remove-binaryindicator-values-in-outmod
http://teradataforum.com/teradata/20100726_155313.htm
And yet another thread suggests casting to a fixed width string type but this would result in padding which I'd like to avoid.
https://forums.teradata.com/forum/tools/fexp-data-doubt
The desired output is actually a delimited plain text file. Is there a way to do it?
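If post-processing does turn out to be acceptable after all, one option is a small script that strips the length-indicator bytes. The following is only a hedged Python sketch: it assumes each record starts with a 2-byte little-endian length indicator followed by exactly that many bytes of row text, and it uses export.dat and export.txt as placeholder file names; the byte order and any end-of-record byte should be verified against the real fexp output first.
import struct
# Strip an assumed 2-byte length prefix from every record and write plain text rows.
with open("export.dat", "rb") as src, open("export.txt", "w", encoding="utf-8") as dst:
    while True:
        header = src.read(2)
        if len(header) < 2:
            break  # reached end of file
        (length,) = struct.unpack("<H", header)  # assumed little-endian
        row = src.read(length).decode("latin-1")
        dst.write(row + "\n")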
I'm having trouble reading this table into R:
http://www.census.gov/popest/about/geo/state_geocodes_v2012.txt
I tried all of the following:
read.table("http://www.census.gov/popest/about/geo/state_geocodes_v2012.txt")
read.table("http://www.census.gov/popest/about/geo/state_geocodes_v2012.txt",skip=7,header=FALSE)
read.table("http://www.census.gov/popest/about/geo/state_geocodes_v2012.txt",skip=8,header=FALSE)
read.table("http://www.census.gov/popest/about/geo/state_geocodes_v2012.txt",skip=10,header=FALSE)
If I tell it that the separator is a tab, I get the wrong table:
d = read.table(file="http://www.census.gov/popest/about/geo/state_geocodes_v2012.txt",header=FALSE,skip=7,sep="\t")
The only thing that seems to work is readLines, but then I don't know how to get a data.frame out of the lines.
d =readLines("http://www.census.gov/popest/about/geo/state_geocodes_v2012.txt")
Any suggestions? Thanks.
I agree that read.fwf will work, once you've worked out the widths.
But, yeah, I just hate it when people allow whitespace inside elements (e.g. "South Dakota"). One other thing you can do is edit the source text file, replacing every run of two or more spaces with a tab. That will leave the state names as-is but give you a workable delimiter.
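If editing the file by hand is unappealing, that same substitution is easy to script. A minimal sketch in Python, assuming the file has already been downloaded locally as state_geocodes_v2012.txt (a placeholder name):
import re
# Replace every run of two or more spaces with a single tab, so multi-word
# names such as "South Dakota" stay intact while the columns gain a clean
# delimiter that read.table (or any TSV reader) can use.
with open("state_geocodes_v2012.txt", encoding="latin-1") as src:
    text = src.read()
with open("state_geocodes_v2012.tsv", "w", encoding="utf-8") as dst:
    dst.write(re.sub(r" {2,}", "\t", text))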
I have a csv, and each line reads as follows:
"http://www.videourl.com/video,video title,video duration,thumbnail,<iframe src=""http://embed.videourl.com/video"" frameborder=0 width=510 height=400 scrolling=no> </iframe>,tag 1,tag 2",,,,,,,,,,,,,,,,,,,,,,,,,,
Is there a program I can use to clean this up? I'm trying to import it to wordpress and map it to current fields, but it isn't functioning properly. Any suggestions?
Just use search and replace in this case. Remove the commas at the end and then replace the remaining commas with ",".
Should anyone else have the same issue: know that this solution will only work with data much like the example given. If the data has a lot of text and there are commas within the text that need to be kept, then search-and-replacing commas will not work. Using regex would be the next option, and that can be done in Notepad++.
However, I think the regex pattern depends on the data, so there is not much point creating an example here.
PHP could also be used to explode each line, removing values that match one of several regexes (e.g. URL, money); then what is left could be (depending on the data again) just a block of text. That approach may not work if there are two or more columns with a lot of text.
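For data that really does match the example line, the simple search-and-replace clean-up can also be scripted. Below is a rough Python sketch, treating input.csv and output.csv as placeholder file names and assuming every line is one big quoted field followed by a run of empty trailing columns, as in the example above:
import csv
# Drop the trailing empty columns, then split the remaining quoted blob on
# commas so each value becomes its own properly quoted CSV field.
with open("input.csv", newline="", encoding="utf-8") as src, \
     open("output.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst, quoting=csv.QUOTE_ALL)
    for row in csv.reader(src):
        fields = [f for f in row if f.strip()]
        parts = fields[0].split(",") if fields else []
        writer.writerow(parts)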