Reading all observations from a csv file [closed] - r

I have imported this file into R. The only problem is that there are 380 observations and it only reads the first 100. How can I get the rest of it? Here it is:
BPL16_17 <- read.csv("BPL16:17.csv")
BPL16_17
Thanks

Personally, I always recommend using readr::read_csv over read.csv.
While I am unsure why read.csv would be limited to 100 observations (this has not been true for many years now, my mistake), read_csv has no such limit and handles data frames much better, especially dates and times, and it doesn't convert strings to factors by default.
https://github.com/tidyverse/readr
Also, a great resource is this chapter from the R for Data Science book, which is always available online for free.
http://r4ds.had.co.nz/data-import.html
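A minimal sketch of that suggestion (the file name is taken from the question; the column types are whatever read_csv guesses): read the file with readr and confirm that all 380 observations arrived, since the console only prints a preview of a tibble by default.

library(readr)

# File name as given in the question; adjust the path as needed
BPL16_17 <- read_csv("BPL16:17.csv")

nrow(BPL16_17)            # should report 380 if every observation was read
print(BPL16_17, n = Inf)  # print all rows instead of the default preview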

Related

Computed variable gives unexpected counts [closed]

I have a problem with the coding of some variables. I am working on data for Lebanon in R, using two different datasets, the World Value Survey and the Arab Barometer. Regardless of the dataset I am using, when I try to recode a variable for only one country (in this case Lebanon), the values of the variable at the end of the recoding are entirely wrong.
I have tried the same recoding with other variables and with another dataset, but the problem remains, and the values are still much larger than they should be.
As can be seen from the output of the 'table' command, the counts after recoding are very different.
As a beginner, I'm sure my question is trivial, but I'm asking for help to get unstuck.
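Since the question does not include its code, here is a minimal sketch of the workflow it describes, with hypothetical dataset and variable names (wvs, country, trust). A common cause of inflated counts is recoding on the full dataset instead of on the one-country subset first.

library(dplyr)

# `wvs` stands in for the full survey data; `country` and `trust` are hypothetical names
wvs_lebanon <- wvs %>%
  filter(country == "Lebanon") %>%          # restrict to one country before recoding
  mutate(trust_recoded = case_when(
    trust == 1 ~ "Most people can be trusted",
    trust == 2 ~ "Need to be very careful",
    TRUE       ~ NA_character_
  ))

table(wvs_lebanon$trust_recoded)                  # counts after recoding
table(wvs$trust[wvs$country == "Lebanon"])        # cross-check against the raw values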

How to create this chart in R? [closed]

I have a task that is a bit complicated for my knowledge of R. I need to reproduce the graphic in the figure in R; I performed several searches and could not find anything. The main thing is to be able to reproduce the graphic (it doesn't have to be identical); the subtitles are not so important. Any ideas on how to do it, or should I just use another program? Thanks!!
Check also the facet_share() function of the ggpol package, which is very handy for population pyramids/comparisons.
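A minimal sketch of that suggestion, using simulated ages and genders and following the pattern in the ggpol documentation (the aesthetics and facet arguments are assumptions, not the questioner's data):

library(ggplot2)
library(ggpol)
library(dplyr)

# Simulated data: 1000 people with an age and a gender
df <- data.frame(
  age    = sample(1:20, 1000, replace = TRUE),
  gender = sample(c("Male", "Female"), 1000, replace = TRUE)
)

# Count per gender and age, and negate one side so the two panels mirror each other
df_counts <- df %>%
  count(gender, age) %>%
  mutate(n = ifelse(gender == "Female", -n, n))

ggplot(df_counts, aes(x = factor(age), y = n, fill = gender)) +
  geom_col() +
  facet_share(~gender, dir = "h", scales = "free", reverse_num = TRUE) +
  coord_flip() +
  labs(x = "Age", y = "Count")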

R cannot export all rows to csv [closed]

Fundamental stuff, but I couldn't seem to get around this. I performed the following process:
d1 <- read.csv("hourly.csv", sep = ",", header = FALSE)
names(d1) <- c("date", "rain", "q", "qa", "qb")
d2 <- read.csv("event.csv", sep = ",", header = FALSE)
names(d2) <- c("enum", "st", "et", "rain2", "qtot")
for (k in 1:206) {
  st <- d2[k, 2]
  et <- d2[k, 3]
  Datetime <- d1[st, ]
  print(Datetime)
  write.csv(Datetime, file = "DatesA3.csv")
}
In the end, I exported the results to a csv file. There are 206 rows altogether and they display fine in R, but when exporting, only the last row ends up in the csv file. I tried multiple things such as write.table, append, etc., but nothing seems to work.
How do I export every row into one file?
Please advise, and thank you!
Datetime[k, ] <- d1[st, ] # instead, otherwise you overwrite
# and write the result outside the loop
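Expanding that comment into a minimal sketch (column names and the 206 events are taken from the question): accumulate the rows inside the loop and call write.csv once, outside it.

d1 <- read.csv("hourly.csv", sep = ",", header = FALSE)
names(d1) <- c("date", "rain", "q", "qa", "qb")
d2 <- read.csv("event.csv", sep = ",", header = FALSE)
names(d2) <- c("enum", "st", "et", "rain2", "qtot")

Datetime <- d1[0, ]            # empty data frame with d1's columns
for (k in 1:206) {
  st <- d2[k, 2]
  Datetime[k, ] <- d1[st, ]    # accumulate rows instead of overwriting
}

# Write the accumulated result once, outside the loop
write.csv(Datetime, file = "DatesA3.csv", row.names = FALSE)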

Find duplicate registers in R [closed]

I have an Excel file with a list of emails and the channels that collected them. How can I find out, using R, how many emails per channel are duplicated, and automate it (every time I import a different file I just have to run it and get the results)?
Thank you!!
Assuming the "df" dataframe has the relevant variables under the names "channel" and "email", then:
To get the number of unique channel-email pairs:
dim(unique(df[c("channel", "email")]))[1]
To get the sum of all channel-email observations:
sum(table(df$channel, df$email))
To get the number of duplicates, simply subtract the former from the latter:
sum(table(df$channel, df$email)) - dim(unique(df[c("channel", "email")]))[1]
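To automate this for a new file each time, a minimal sketch (the file name is hypothetical; the "channel" and "email" column names follow the assumption above), which also breaks the duplicates down per channel:

library(readxl)

# Hypothetical file name; the sheet is assumed to have "channel" and "email" columns
df <- read_excel("emails.xlsx")

# Total duplicates: all observations minus unique channel-email pairs
# (equivalent to the subtraction above when there are no missing values)
nrow(df) - nrow(unique(df[c("channel", "email")]))

# Duplicated rows broken down per channel
dups <- df[duplicated(df[c("channel", "email")]), ]
table(dups$channel)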

Excel messes up some dots (".") in a number [closed]

I have a tab delimited file:
I created this file in R as a data.frame and wrote it to the above file using write.table(dataFrame,"filepath",row.names=FALSE). However, after I opened this in Excel I got some ##### in my Excel file:
The only difference between the tab-delimited file and the Excel file is that in the Excel file the . is omitted, but I have no idea how this is possible because most of the other numbers are just fine. Any suggestion to fix this problem is welcome.
Update
I can fit the data in the column:
However, there should be a . after the 1.
Probably your import settings are wrong with regard to the separators for thousands and decimals. Notice that the problem arises when the part before the decimal point is 1 or greater: Excel interprets the . as a thousands separator in that case, because it wouldn't make sense for Excel to treat a number beginning with 0 as thousands. So you have to fix this:
You have to do this while importing the file: in the last step, click on Advanced and set the Decimal separator to . and the Thousands separator to , (or vice versa, whichever you prefer of course, but in your case it has to be this).
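On the R side you can also make the decimal mark explicit when writing the file, so that it matches whatever Excel's import settings expect. A minimal sketch with a made-up data frame (write.table()'s dec argument controls the decimal mark):

# Hypothetical data frame standing in for the one in the question
dataFrame <- data.frame(id = 1:3, value = c(0.977, 1.234, 12.5))

# Tab-delimited output with "." as the decimal mark (match Excel's Decimal separator setting)
write.table(dataFrame, "values_dot.txt", sep = "\t", dec = ".", row.names = FALSE)

# Or "," as the decimal mark, for locales where Excel expects a comma
write.table(dataFrame, "values_comma.txt", sep = "\t", dec = ",", row.names = FALSE)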

Resources