How to create .npy files from SAC or mseed files - ObsPy

How can I create .npy files from SAC or mseed seismograph recordings, for both a single trace and multiple traces?
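One way to do this with ObsPy and NumPy, sketched under the assumption that the files can be opened by obspy.read (which auto-detects SAC and MiniSEED); the filenames below are placeholders:

import numpy as np
from obspy import read

# Single trace: Trace.data is already a NumPy array.
st = read("example.sac")                 # read() returns a Stream
np.save("single_trace.npy", st[0].data)

# Multiple traces: stack into one 2-D array of shape (n_traces, n_samples).
# This assumes every trace has the same number of samples; if not, trim or
# resample first, or save each trace to its own .npy file.
st = read("example.mseed")
np.save("multi_trace.npy", np.stack([tr.data for tr in st]))

The saved arrays can be loaded back with np.load.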

Related

How to modify numbers in Excel using R?

I have an Excel file test.xlsx on my desktop and a dataframe in R. How can I overwrite numbers in the Excel file's sheet "Capitals"? Say the table in Excel starts at B6 and has the same size as the table in R.
I tried to run the following command; however, it creates a new Excel file, whereas I need to make changes in the existing one.
write.xlsx(df, file = "//Desktop//test.xlsx",
           sheetName = "Capitals", append = TRUE)

How to write streamlit UploadedFile to temporary directory with original filename?

Streamlit has a function that allows convenient upload of multiple files.
files = st.file_uploader('File upload', type=['txt'],accept_multiple_files=True)
Then files contains a list of UploadedFile objects, which are BytesIO-like. However, it is not clear how to get the original filenames and write each file to a temporary directory, nor whether that approach would conflict with the way Streamlit operates: it basically reruns the underlying script every time an action is performed.
I am using some tools that read files based on their path given as a string. They are expected to be read from the hard drive.
You can access the name of the file with files[i].name and its content with files[i].read().
It looks like this in the end:
import os
import streamlit as st

files = st.file_uploader("File upload", type=["txt"], accept_multiple_files=True)
if len(files) == 0:
    st.error("No files were uploaded")
for i in range(len(files)):
    bytes_data = files[i].read()  # read the content of the file in binary
    print(files[i].name, bytes_data)
    with open(os.path.join("/tmp", files[i].name), "wb") as f:
        f.write(bytes_data)  # write this content elsewhere
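The hard-coded "/tmp" does not exist on Windows. A variation (my own sketch, not part of the original answer) that uses the standard tempfile module instead; note that the directory and everything in it are deleted when the with-block exits, so any path-based tools must run inside it:

import os
import tempfile
import streamlit as st

files = st.file_uploader("File upload", type=["txt"], accept_multiple_files=True)
if files:
    with tempfile.TemporaryDirectory() as tmpdir:
        paths = []
        for uploaded in files:
            # keep the original filename inside the temporary directory
            path = os.path.join(tmpdir, uploaded.name)
            with open(path, "wb") as f:
                f.write(uploaded.read())
            paths.append(path)
        # hand `paths` to the tools that expect files on disk here,
        # before the directory is cleaned up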

How to automate the process of unzip steps in RStudio

I have downloaded the transportation history data. The data for each year contain the same number of files with exactly the same names. Each year's data was zipped into a single file. I am trying to automate the process of unzipping.
For example: I have three zip files named (2014.zip, 2013.zip, 2012.zip) and each zip file contains three files (car.csv, truck.csv, train.csv). What I want is to unzip these files into their corresponding folders, which will be created on the fly. How can I automate this process in RStudio? Thanks.
filenames <- list.files(pattern = "\\.zip$")  # 2014.zip, 2013.zip, 2012.zip
lapply(filenames, function(x) {
  foldername <- substr(x, 1, nchar(x) - 4)    # e.g. "2014" from "2014.zip"
  if (!file.exists(x)) {
    download.file(url, x)                     # url must point at the zip for x
  }
  if (!file.exists(foldername)) {
    dir.create(foldername)
  }
  unzip(x)
  for (file in list.files(pattern = "\\.csv$")) {
    file.copy(file, foldername)
    file.remove(file)
  }
})

DEM to Raster for multiple files

I'm trying to design a program to help me convert 1000+ DEM files into USGS raster files, using the method "arcpy.DEMToRaster_conversion" in ArcGIS. My idea is to use an OpenFileDialog to allow multiple selection of these files, then use an array to save the names, use them as the inDEM, and save the outRaster in tif format.
file_path = tkFileDialog.askopenfilename(filetypes=(("DEM", "*.dem"),),multiple=1)
This is how I open multiple files in the dialog, but I'm not sure how to save the selection so as to fulfill the following steps. Can someone help me?
This code will find all DEMs in a folder, apply the conversion function, and save the output tifs to another folder:
# START USER INPUT
datadir = "Y:/input_rasters/"     # directory where the .dem files are located
outputdir = "Y:/output_rasters/"  # existing directory where output tifs are to be saved
# END USER INPUT

import os
import arcpy

arcpy.env.overwriteOutput = True
arcpy.env.workspace = datadir
arcpy.env.compression = "LZW"
DEMList = arcpy.ListFiles("*.dem")
for f in DEMList:
    print("starting %s" % f)
    rastername = os.path.join(datadir, f)
    outrastername = os.path.join(outputdir, f[:-4] + ".tif")
    arcpy.DEMToRaster_conversion(rastername, outrastername)
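If you would rather keep the dialog-based selection from the question instead of listing a folder, here is a sketch of that route (my own, reusing the same DEMToRaster_conversion call as above). With multiple=1, askopenfilename returns the selected paths as a sequence you can loop over, although on some Tk builds it comes back as a single string that needs splitting first:

import os
import tkFileDialog  # Python 2, to match the code above; tkinter.filedialog in Python 3
import arcpy

arcpy.env.overwriteOutput = True
file_paths = tkFileDialog.askopenfilename(filetypes=(("DEM", "*.dem"),), multiple=1)
for dem in file_paths:
    # save each tif next to its input, keeping the base name
    outrastername = os.path.splitext(dem)[0] + ".tif"
    arcpy.DEMToRaster_conversion(dem, outrastername)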

How to use R to Iterate through Subfolders and bind CSV files of the same ID?

I am stuck. I need a way to iterate through a bunch of subfolders in a directory, pull out 4 .csv files, bind the contents of those 4 .csv files, then write out the new .csv to a new directory using the name of the initial subfolder as the name of the new .csv.
I know R can do this. But I am stuck at how to iterate across the subfolders and bind the csv files together. My obstacle is that each subfolder contains the same 4 .csv files using the same 8-digit ids. For example, subfolder A contains 09061234.csv, 09061345.csv, 09061456.csv, and 09061560.csv. Subfolder B contains 09061234.csv, 09061345.csv, 09061456.csv, and 09061560.csv. (...). There are 42 subfolders, and hence 168 csv files with the same names. I want to compact the files down to 42.
I can use list.files to retrieve all the subfolders. But then what?
##Get Files from directory
TF = "H:/working/TC/TMS/Counts/June09"
##List Sub folders
SF <- list.files(TF)
##List of File names inside folders
FN <- list.files(SF)
#Returns list of 168 filenames
###?????###
#How to iterate through each subfolder, read each 8-digit integer id file,
#bind them all together into one single csv,
#Then write to new directory using
#the name of the subfolder as the name of the new csv?
There is probably a way to do this easily but I am a noob with R. Something involving functions, paste and write.table perhaps? Any hints/help/suggestions are greatly appreciated. Thanks!
You can use the recursive=T option for list.files:
lapply(c('1234', '1345', '1456', '1560'), function(x) {
  ## find every file whose name contains the id, searching all subfolders
  sources.files <- list.files(path = TF,
                              recursive = T,
                              pattern = paste('0906', x, '\\.csv$', sep = ''),
                              full.names = T)
  ## you read all files with the id and bind them
  dat <- do.call(rbind, lapply(sources.files, read.csv))
  ## write the aggregated file for the id
  write.csv(dat, paste('agg', x, '.csv', sep = ''), row.names = FALSE)
})
After some tweaking of agstudy's code, I came up with the solution I was ultimately after. There were a couple of missing pieces that are more due to the nature of my specific problem, so I am leaving agstudy's answer as "accepted".
Turns out a function really wasn't needed. At least not for now. If I need to perform this same task again, I will create a function out of it. For now, I can solve this particular problem without it.
Also, for my instance, I needed a condition to handle any non-csv files that may have lived in the subfolders. With that added, R throws warnings and skips any files that are not comma-separated.
Code:
## Define directory path ##
TF = "H:/working/TC/TMS/Counts/June09"
## Define the list of ids to search for ##
x <- c('1234', '1345', '1456', '1560')
## List of subfolder files where the file name starts with "0906" ##
SF <- list.files(TF, recursive = T, pattern = "^0906.*\\.csv$")
## Keep only files with a csv extension, skipping the non-csv files in each folder ##
sources.files <- list.files(TF, recursive = T, full.names = T)
sources.files <- sources.files[grepl("\\.csv$", sources.files)]
dat <- do.call(rbind, lapply(sources.files, read.csv))
# the warnings thrown are ok -- these are generated due to the fact that
# some of the folders contain .xls files
write.table(dat, file = "H:/working/TC/TMS/June09Output/June09Batched.csv",
            row.names = FALSE, sep = ",")
