choose.files() defaulting to last directory used - r

According to the documentation for the choose.files function:
choose.files(default = "", caption = "Select files",
             multi = TRUE, filters = Filters,
             index = nrow(Filters))
If you would like to display files in a particular directory, give a
fully qualified file mask (e.g., "c:\*.*") in the default argument.
If a directory is not given, the dialog will start in the current
directory the first time, and remember the last directory used on
subsequent invocations.
My code is as follows:
AZPic = choose.files(default = "", caption = "Select Azimuth Picture", multi = FALSE, filters = Filters[c("All","jpeg"),])
STSPic = choose.files(default = "", caption = "Select Side to Side Elevation Picture", multi = FALSE, filters = Filters[c("All","jpeg"),])
FTBPic = choose.files(default = "", caption = "Select Front to Back Elevation Picture", multi = FALSE, filters = Filters[c("All","jpeg"),])
OrientationPic = choose.files(default = "", caption = "Select Orientation Picture", filters = Filters[c("All","jpeg"),])
Now when I run this code, it defaults to my home directory for all 4 calls.
It starts there for the first call, of course, but shouldn't it remember the folder I navigate to for the later calls?
I typically hop through another 3 or 4 folders to find the pictures for the job, but all of the pictures for a given run of the program are in the same folder.
Is there something I'm missing?
Thanks in advance.
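In the meantime, a workaround sketch based on the documented default argument (assumptions: Windows, base R, and that the first picture picked lives in the same job folder as the rest): remember the directory of the first selection and pass it back to the later dialogs as a fully qualified file mask.
AZPic <- choose.files(default = "", caption = "Select Azimuth Picture",
                      multi = FALSE, filters = Filters[c("All", "jpeg"), ])
# Build a "<dir>/*.*" mask from the first choice so the later dialogs start there
jobDir <- file.path(dirname(AZPic), "*.*")
STSPic <- choose.files(default = jobDir, caption = "Select Side to Side Elevation Picture",
                       multi = FALSE, filters = Filters[c("All", "jpeg"), ])
FTBPic <- choose.files(default = jobDir, caption = "Select Front to Back Elevation Picture",
                       multi = FALSE, filters = Filters[c("All", "jpeg"), ])
OrientationPic <- choose.files(default = jobDir, caption = "Select Orientation Picture",
                               filters = Filters[c("All", "jpeg"), ])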

Related

Telegraf JSONV2 - Optional paths or missing paths

I have Telegraf pulling data from the OpenWeather API, which returns JSON. On occasion a field such as rain is missing: if no rain is predicted, the key simply isn't in the response.
My config:
[[inputs.http]]
  urls = ["https://api.openweathermap.org/data/3.0/onecall?units=metric&lat=51.123456&lon=-0.123456&appid=xxxx&exclude=current,hourly,alerts,minutely"]
  interval = "30s"
  tagexclude = ["url", "host"]
  data_format = "json_v2"

  [[inputs.http.json_v2]]
    measurement_name = "openweather"
    timestamp_path = "daily.0.dt"
    timestamp_format = "unix"

    [[inputs.http.json_v2.field]]
      path = "daily.0.clouds"

    [[inputs.http.json_v2.field]]
      path = "daily.0.temp.day"

    [[inputs.http.json_v2.field]]
      path = "daily.0.rain"
I cannot work out how to tell the input that the field is optional. Either null or 0 would be fine.
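One possible direction, hedged because it depends on the Telegraf version: newer releases of the json_v2 parser document an optional setting that suppresses the error when a configured path does not match the JSON. Whether it is accepted on a field sub-table in your release is an assumption here; check the data_format/json_v2 docs for your version.
    [[inputs.http.json_v2.field]]
      path = "daily.0.rain"
      # Assumption: this Telegraf version honours `optional` on a field entry;
      # it tells the parser not to raise an error when the path is absent.
      optional = true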

R: search_fullarchive() and Twitter Academic research API track

I was wondering whether anyone has found a way to use search_fullarchive() from the "rtweet" package in R with the new Twitter Academic Research project track.
The problem is whenever I try to run the following code:
search_fullarchive(q = "sunset", n = 500, env_name = "AcademicProject", fromDate = "202010200000", toDate = "202010220000", safedir = NULL, parse = TRUE, token = bearer_token)
I get the following error "Error: Not a valid access token". Is that because search_fullarchive() is only for paid premium accounts and that doesn't include the new academic track (even though you get full archive access)?
Also, can you retrieve more than 500 tweets (e.g., n = 6000) when using search_fullarchive()?
Thanks in advance!
I've got the same problem with the Twitter Academic Research API. I think if you set n = 100, or just skip the argument, the command will return 100 tweets. Also, the rtweet package does not (yet) support the Academic Research API.
Change your code to this:
search_fullarchive(q = "sunset", n = 500, env_name = "Your Environment Name attained in the Dev Dashboard", fromDate = "202010200000", toDate = "202010220000", safedir = NULL, parse = TRUE, token = t)
Also, the token must be created like this:
t <- create_token(
  app = "App Name",
  'Key',
  'Secret',
  access_token = '',
  access_secret = '',
  set_renv = TRUE
)

Issue importing transfer journal through x++

I have the following problem. The code below works perfectly:
inventJournalTrans.clear();
inventJournalTrans.initFromInventJournalTable(inventJournalTable);
inventJournalTrans.ItemId = "100836M";
frominventDim.InventLocationId="SD";
frominventDim.wMSLocationId = '11_RECEPTION';
fromInventDim.InventSizeId = '1000';
fromInventDim.inventBatchId = 'ID057828-CN';
ToinventDim.InventLocationId = "SD";
ToInventDim.wMSLocationId = '11_A2';
ToInventDim.InventSizeId = '1000';
ToInventDim.inventBatchId = 'T20/0001/1';
ToinventDim = InventDim::findOrCreate(ToinventDim);
frominventDim = InventDim::findOrCreate(frominventDim);
inventJournalTrans.InventDimId = frominventDim.inventDimId;
inventJournalTrans.initFromInventTable(InventTable::find("100836M"));
inventJournalTrans.Qty = -0.5;
inventJournalTrans.ToInventDimId = ToinventDim.inventDimId;
inventJournalTrans.CostAmount = InventJournalTrans.calcCostAmount(-abs(any2real(strReplace('-0.5',',','.'))));
inventJournalTrans.TransDate = SystemDateget();
inventJournalTrans.insert();
inventJournalCheckPost = InventJournalCheckPost::newJournalCheckPost(JournalCheckpostType::Post,inventJournalTable);
inventJournalCheckPost.parmThrowCheckFailed(_throwserror);
inventJournalCheckPost.parmShowInfoResult(_showinforesult);
inventJournalCheckPost.run();
It creates the transfer journal line correctly and posts the transfer journal successfully.
My requirement is to import journal lines from a csv file. I wrote the following code:
inventJournalTrans.clear();
inventJournalTrans.initFromInventJournalTable(inventJournalTable::find(_journalID));
InventJournaltrans.ItemId = conpeek(_filerecord,4);
inventDim_From.InventLocationId = 'SD';
inventDim_From.wMSLocationId = '11_RECEPTION';
InventDim_from.InventSizeId = conpeek(_fileRecord,11);
InventDim_From.inventBatchId = strfmt("%1",conpeek(_fileRecord,5));
InventDim_To.InventLocationId = 'SD';
inventDim_To.wMSLocationId = strfmt("%1",conpeek(_fileRecord,10));
InventDim_To.InventSizeId = conpeek(_fileRecord,11);
InventDim_To.inventBatchId = strfmt("%1",conpeek(_fileRecord,6));
InventDim_From = InventDim::findOrCreate(inventDim_From);
inventDim_To = InventDim::findOrCreate(inventDim_To);
InventJournalTrans.InventDimId = inventDim_From.inventDimId;
InventJournalTrans.initFromInventTable(InventTable::find(conpeek(_filerecord,4)));
inventJournalTrans.Qty = -abs(any2real(strReplace(conpeek(_fileRecord,8),',','.')));
inventJournalTrans.ToInventDimId = inventDim_To.inventDimId;
InventJournalTrans.CostAmount = InventJournalTrans.calcCostAmount(-abs(any2real(strReplace(conpeek(_fileRecord,8),',','.'))));
inventJournalTrans.TransDate = str2date(conpeek(_fileRecord,9),123);
InventJournalTrans.insert();
When I call the insert() method, I get the following error: "Size does not exist for the item ID". When I look in the InventSize table for my item ID, the size does exist. I thought it was an inventDimId problem in inventJournalTrans, but the values are strictly the same as in the first code example. All my data are the same as in the first example, except that they are not hard-coded and come from reading my CSV file.
I spent a lot of time debugging and found nothing wrong, but the error message remains.
I'm using Dynamics AX V4 SP1.
Thanks a lot for any help.
When you read from files, always trim trailing spaces. You can do that using the strRtrim function.
Like so:
InventDim_To.InventSizeId = strRtrim(conpeek(_fileRecord,11));
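The same trimming presumably belongs on every value read from the file, since a trailing space in the size, batch, or location ID will make the dimension lookup fail. A sketch against the field positions used above (assumption: the CSV columns are laid out as in the question):
InventJournalTrans.ItemId     = strRtrim(conpeek(_fileRecord, 4));   // item id
InventDim_From.InventSizeId   = strRtrim(conpeek(_fileRecord, 11));  // size
InventDim_From.inventBatchId  = strRtrim(strfmt("%1", conpeek(_fileRecord, 5)));
InventDim_To.wMSLocationId    = strRtrim(strfmt("%1", conpeek(_fileRecord, 10)));
InventDim_To.InventSizeId     = strRtrim(conpeek(_fileRecord, 11));
InventDim_To.inventBatchId    = strRtrim(strfmt("%1", conpeek(_fileRecord, 6)));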

The New York Times API with R

I'm trying to get article information using The New York Times API. The csv file I get doesn't reflect my filter query. For example, I restricted the source to 'The New York Times', but the file I got contains other sources as well.
Why doesn't the filter query work?
Here's the code.
if (!require("jsonlite")) install.packages("jsonlite")
library(jsonlite)

api = "apikey"

nytime = function () {
  url = paste('http://api.nytimes.com/svc/search/v2/articlesearch.json?',
              '&fq=source:',("The New York Times"),'AND type_of_material:',("News"),
              'AND persons:',("Trump, Donald J"),
              '&begin_date=','20160522&end_date=','20161107&api-key=',api,sep="")

  #get the total number of search results
  initialsearch = fromJSON(url,flatten = T)
  maxPages = round((initialsearch$response$meta$hits / 10)-1)

  #try with the max page limit at 10
  maxPages = ifelse(maxPages >= 10, 10, maxPages)

  #create an empty data frame
  df = data.frame(id=as.numeric(),source=character(),type_of_material=character(),
                  web_url=character())

  #save search results into the data frame
  for(i in 0:maxPages){
    #get the search results of each page
    nytSearch = fromJSON(paste0(url, "&page=", i), flatten = T)
    temp = data.frame(id=1:nrow(nytSearch$response$docs),
                      source = nytSearch$response$docs$source,
                      type_of_material = nytSearch$response$docs$type_of_material,
                      web_url=nytSearch$response$docs$web_url)
    df=rbind(df,temp)
    Sys.sleep(5) #sleep for 5 seconds
  }
  return(df)
}

dt = nytime()
write.csv(dt, "trump.csv")
Here's the csv file I got.
It seems you need to put the () inside the quotes, not outside. Like this:
url = paste('http://api.nytimes.com/svc/search/v2/articlesearch.json?',
            '&fq=source:',"(The New York Times)",'AND type_of_material:',"(News)",
            'AND persons:',"(Trump, Donald J)",
            '&begin_date=','20160522&end_date=','20161107&api-key=',api,sep="")
https://developer.nytimes.com/docs/articlesearch-product/1/overview
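Because the fq value contains spaces and parentheses, it may also be worth percent-encoding it before pasting it into the URL. A small sketch using base R's URLencode (assumption: the rest of the request stays exactly as in the question):
# Build the fq filter separately, then percent-encode it for the request URL
fq <- 'source:(The New York Times) AND type_of_material:(News) AND persons:(Trump, Donald J)'
url <- paste0('http://api.nytimes.com/svc/search/v2/articlesearch.json?',
              'fq=', URLencode(fq, reserved = TRUE),
              '&begin_date=20160522&end_date=20161107&api-key=', api)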

Why is duplicated data being saved to my Excel sheet by this code?

This code is used to scrape data from a website, but the problem is that a large amount of duplicated data is being produced and saved to my Excel sheet.
def extractor():
    time.sleep(10)
    souptree = html.fromstring(driver.page_source)
    tburl = souptree.xpath("//table[contains(@id, 'theDataTable')]//tbody//tr//td[4]//a//@href")
    for tbu in tburl:
        allurl = []
        allurl.append(urllib.parse.urljoin(siteurl, tbu))
        for tb in allurl:
            get_url = requests.get(tb)
            get_soup = html.fromstring(get_url.content)
            pattern = re.compile(r"^\s+|\s*,\s*|\s+$")
            name = get_soup.xpath('//td[@headers="contactName"]//text()')
            phone = get_soup.xpath('//td[@headers="contactPhone"]//text()')
            mail = get_soup.xpath('//td[@headers="contactEmail"]//a//text()')
            artitle = get_soup.xpath('//td[@headers="contactEmail"]//a//@href')
            artit = ([x for x in pattern.split(str(artitle)) if x][-1])
            title = artit[:-2]
            for (nam, pho, mai) in zip(name, phone, mail):
                fname = nam[9:]
                allmails.append(mai)
                allnames.append(fname)
                allphone.append(pho)
                alltitles.append(title)
                fullfile = pd.DataFrame({'Names': allnames, 'Mails': allmails, 'Title': alltitles, 'Phone Numbers': allphone})
                writer = ExcelWriter('G:\\Sheet_Name.xlsx')
                fullfile.to_excel(writer, 'Sheet1', index=False)
                writer.save()
                print(fname, pho, mai, title, sep='\t')

while True:
    time.sleep(10)
    extractor()
    try:
        nextbutton()
    except (WebDriverException):
        driver.refresh()
    except (NoSuchElementException):
        time.sleep(10)
        driver.quit()
I don't want the output to contain duplicates, but almost half or more of the rows are duplicated each time I run the code.
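One likely source of the duplicates is that extractor() can process the same page more than once (for example after driver.refresh() on a WebDriverException) while the accumulator lists keep growing between calls. A hedged sketch (assuming the repeated rows are exact copies): drop duplicates before writing, for instance with a small helper like the hypothetical write_unique below.
import pandas as pd

def write_unique(allnames, allmails, alltitles, allphone, path='G:\\Sheet_Name.xlsx'):
    # Build the frame from the accumulated lists, then drop rows that are
    # identical across all four columns before writing the sheet.
    fullfile = pd.DataFrame({'Names': allnames, 'Mails': allmails,
                             'Title': alltitles, 'Phone Numbers': allphone})
    fullfile = fullfile.drop_duplicates().reset_index(drop=True)
    with pd.ExcelWriter(path) as writer:
        fullfile.to_excel(writer, sheet_name='Sheet1', index=False)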