Has anyone encountered an issue like this before:
I've created a very basic map plotting 39 places in England using Longitude and Latitude points.
I have a table that accompanies the map, and when you select one of the 39 places on the map, I want this to update the table to show only values for this selection.
I've set up the map as a filter but for some reason, only 7 of the 39 work when selected.
I decided to extract the data and save the file as a packaged workbook to send to a colleague to investigate. However, when I extract the data, save it as a packaged workbook, and open it in Reader or Desktop, everything works as it should: all 39 places on the map filter the other sheets correctly.
So I guess my question is: why would my original Desktop version not work, when the extracted/packaged copy works perfectly?
For additional information, I am connecting to a live SQL server data source.
Edit: Added screenshots. I don't have this problem when I extract data and save as a packaged workbook. All the filters work.
I have finally solved this one.
I was using the 'Use as Filter' option on the actual map. What I have done instead is go to Dashboard > Actions and Add Action, making sure that the Target Filters are set to Selected Fields and not All Fields.
I think I'll avoid the 'Use as Filter' option from now on as that was a massive pain. It still makes no sense that it worked when extracted/saved as a packaged workbook, but then Tableau has lots of these little issues when you start using it more and more.
I'm currently working on an R Shiny app that uses googlesheets4 to pull in a large dataset from Google Sheets on app launch. Loading this dataset takes ~2 minutes, which stalls the entire app's load time.
The only visual in my app is based on this Google Sheets data, so it is very dependent on this specific dataset. Once the dataset gets pulled into the app, it is filtered down and becomes much smaller (85,000 rows → 1,000 rows). The Google Sheets data is updated every day, so I don't have the luxury of downloading it once and storing it as a .csv forever.
There are two different fixes for this that I have tried but have been unsuccessful...curious if anyone has any thoughts.
Have a separate app running. My first idea was to create a separate Shiny app entirely, that would have a sole purpose of pulling the GoogleSheets df once a day. Once it pulls it, it would conduct the necessary data cleaning to get it down to ~1,000 rows, and then would push the smaller df to a different GoogleSheet link. Then, my original app with the visual would just always reference that new GoogleSheet (which would take much less time to load in).
The problem I ran into here is that I couldn't figure out how to write a new GoogleSheets doc using googlesheets4. If anyone has any idea how to do that it would be much appreciated.
Temporarily delay the load of the Google Sheets data, and let the visual populate first. My second idea was to delay the code that pulls in the Google Sheets df until after launch, letting my visual populate first (using old data) and then having the Google Sheets pull happen. Once the pull is complete, the visual would re-populate with the updated data.
I couldn't figure out the best/right way to make this happen. I tried messing around with Sys.sleep() and futures/promises but couldn't get things to work correctly.
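For what it's worth, the rough pattern I was attempting with promises looked something like this (heavily simplified; the sheet ID, the filtering step, and the output are placeholders, and I've left authentication out entirely):
library(shiny)
library(promises)
library(future)
library(dplyr)
plan(multisession)

server <- function(input, output, session) {
  # Kick off the slow Google Sheets read on a background R process
  sheet_data <- reactive({
    future_promise({
      googlesheets4::read_sheet("PLACEHOLDER_SHEET_ID")
    }) %...>%
      filter(keep_flag)              # stand-in for my real cleaning/filtering
  })

  output$main_plot <- renderPlot({
    # renderPlot accepts a promise, so the rest of the UI can come up first
    sheet_data() %...>% plot()
  })
}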
Curious if anyone has any thoughts on my 2 different approaches, or if there's a better approach I'm just not considering...
Thanks!
There is a function called write_sheet that allows you to write data to a Google Sheet. Does that work for you?
googlesheets4::write_sheet(data = your_data,
                           ss = spread_sheet_identifier,
                           sheet = "name_of_sheet_to_write_in")
If you only want to append rows without deleting everything already in the sheet, the function is sheet_append:
googlesheets4::sheet_append(data = your_data,
                            ss = spread_sheet_identifier,
                            sheet = "name_of_sheet_to_write_in")
I'm not sure you can store the credentials in a safe way, but couldn't you use GitHub Actions? Or alternatively a cron job on your local computer?
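As a rough sketch, the daily job that a cron entry or GitHub Actions workflow would run could look like this — the service-account path, sheet IDs, and the filtering step are all placeholders:
library(googlesheets4)
library(dplyr)

# Non-interactive auth, e.g. with a service account key stored as a secret
gs4_auth(path = "service-account.json")          # placeholder path

raw <- read_sheet("SOURCE_SHEET_ID")             # the big ~85,000-row sheet

small <- raw %>%
  filter(keep_flag)                              # stand-in for the real cleaning logic

# Overwrite the small sheet that the Shiny app reads at startup
write_sheet(small, ss = "TARGET_SHEET_ID", sheet = "clean_data")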
I have two reports made with flexdashboard inside the same project.
The only difference is the data source; the data wrangling is the same for both. Object names are the same because the code is essentially a copy (the only lines that differ are the data load, which is filtered by country).
Every chunk has the cache=FALSE option set to force fresh data on every render.
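To make the setup concrete, the data-loading chunk in each file is shaped roughly like this (the load function, column names, and wrangling shown here are just illustrative):
```{r load-data, cache=FALSE}
library(dplyr)
# Germany.Rmd -- France.Rmd is identical except for the country value below
df <- load_country_data("Germany")   # placeholder for my real data-load lines
monthly <- df %>%
  group_by(month) %>%
  summarise(total = sum(value))      # same wrangling code in both files
```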
Rendering my flexdashboard Germany.Rmd works fine, but if I then open the other flexdashboard, France.Rmd, the data and charts shown are from Germany.
I suppose the objects (df, plot, ...) having the same names are conflicting, even with caching disabled.
Is there any way of solving this issue to avoid creating a project for every country?
I have a Microsoft Access 2010 database program that lets users update certain information in it based on their permissions. There are two versions, one for each facility. I recently added a new form to the 2nd facility's file by copying/pasting the form from the 1st file and then adding the necessary queries.
Everything worked great except for one column that contains a RowNumber formula for displaying the, you guessed it, row number. This was working in the 1st file and I have made sure that every property matches exactly, but the image below is what I get when I try to open the form. As if to add more chaos to the mix, the 1st file started showing the same results as well, despite working perfectly fine before and me not even touching that specific text box.
I have Googled this issue but have not seen anything with this specific result. Can anyone explain what this means?
Column in question (where the formula is located):
Name: Text57
Control Source: =RowNum([Form])
Visible: No
Datasheet Caption: Seq
Its associated label:
Name: Label58
Caption: Text57
Visible: Yes
Row numbering on the fly is normally completely academic in Access, as it has no meaning: the row number on a form has no relationship to anything in the data and can change if the underlying data changes in any way.
The RowNum() function is not built into Access as far as I can tell, so I'm not sure how it could work.
I currently have an Access database which pulls data from an Oracle database for various countries and is currently around 1.3 GB. However, more countries and dimensions are to be added, which will further increase its size. My estimate would be around 2 GB, hence the title.
Per country, there is one table. These tables are then linked in a second Access db, where the user can select a country through a form which pulls the data from the respective linked table in Access db1, aggregates it by month, and writes it into a table. This table is then queried from Excel and some graphs are displayed.
Also, there is another form where the user can select certain keys, such as business area, and split the data by them. This can be pulled into a second sheet in the same Excel file.
The users would not only want to be able to filter and group by more keys, but also be able to customize the period of time for which data is displayed more freely, such as from day xx to day yy aggregated by week or month (currently, only month-wise starting from the first of each month is supported).
Since the current Access-Access-Excel solution seems quite cumbersome to me, I was wondering whether I might build the extensions required by the users of this report in R, using either Shiny or R Markdown. I know that Shiny does not allow files larger than 30 MB to be uploaded, but I would plan on running it locally. I was just not able to find a file size limit for this - or do the same restrictions apply?
I know some R, and I think the data aggregations needed could be done very quickly using dplyr. The problem is that the users do not know R, so the report needs to be highly customizable while requiring no technical knowledge. Since I have no pre-existing knowledge of Shiny or R Markdown, I was wondering whether it is worth going through the trouble of learning one of them well enough to implement this report.
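To give an idea of the kind of aggregation I mean, in dplyr it would be roughly this (table, column, and input names are made up):
library(dplyr)
library(lubridate)

# Placeholder inputs that would come from the report's UI
selected_areas <- c("Area A", "Area B")
date_from <- as.Date("2024-01-15")
date_to   <- as.Date("2024-06-30")

report_data <- country_data %>%                  # data pulled for the selected country
  filter(business_area %in% selected_areas,
         record_date >= date_from,
         record_date <= date_to) %>%
  mutate(period = floor_date(record_date, unit = "week")) %>%   # or unit = "month"
  group_by(period, business_area) %>%
  summarise(amount = sum(amount), .groups = "drop")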
Would what I want to do be feasible in Shiny or R Markdown? If so, would it still load quickly enough to be usable?
I'm writing a simple WordPress plugin for work and am wondering if using the Transients API is practical in this case, or if I should seek out another way.
The plugin's purpose is simple. I'm making a call to USZip Web Service (http://www.webservicex.net/uszip.asmx?op=GetInfoByZIP) to retrieve data. Our sales team is using a Lead Intake sheet that the plugin will run on.
I wanted to reduce the number of API calls, so I thought of setting a transient for each zip code, using the zip as the key and storing the incoming data (city and zip). If the corresponding data for a given zip code already exists, there is no need to make an API call.
Here are my concerns:
1. After a quick search, I realized that transient data is stored in the wp_options table, and storing this data would balloon that table in no time. Would this cause a significant performance issue if the db becomes huge?
2. Is it horrible practice to create this many transient keys? It could easily become thousands in a few months' time.
If using Transient is not the best way, could you please help point me in the right direction? Thanks!
P.S. I opted for the Transients API vs the Options API. I know zip codes don't change often, but they sometimes do, so I set an expiration time of 3 months.
A less-inflated solution would be:
Store a single option called uszip with a serialized array inside the option
Grab the entire array each time and simply check if the zip code exists
If it doesn't exist, grab the data from the API and save the whole array again
You should make sure you don't hit the upper bounds of a serialized array in this table (9,000 elements) considering 43,000 zip codes exist in the US. However, you will most likely have a very localized subset of zip codes.