I have spent the last few months on an important research project. I checked in on it today after not touching it for about two weeks, and the workspace is back to how it was around three months ago: it is missing my most important data frames, tables, etc. The commands are still in the history, though; if I search for a command I used to assign the data (e.g. df = "blah"), it is there. How can I recover my workspace? There were no error messages. My computer died a few times, but I don't know whether that matters, since I had saved the workspace more recently than three months ago.
If I can't recover the workspace, how can I at least view the full history? I have tried saving the history as a .txt file, but it only contains commands up to three months ago, and the History window shows the same shortened list. Yet, again, if I actually search the window, commands more recent than that do appear. I don't know how to fix this.
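In case it helps, this is roughly what I have been running from the R console to try to get things back (the file names here are just placeholders, not my actual files):

```r
# Try to reload the saved workspace image from the project's working directory
# (assuming the project was saved with save.image() / "Save workspace" on exit)
load(".RData")
ls()                             # check which objects actually came back

# Try to pull the saved command history back into the History pane
loadhistory(".Rhistory")

# What I tried for exporting the history to a text file --
# this only captures commands up to ~3 months ago
savehistory("full_history.txt")
```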
Any help is greatly appreciated.
Novice R user here. I'm looking to scrape a large amount of data on daily streaming volumes for songs on Spotify's Top 200 charts, for a research project I am involved with. Basically, I would like to write a script that scrapes all the info for the tracks in the Top 200 on a given day (such as today's chart), and to do this for every day over a number of years, across a number of countries. I previously used code from a guide to scrape this data successfully, but it is no longer working for me.
I previously followed this guide pretty much word for word. While it originally worked, it now returns an empty tibble. I suspect the problem may be that Spotify has redesigned its charts site since my last attempt. The site is different in appearance, but, importantly, the HTML node names appear to be different as well. My hunch is that this is what is causing the issue.
However, I am not at all sure if this is the case. I would greatly appreciate some guidance on what I would need to do differently to achieve my aims, and on whether it is indeed still possible to scrape these charts.
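For context, the code I followed was essentially along these lines (the URL pattern and CSS selectors are my best recollection of what the guide used, and the selectors are almost certainly what has changed with the redesign):

```r
library(rvest)
library(tibble)

# URL pattern and CSS selectors below are from memory of the old guide --
# they may no longer match the redesigned charts site
url  <- "https://spotifycharts.com/regional/us/daily/latest"
page <- read_html(url)

chart <- tibble(
  rank    = html_text(html_nodes(page, ".chart-table-position")),
  track   = html_text(html_nodes(page, ".chart-table-track strong")),
  streams = html_text(html_nodes(page, ".chart-table-streams"))
)

chart  # on the redesigned site this now comes back as an empty (0-row) tibble
```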
Cheers
There don't seem to be many resources available for troubleshooting Google Optimize. I've been using the software for a year now with no issues. Optimize experiments update with new session data once or twice a day. Recently, every time new data comes through, Optimize zeroes out the session data.
The session data looks correct until I click into an experiment. Once I do, the Experiment Sessions no longer add up correctly, even though they were being tracked correctly last night and added up to the total Collected Sessions.
Everything tracks correctly until Google pulls in more data. I'm not sure what's causing this, and I can't get support from their community or support team. Data was collecting fine a week ago. I haven't changed my experiment goals or how the tests are running. The only thing I've changed in that time is how many tests I'm running at the same time: I used to run only 1 test at a time, and I'm now running 3. None of the tested pages or primary goals overlap, though I do have some secondary goals that overlap. Any thoughts?
It looks like this was happening as a result of Google Analytics Goal Events being delayed. It can take 24-48 hours for website event data to come through.
So I am a gamer who loves editing videos and creating montages. I play a game called Apex Legends, and when editing long videos (up to 5 hours long) I follow the same procedure every time: I skip ahead in the video until I see that I got a kill (displayed in the top right of the screen, as seen in the picture below).
I then pause the video and begin the trimming process.
So my question is: is there anything out there that can "read" videos? Assuming there is, would it be possible to write code that reads the screen, identifies a change in the kill count, and then trims, say, 5 seconds before and after that change? I am not asking anyone to write this code for me; I simply want to know whether it's possible (particularly with Python, as that is my strongest language). Thank you all for your time and insight.
The app works fine locally and is basically just for plotting data; very little to no math/processing is going on. The Shiny input fields just filter a data.frame by date and a handful of other attributes (location, observation type, etc.). The daily numerical values originate in .csv files going back to October 2016 and amount to a seemingly manageable ~0.25 GB. The default begin and end dates for the filtering are both in April 2019, and on startup the online app filters (with filter()) and plots that month of data correctly; it also works fine looking back several days or weeks.
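For reference, the filtering and plotting boil down to something like the sketch below (the column names, input IDs, and sample data here are illustrative stand-ins, not the exact contents of app.r):

```r
library(shiny)
library(dplyr)
library(ggplot2)

# Illustrative stand-in for the real ~0.25 GB of daily values read from .csv
dates <- seq(as.Date("2016-10-01"), as.Date("2019-04-30"), by = "day")
dat   <- data.frame(date = dates, location = "site_a",
                    value = cumsum(rnorm(length(dates))))

ui <- fluidPage(
  dateRangeInput("dates", "Date range", start = "2019-04-01", end = "2019-04-30"),
  selectInput("location", "Location", choices = unique(dat$location)),
  plotOutput("ts_plot")
)

server <- function(input, output) {
  # Filter the full data.frame down to the selected date range and location
  filtered <- reactive({
    dat %>%
      filter(date >= input$dates[1],
             date <= input$dates[2],
             location %in% input$location)
  })
  output$ts_plot <- renderPlot({
    ggplot(filtered(), aes(date, value)) + geom_line()
  })
}

shinyApp(ui, server)
```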
However, if I extend the start date back by multiple months or years (say, 10/15/16 to now instead of just the past month), the io version pretty abruptly yields "disconnected from the server". It seems I need to relax a setting or devote more resources to something, but that's just a guess. Any ideas?
If you're feeling extremely helpful or curious, here's the working app in one ~40 MB zipped folder, along with app.r, all the data, and the libraries used (specified in libs.r):
timeseries app
In summary, the app works fine locally, but only for short time-series ranges on io. The full data set goes back to 2004 and is ~2 GB. Ideally I'd like to find a way to have all those years queryable from the io app with decent performance.
We've just set up a Rational Team Concert v3 system. The data was loaded on Friday, but there was an issue connecting to the report data warehouses that was not fixed until today (Monday). We've fixed it, and the data load operations seem to be finishing correctly now.
I'm desperately eager to see a burndown chart - even though I know that in 24 hours we won't really have enough data to make it useful. I'm also eager to see just about any report from the RTC server, as we want to be able to share as much information as possible with the customer, and this is a trial for RTC as a large team tool.
How long should one expect it to take before RTC is able to show reports relating to work items? We've already cached several data updates - but only within the last few hours.
Should we wait 24 hours? 48? Should it show up immediately? I haven't found any good heuristics for this on the Rational site.
You need a few things to happen to get a decent burndown chart in RTC:
- Run the Data Warehouse job (this happens every 24 hours automatically, or you can trigger it manually from the Reports page in the Admin section).
- Get some work done - complete tasks, set Stories to Completed, etc. The burndown is a graph of work done over time.
You should see progress on the chart after the two events above occur.
Another thing to check: is that specific burndown chart set to point to the right project and team?
If that does not work, you may want to raise the question with IBM support (it sounds like something is wrong), or ask on the RTC forum on jazz.net.
Closure - it turns out we had several problems, including:
- incorrect setup of the account syncing between RTC and the data warehouse; we had to both create a new account and set up more privileges for it.
- a truly messed up set of sprints. I don't know what went wrong with the sprints that were first set up (by default!) with the project, but they never synced properly. Moving tasks to a newly created sprint caused them to show up properly in reports (after a sync), but the original sprints were simply broken. Eventual workaround: make new sprints with the same dates and move all assigned stories/tasks to them.
The final answer: the data should show up immediately after a sync. If you think your sync has pulled in new data but you don't see a change in your report, then you have a problem.
Other notes - the data in the selection fields of "edited" reports is based on the data in the warehouse. If you don't see a sprint or release there, it means the report's search criteria aren't finding data in the column you are looking at. Report business logic seems to vary by report; in some cases, not being able to select a sprint (or not having a sprint in the data that matches the "current iteration") will result in empty reports.