write_csv() produces blank cells in MS Excel - R

This isn't a major issue, but I still thought I would ask.
I've been cleaning some data for a project at work, and there's a point in the process where I save all of the individual files I've cleaned as a CSV in long format. I noticed with some of the files that if I open them in Excel, some cells that SHOULD have data appear blank. If I use the "Clear All Formats" option, the data appears. It reads into R just fine and hasn't caused any issues, but I still think it's weird.
Has anyone else run into this, and if so, is there a way to resolve it without going through each column? The files I'm cleaning start out with all sorts of formatting, so I'm curious whether that could be the cause. I thought a CSV doesn't save formatting, though, so I'm a little confused.
Again, it's not the biggest deal, but it's slightly annoying, and I'll get questions about it if my colleagues ever take a look at these files.
The data is proprietary, and I'm not exactly sure how I would share it, but I'm using a pretty straightforward write_csv(data, "path.csv").
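For anyone comparing notes, here is a minimal sketch of the kind of call involved (the data frame and file names are placeholders, not the original data). write_excel_csv() is readr's Excel-oriented variant; it writes a UTF-8 byte order mark, which Excel tends to read more reliably, so it may be worth trying if cells keep displaying oddly.

library(readr)
# Placeholder data standing in for the cleaned long-format files
data <- data.frame(id = c(1, 2, 3), value = c("a", "b", "c"))
# The straightforward call from the question
write_csv(data, "path.csv")
# Excel-friendly variant: same CSV content, written with a UTF-8 byte order mark
write_excel_csv(data, "path_excel.csv")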

I think I figured out the solution to this issue, and I wanted to share in case anyone else runs into this.
I'm using a Windows computer, which needed an update. That got me thinking, and I realized I also needed to update my version of RStudio. I'm not sure what exactly was causing the issue, but when I re-run those files now, it appears to be resolved.

Related

Trying to find a good way to convert HTML to PDF

Hi, how are you? For a while I've been working for a gynecologist, building her a database. For the project I am using Firebase and JavaScript. The database is for her to keep track of her patients, and she keeps reports on each one of them. I am almost done with the job: the UI is almost finished, and the core functionalities of the database (save, delete, retrieve, and update data) are up and running, but I am stuck on one little thing. She asked me for a way to turn the reports she keeps in the database into a format like PDF so she can print them and, if needed, give them to her patients. The thing is that I've tried html2pdf, a Git repository that works in a somewhat clunky way, and I've looked for others, but I still can't find one that works correctly. So I wanted to ask you all if you know of any alternatives. I started thinking about using an Excel or Word document, but either way it seems quite complicated. Thank you for your time.
Best to all.

HTK ERROR [+5010] InitSource: Cannot open source file f-ihm+k

I believe that this error has something to do with a mismatch between my tiedlist and the hmmdefs (as pointed out here: http://www.ling.ohio-state.edu/~bromberg/htk_problems.html), but I cannot seem to solve it. All of the triphones in my corpus are present in my triphones1 list, and triphones1 only contains monophones, biphones, and triphones from my corpus.
If I take said triphone out of the triphones1 list and recreate the tiedlist, it passes but then complains about another triphone further down the road. Obviously, manually taking out all of these triphones would take me years and doesn't seem efficient, which leads me to believe that I have missed something further back.
It is also important to note that all these triphones generating errors are in my corpus as well. To me this error would only make sense if I had unseen triphones somewhere, but where? I feel that I have left no stone unturned but surely someone can give me a fresh idea of where to look.
There was an extra AU command at the end of the tree.hed file. This was causing it to try to open another file after the tiedlist. I am not sure why this causes an issue when it has already accessed the tiedlist, but there you go.
Hopefully this serves as an extra check for future HTK users.

R knitr: is it possible to use cached results across different machines?

Issue solved, see answers for details.
I would like to run some code (with knitr) on a more powerful server and then maybe have the possibility of making small changes on my own laptop. Even when copying across the entire folder, it seems that the cache is rebuilt when re-compiling locally. Is there a way to avoid that and actually use the results in the cache?
Update: the problem arose from different versions of knitr on different machines.
In theory, yes: if you do not change anything, the cache will be kept. In practice, you have to check carefully what the "small changes" are. The documentation page for cache explains when the cache will be rebuilt, and you need to check whether all three conditions are met.
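As a rough illustration of keeping the cache in a predictable place (my own sketch, not from the original answer; the directory name is just an example), you can pin the cache path explicitly so a copied project folder carries its cache with it:

library(knitr)
# Keep cached chunk results in a fixed, project-relative folder so the
# cache travels with the project when the folder is copied between machines
opts_chunk$set(cache = TRUE, cache.path = "cache/")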
I wonder whether, in addition to Yihui's answer, the process of copying from one machine to another changes the datetimes on the files so that they look out of date even when nothing has changed.
Look at the dates on the files involved after copying. If you can figure out which files need to be newer than others, then touching them may prevent the rebuilding; a small sketch of that follows below.
Another option would be to just paste in the cached pieces directly so that they are not rerun (though that means you have to rerun and repaste manually if you change anything in those parts).
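On the touching idea above, a rough sketch (mine, not from the thread) of bumping the modification times of copied cache files from within R; the cache directory path is only a placeholder:

# Refresh modification times on copied cache files so they do not look
# stale after the copy (the equivalent of "touch" from within R)
cache_files <- list.files("cache", full.names = TRUE, recursive = TRUE)
for (f in cache_files) Sys.setFileTime(f, Sys.time())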

Subversion: "svn update" loses CSS data

Recently, I've noticed strange behavior by Subversion. Occasionally, and seemingly randomly, the "svn up" command will wreak havoc on my CSS files. 99% of the time it works fine, but when it goes bad, it's pretty damn terrible.
Instead of noting a conflict as it should, Subversion appears to be trashing all incoming conflict lines and reporting a successful merge. This results in massively inconvenient manual merges because the incoming changes effectively disappear unless they're manually placed back into the file.
I would have believed this was a case of user error, but I just watched it happen. We have two designers that frequently work on the same CSS files, but both are familiar and proficient with conflict resolution.
As near as I can figure, this happens when both designers have a large number of changes to check in and one beats the other to the punch. Is it possible that this is somehow confusing SVN's merging algorithm?
Any experience or helpful anecdotes dealing with this type of behavior from SVN are welcome.
If you can find a diff/merge program that's better at detecting the minimal changes in files of this structure, use the --diff-cmd option to svn update to invoke it.
It may be tedious, but you can check the changes in the CSS file by using
svn diff -r 100:101 filename/url
for example, and stepping back from your HEAD revision. This should show what changes were made, at what revision, and by whom. It sounds like a merging issue I've had before, but unfortunately I found myself resolving it by looking at previous revisions and merging them manually too.

Copying and pasting between two separate programs using Automator in Mac OS X

OK, so I have an Excel spreadsheet that contains data I would like to copy directly into an SQLite db using Menial Base, a db editor. I have tried a number of different methods, such as converting to .csv and .txt extensions, and nothing is working the way I need it to, so I am now resorting to Automator. From what I understand, Automator is a very powerful application; I just don't have any clue how to get it to do what I need, or whether it's even capable of it. All I need it to do is copy a cell from Excel, command-tab over to Base, paste it into a cell, go back to Excel, press down and copy the next value, then go back over to Base, press down, and paste. Then repeat and repeat and repeat a thousand times. It's not overly complicated, but I was wondering if anyone out there knows whether this sort of thing is possible in Automator. Would I maybe need to write my own AppleScript or something? Any thoughts or insights would be greatly appreciated! Thanks
I don't know exactly how you might go about doing the copy and paste part, but it looks as though using an AppleScript is going to be your best bet. See here. Hope that helps.
