When I try to open my .ipynb, I get the following error. Other notebooks in the same directory work just fine.
UI Error:
Close without saving?
File "cloner.ipynb" has unsaved changes, close without saving?Close without saving?
File "cloner.ipynb" has unsaved changes, close without saving?
Browser console errors:
serverconnection.js:192 GET http://localhost:8890/api/contents/cloner.ipynb?type=notebook&content=1&1586015957921 400 (Bad Request)
handleRequest # serverconnection.js:192
makeRequest # serverconnection.js:76
get # index.js:477
get # index.js:170
(anonymous) # context.js:498
Promise.then (async)
_revert # context.js:497
initialize # context.js:190
(anonymous) # manager.js:445
Promise.then (async)
_createOrOpenDocument # manager.js:445
open # manager.js:274
openOrReveal # manager.js:298
_handleOpen # listing.js:824
_evtDblClick # listing.js:900
handleEvent # listing.js:543
context.js:582 Uncaught (in promise) TypeError: Failed to execute 'text' on 'Response': body stream is locked
at Context._handleError (context.js:582)
at context.js:540
_handleError # context.js:582
(anonymous) # context.js:540
async function (async)
_handleError # context.js:582
(anonymous) # context.js:540
async function (async)
(anonymous) # context.js:536
Promise.catch (async)
_revert # context.js:533
initialize # context.js:190
(anonymous) # manager.js:445
Promise.then (async)
_createOrOpenDocument # manager.js:445
open # manager.js:274
openOrReveal # manager.js:298
_handleOpen # listing.js:824
_evtDblClick # listing.js:900
handleEvent # listing.js:543
Pip-managed package versions:
Name: jupyter
Version: 1.0.0
Summary: Jupyter metapackage. Install all the Jupyter components in one go.
Name: jupyterlab
Version: 2.0.1
Summary: The JupyterLab notebook server extension.
I tried deleting my .ipynb_checkpoints folder, and am tracking all this stuff in git.
Looking at the Jupyter logs, I can see that this error arises because the nbsignatures.db database is locked. https://www.onooks.com/jupyter-fails-to-open-notebook-with-error-file-xx-has-unsaved-changes-close-without-saving/
Looks like a git merge conflict added non-JSON lines to my file.
Looking inside the cloner.ipynb source file with a text editor:
<<<<<<< HEAD
},
{
<CONTENTS OF MY CELL>
]
=======
>>>>>>> a23f8f8f9db0974b7de90c6e7ed8599fa04d53cc
Deleted those non-JSON lines and I'm back in business.
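For anyone hitting the same thing: a quick way to confirm the notebook is valid JSON again (and to locate leftover conflict markers or a stray trailing comma) is to run it through a JSON parser. A minimal sketch in Python, assuming the file is named cloner.ipynb as above:
import json
# Try to parse the notebook; leftover merge-conflict markers or a stray
# trailing comma make it invalid JSON and raise a JSONDecodeError that
# reports the offending line and column.
with open("cloner.ipynb", encoding="utf-8") as f:
    try:
        json.load(f)
        print("cloner.ipynb is valid JSON")
    except json.JSONDecodeError as err:
        print(f"Invalid JSON at line {err.lineno}, column {err.colno}: {err.msg}")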
In my case, jupytext kept me from opening the .ipynb file because I had edited the .ipynb file with another editor and not the corresponding Python file.
So I executed rm CORRESPONDING.py, and then I could open the .ipynb file.
I got the same message, and after a lot of trial and error,
https://github.com/jupyterlab/jupyterlab/issues/3706
gave the correct hint for my case:
The reason was a line-ending detail: when editing the notebook in an external editor, I had left a superfluous comma at the end of the last source line of a code cell, i.e. \n", instead of the correct \n", which makes the notebook invalid JSON.
Related
I have null-ls set up with nvim and a mypy diagnostic in my sources that I run on save:
null_ls.builtins.diagnostics.mypy.with({
    method = null_ls.methods.DIAGNOSTICS_ON_SAVE,
}),
But as soon as I do, for example, nvim main.py, I get this error
[null-ls] failed to run generator: ...t/null-ls.nvim/lua/null-ls/helpers/generator_factory.lua:218: error in generator output: mypy: can't read file '/Users/me/main.py': No such file or directory
The error disappears if I save the file and go back inside it.
Is the diagnostic running as soon as I open the file, even though I have it on save?
By the way, I don't get this error at all with black or flake8, which I also have configured in sources.
The following frequently fails in RStudio, but I can't figure out exactly when or why...
reticulate::use_virtualenv("myrpyenv")
pa<-reticulate::import("pyarrow") # this fails
pq<-reticulate::import("pyarrow.parquet") # this fails
pc<-reticulate::import("pyarrow.compute") # this fails
ds<-reticulate::import("pyarrow.dataset") # this fails
adlfs<-reticulate::import("adlfs")$AzureBlobFileSystem #this works
abfs<-adlfs(connection_string=Sys.getenv("Synblob")) #this works
The failure is Error in py_module_import(module, convert = convert) : ImportError: DLL load failed while importing lib: The specified procedure could not be found.
Clearing the environment and restarting R is not enough to make it work; I need to spawn a new session. It always works in Rgui (as opposed to RStudio).
I've unchecked "Restore .RData into workspace at startup" and changed "Save workspace to .RData on exit" to "Never".
Something seems to be hanging around that breaks it, but I can't figure out what.
I am using tryCatchLog and want to send all warnings and error messages to a log file. I do not want any output to the console.
In the documentation for tryCatchLog I came across this code snippet:
library(futile.logger)
# log to a file (not the console which is the default target of futile.logger).
# You could also redirect console output into a file if you start your R script with a shell script using Rscript!
flog.appender(appender.file("my_app.log"))
The vignette for tryCatchLog includes the following code snippet to change the logging behavior and send errors to a file instead of the console:
library(futile.logger)
flog.appender(appender.file("app.log"))
flog.threshold(ERROR) # TRACE, DEBUG, INFO, WARN, ERROR, FATAL
try(log(-1)) # the warning will not be logged!
This suggests to me that I can simply redirect the messages to a log file using flog.appender(appender.file()). However, instead of writing to a file I get the following output on the console:
NULL
NULL
Warning in log(-1) : NaNs produced
[1] NaN
The vignette for tryCatchLog provides this code example in the Best Practice section:
library(futile.logger)
library(tryCatchLog)
options(keep.source = TRUE) # source code file name and line number tracking
options("tryCatchLog.write.error.dump.file" = TRUE) # dump for post-mortem analysis
flog.appender(appender.file("my_app.log")) # to log into a file instead of console
flog.threshold(INFO) # TRACE, DEBUG, INFO, WARN, ERROR, FATAL
tryCatchLog(source("your_main_script.R"))
Adapting the last line of code to tryCatchLog(log("this will produce an error")) to simplify the example, I still get the output on the console instead of in a log file:
NULL
NULL
Error in log("this will produce an error") : non-numeric argument to mathematical function
Looking at the documentation of futile.logger and examples on Stack Overflow also did not help me. Based on them, I thought the following should write the error message to a file. I am using try() as a stand-in for more involved versions of tryCatchLog() to make sure that the issue is not with tryCatchLog itself.
library(futile.logger)
flog.appender(appender.file(file.path(getwd(),'logs.txt')))
flog.threshold(WARN) # TRACE, DEBUG, INFO, WARN, ERROR, FATAL
try(log(-1)) # this will create a warning
try(log("this will create an error")) # this will create an error
The command neither creates a log file nor appends to an existing one.
Instead, flog.appender() and flog.threshold() return NULL (to the console), and the warning and error messages are also printed to the console. Presumably, I am missing something when linking the logger to the file (hence the NULL return value?).
How can I redirect all warnings and errors caught by tryCatchLog to a file (with futile.logger) without ANY output to the console?
The output of try() has nothing to do with futile.logger.
If you configure a file appender in futile.logger you still have to write your log output using the flog.* functions (that's how logging works):
library(futile.logger)
flog.info("this is an info log output")
# INFO [2021-12-08 19:51:13] this is an info log output
flog.warn("this is an warning log output")
# WARN [2021-12-08 19:51:14] this is an warning log output
flog.error("this is an error log output")
# ERROR [2021-12-08 19:51:14] this is an error log output
I guess what you want is essentially a redirection of the standard or error output (e.g. via capture.output), but that approach does not add logging information such as a timestamp and does not support severity levels ("info", "warn", "error", ...).
If you want to use try*() in combination with logging, you can use the CRAN package tryCatchLog, which does exactly what you want (and supports futile.logger too):
library(tryCatchLog)
tryLog(log(-1)) # this will create a warning
tryLog(log("this will create an error")) # this will create an error
I have an internal repository on Gitlab.
In my R script I want to source an .R file from that internal repository.
Normally I can source a publicly available R script with the following code
source("public-path-to-file")
But when I try to do that with a private file I get:
Error in source("") :
:1:1: unexpected '<'
1: <
^
I found a hacky way to do it that at least gets things done:
First you need to create a private token with API access. Then you can call GitLab's API directly to GET the file.
Code:
cmd <- "curl -s --header 'PRIVATE-TOKEN: <your private token for gitlab with api access>' '<full gitlab link of the raw file that you want to source>'" # you directly access the API with your private token
output <- system(cmd, intern=TRUE) # capturing the output of the system call
writeLines(output, "functions.R") # creating a temporary file which contains the output of the API call
source("functions.R") # Sourcing the file the usual way
file.remove("functions.R") # removing the temporary file for housekeeping
Explanation:
We call the API directly with a GET request, using curl as the underlying command.
Then we run that command from R with system(), capturing the result.
Finally, we write the output to a temporary file, source it as we normally would, and remove the file afterwards. Hope this helps someone.
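As a side note, the same GET request can also be made from R itself with the httr package instead of shelling out to curl. A sketch under the same assumptions about the token and the raw-file URL (placeholders as above):
library(httr)
# GET the raw file from the GitLab API, authenticating via the private token header
resp <- GET(
  "<full gitlab link of the raw file that you want to source>",
  add_headers("PRIVATE-TOKEN" = "<your private token for gitlab with api access>")
)
stop_for_status(resp)                      # fail early on HTTP errors (e.g. a bad token)
tmp <- tempfile(fileext = ".R")            # temporary file instead of a fixed functions.R
writeLines(content(resp, as = "text", encoding = "UTF-8"), tmp)
source(tmp)
file.remove(tmp)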
I'm running a small Julia program using PyPlot in JuliaBox (IJulia Notebook), but it errors out with the error message listed below. I'm not sure if it's trying to write to my machine's disk, but I have valid read+write access there. Basically I'm trying out the examples mentioned here: https://www.juliabox.org/notebooks/tutorial/Plotting%20in%20Julia.ipynb#
LoadError: unlink: read-only file system (EROFS)
Pkg.add("PyPlot")
using PyPlot
for i = 1.0:300.0
for j = 1.0+i:250.0, k=1.0:10
plot(i+j, i*k/j, color="red", linewidth=1.0, linestyle="--")
i += 0.1
j += 0.05
k += 0.01
end
end
Error log:
INFO: Nothing to be done
INFO: Precompiling module PyPlot...
INFO: Recompiling stale cache file /opt/julia_packages/.julia/lib/v0.4/Compat.ji for module Compat.
ERROR: LoadError: unlink: read-only file system (EROFS)
in unlink at fs.jl:102
in rm at file.jl:59
in create_expr_cache at loading.jl:330
in recompile_stale at loading.jl:461
in _require_from_serialized at loading.jl:83
in _require_from_serialized at ./loading.jl:109
in require at ./loading.jl:219
in include at ./boot.jl:261
in include_from_node1 at ./loading.jl:304
[inlined code] from none:2
in anonymous at no file:0
in process_options at ./client.jl:257
in _start at ./client.jl:378
while loading /home/juser/.julia/v0.4/PyCall/src/PyCall.jl, in expression starting on line 26
ERROR: LoadError: Failed to precompile PyCall to /home/juser/.julia/lib/v0.4/PyCall.ji
in error at ./error.jl:21
in compilecache at loading.jl:384
in require at ./loading.jl:224
in include at ./boot.jl:261
in include_from_node1 at ./loading.jl:304
[inlined code] from none:2
in anonymous at no file:0
in process_options at ./client.jl:257
in _start at ./client.jl:378
while loading /home/juser/.julia/v0.4/PyPlot/src/PyPlot.jl, in expression starting on line 5
LoadError: Failed to precompile PyPlot to /home/juser/.julia/lib/v0.4/PyPlot.ji
while loading In[10], in expression starting on line 2
in error at ./error.jl:21
in compilecache at loading.jl:384
in require at ./loading.jl:250
If I use version 0.3.12 (IJulia Notebook), it compiles and shows INFO: Nothing to be done, but doesn't show any output (a graphics plot, etc.).
Thanks to ali_m. Here's a summary of what that post says.
The problem seems to be that JuliaBox ships some precompiled cache files in a read-only directory /opt/julia_packages/.julia/lib/v0.4. If at some point it detects that the cache is stale and tries to recompile it, it fails.
This needs to be fixed in Julia itself—it shouldn't try to delete cache files from a read-only directory when recompiling.
Issue link is https://github.com/JuliaLang/julia/issues/14368
To work around it on 0.4.2 only, add the following line to your .juliarc.jl file on JuliaBox; it removes the third entry (the read-only cache directory) from the Base.LOAD_CACHE_PATH array so that it is no longer on the search path. Or just run it manually before typing using Compat etc. to rebuild the cache without using the read-only search path.
splice!(Base.LOAD_CACHE_PATH, 3)
A nice example of the splice! function:
(PS: by convention, a Julia function whose name ends with ! not only does its work but also modifies its arguments' data/values.)
# Remove elements from an array by index with splice!
arr = [3,4,5]
splice!(arr,2) # => 4 ; arr is now [3,5]
The suggested workaround works for 0.4.2 (using echo 'splice!(Base.LOAD_CACHE_PATH, 3)' > ~/.juliarc.jl to write the line into .juliarc.jl), but apparently LOAD_CACHE_PATH isn't defined while launching Julia 0.3.12, so this would fail there.
Adding the following line to the same file instead fixed the issue (it adds a condition so the line only runs when the Julia version is 0.4 or above). I didn't see this issue in the 0.5 development version, so we are good there.
VERSION >= v"0.4" && splice!(Base.LOAD_CACHE_PATH, 3)