Error dfx start - using default definition for the "local" shared network - networking

Does anyone know of a way to resolve the dfx start issue without having to lose the latest version of dfx?
I had the following problem when running dfx start:
"Error: using default definition for the "local" shared network because /home/****/.config/dfx/networks.json does not exist"
The only solution I found was:
Install dfx version 0.9.3 (previously I had the most recent one).
After restarting, run dfx start --clean.
That resolved the startup problem.
But I lost the latest dfx version, which is supposed to be the most stable.
Does anyone know a way to resolve this issue and keep the latest version of DFX?
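If the complaint is only that the shared-network definition file is missing, a lighter-weight fix than downgrading may be to create that file yourself. This is a sketch based on the wording of the error message; the exact path and the default bind port (4943 here) are assumptions you should check against the dfx documentation for your version:

```shell
# Create the shared "local" network definition that dfx says is missing.
# Assumption: dfx looks for it at ~/.config/dfx/networks.json.
mkdir -p "$HOME/.config/dfx"
cat > "$HOME/.config/dfx/networks.json" <<'EOF'
{
  "local": {
    "bind": "127.0.0.1:4943",
    "type": "ephemeral"
  }
}
EOF
```

After creating the file, dfx start --clean should pick up the explicit definition instead of falling back to the default.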

Related

mypyc, KeyError: '__file__'

I have used mypyc successfully in my project, and it performed well until just a couple of days ago. I now get the following error:
File "mypyc/__main__.py", line 18, in <module>
KeyError: '__file__'
Line 18 above, i.e., the line that is failing, is just
base_path = os.path.join(os.path.dirname(__file__), '..')
which I wouldn't expect to fail. I am in my venv virtualenv when I execute mypyc using the same command as has always worked before.
I thought perhaps a regression was introduced in mypyc so I used git to go back in time to see if that line had changed in any recent version of mypy, but it hadn't.
I also tried downgrading mypy to an older version that worked before, but that version also failed with the same error. To be sure the problem wasn't being experienced by others, I checked the issues at the mypy repo on GitHub and searched for __file__ to see if that part of the error message showed up, and it didn't. Perhaps it is some weird issue with my environment?
I experience the issue with venv virtualenvs created with Python 3.10, 3.10.1, and 3.9.9. It worked fine on Python 3.10 before. Any ideas on what to investigate next?
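One clue worth noting: plain Python raises NameError, not KeyError, for an undefined __file__. A KeyError at that line is consistent with the module running as compiled code (mypy ships mypyc-compiled wheels), where a module-level name is read via a plain dict lookup. A small sketch of the two error classes (this illustrates the difference only, not mypyc internals):

```python
# Plain Python raises NameError when __file__ is missing from the
# exec'd globals; a raw dict lookup (which is roughly what a compiled
# globals read amounts to) raises KeyError instead.
import os

globs = {"os": os}  # deliberately no "__file__" key
try:
    exec("base_path = os.path.join(os.path.dirname(__file__), '..')", globs)
except NameError as e:
    plain = type(e).__name__

try:
    globs["__file__"]
except KeyError as e:
    compiled_style = type(e).__name__

print(plain, compiled_style)  # NameError KeyError
```

If that is what is happening, the trigger is likely environmental (how mypyc's __main__ ends up being launched without __file__), which would fit the observation that older mypy versions now fail the same way.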

R session abort when I use assignTaxonomy

I have been having this problem for more than a week now, and I am running out of time and patience. It occurs both when I run my script on a Mac and when I run it on a PC (more RAM makes no difference in the result; it just aborts faster). When I try to run this line on my dataset, the session aborts.
set.seed(119)
tax_PR2 <- assignTaxonomy(seqtab,
                          "~/Desktop/Documents/Bruts/aeDNA_data_shared/pr2_version_4.11.1_dada2.fasta",
                          multithread=TRUE)
Does anyone have any idea what the problem is? I verified my dataset (seqtab is currently seen by R as a large matrix of 3930724 elements, 20.2 MB), I verified the space I have on my computer, I have all the packages needed to run this line of code, and I tried different sources of the PR2 reference database (PR2 version 4.11.1, 4.12.0, etc.), always with the same result.
If you have any ideas I would appreciate them. I hope the information I gave is sufficient.
Packages installed:
library(BiocManager)
library(Rcpp)
library(dada2)
library(ff)
library(ggplot2)
library(gridExtra)
library(phyloseq)
library(vegan)
This is probably caused by a bug that was introduced in 1.14; see the GitHub issue here for more information: https://github.com/benjjneb/dada2/issues/916
We've just identified the cause, and a fix should be out soon. For immediate use, the workaround is to turn off multithreading, or to revert to the previous release, 1.12.
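The workaround above can be applied directly to the question's own call; this sketch simply reruns it with multithreading turned off (the file path and seqtab object are the asker's):

```r
set.seed(119)
# Workaround for the 1.14 multithreading bug: run single-threaded.
# Slower, but avoids the session abort until the fix is released.
tax_PR2 <- assignTaxonomy(seqtab,
                          "~/Desktop/Documents/Bruts/aeDNA_data_shared/pr2_version_4.11.1_dada2.fasta",
                          multithread = FALSE)
```

Alternatively, revert to the previous dada2 release (1.12), as the answer suggests.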

Why do I get the following error using pipe operator in R?

How do I resolve the following error:
Error in dots_values(...) : object 'rlang_dots_values' not found
I'm getting this error when executing a statement like this:
dataset <- dataset %>% mutate(col_name1 = ifelse(col_name2 > 201952, 0.875, col_name1))
Note: I have already tried to update and reinstall rlang, rjava, dplyr, magrittr, along with my system's JDK.
Apparently this is mostly caused by a security limitation on your workstation: R can only load packages installed under the path C:\R\R-[R version]\library.
You can either contact your administrator to change the privileges on your workstation, or install packages only to C:\R\R-[R version]\library.
Reference: https://community.rstudio.com/t/error-object-rlang-dots-list-not-found/10555/3
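To check whether this library-path restriction is what's biting you, you can compare where rlang actually got installed with the library paths R searches; the Windows path below is illustrative only (substitute your actual R version):

```r
# Which libraries does R search, in order?
.libPaths()

# Where is rlang actually installed?
find.package("rlang")

# If rlang sits outside the permitted library, reinstall it there.
# The path is illustrative; substitute your actual R version.
install.packages("rlang", lib = "C:/R/R-3.5.1/library")
```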

Neo4j spatial server plugin fails on withinDistance and closest java.lang.NoClassDefFoundError

I know that the plugin is being loaded properly, as other methods work such as spatial.procedures and spatial.addNode etc.
The error results after a call like this:
CALL spatial.withinDistance('profile_geo', [43.524, 96.7341], 500)
and the error that results is this:
Failed to invoke procedure `spatial.withinDistance`: Caused by: java.lang.NoClassDefFoundError: org/neo4j/cypher/internal/compiler/v3_0/commands/expressions/GeographicPoint
The same error appears when trying to use the closest function as well. Any help would be appreciated.
It looks like you are missing the required JAR for the GeographicPoint class.
Please make sure you have this class in your JARs. I know the class exists in neo4j-cypher-compiler-3.0-3.0.3.jar, but that one is not going to work for you, as the class resides in a different namespace there. If you cannot pinpoint the corresponding JAR in your environment, have a look at the Maven repository and try to find it there.
short answer: upgrade to 3.0.3 (both neo4j and plugin)
long answer:
I ran into the exact same issue today. I was running version 3.0.2 with server plugin version 3.0.2 and ran the cypher query:
CALL spatial.withinDistance("spatial_records",{lon:20.0,lat:50.0},100000000)
Joran mentioned in the comments above that the REST API was a working alternative. So I tried that out and found he was indeed correct.
I tested this using httpie, with the following command:
cat tmp.json | http :7474/db/data/ext/SpatialPlugin/graphdb/findGeometriesWithinDistance
where tmp.json looks like:
{"layer" : "spatial_records","pointX" :3.9706,"pointY" : 46.7907,"distanceInKm" :10000000000}
While this works, using CYPHER with stored procedures would be nice. So upon further investigation, I noticed that a recent commit contained the following changes:
- <neo4j.version>3.0.1</neo4j.version>
+ <neo4j.version>3.0.3</neo4j.version>
...
-import org.neo4j.cypher.internal.compiler.v3_0.commands.expressions.GeographicPoint;
+import org.neo4j.cypher.internal.compiler.v3_0.GeographicPoint;
So I ended up downloading version 3.0.3 of both neo4j and the spatial plugin. Whatever the issue was before, seems to be fixed in this version. The call to the stored procedure now works as expected!

Error in fetch(key) : lazy-load database

I don't know what is going on, everything was working great but suddenly I started to have this error message on the documentation:
Error in fetch(key) : lazy-load database '......descopl.rdb' is corrupt
I removed almost all my code and rebuilt, then published to GitHub again, but when I use the other laptop to download the package, it downloads and loads, yet I can't call any of the functions, and the documentation shows that error.
I don't know what caused the problem. I am using roxygen to generate the documentation.
https://github.com/WilliamKinaan/descopl
It seems that the error arises when the package cannot be decompressed by R (as @rawr established, it is corrupt). These solutions have worked for me:
1) Check for possible errors in the creation of the .rdb files
2) Try restarting your R session (e.g. .rs.restartR() if in RStudio)
3) The package might already be installed on your computer (even though it does not work). Remove it using remove.packages()
I have had this problem with roxygen2 as well. Couldn't see any problem with any of my functions. In the end deleting the .rdb file and then getting roxygen2 to rebuild it seemed to solve the problem.
I think the explanation for what is causing this is here.
It's related to devtools.
Per @Zfunk
cd ~/Rlibs/descopl/help
rm *.rdb
Restart R. Look at the help for the package again. Fixed!
I received this error after reinstalling a library while another R session was running.
Simply restarting the existing R session(s) solved it for me (i.e. running .rs.restartR() to restart the sessions).
If you are using RStudio:
1) Ctrl+Shift+F10 to restart the R session
2) Tools -> Check for Package Updates -> update all packages
3) library(ggmap)
Problem solved.
Basically all answers require restarting R to resolve the issue, but I found myself in an environment where I really didn't want to restart R.
I am posting here a somewhat hack-ish solution suggested by Jim Hester in a bug report about the lazy-load corruption issue.
The gist of it is that the package may have some vestigial S3 methods listed in session's .__S3MethodsTable__. environment. I don't have a very systematic way of identifying which S3 methods in that environment come from where, but I think a good place to start is the print methods, and looking for S3method registrations in the package's NAMESPACE.
You can then remove those S3 methods from the .__S3MethodsTable__. environment and try again, e.g.
rm(list="print.object", envir = get(".__S3MethodsTable__.", envir = baseenv()))
You may also need to unload some DLLs if some new messages come up like
no such symbol glue_ in package /usr/local/lib/R/site-library/glue/libs/glue.so
You can check getLoadedDLLs() to see which such files are loaded in your session. In the case of glue here, the following resolved the issue:
library.dynam.unload('glue', '/usr/local/lib/R/site-library/glue')
I got this error in RStudio on macOS; updating all the packages and restarting the R session did the trick.