Win 7 64-bit slow file access - Aptana

All my projects reside on a network drive (e.g. H:).
With Windows XP everything worked fine; after moving to Windows 7 64-bit, every I/O operation has become about 10 times slower.
Refreshing the workspace takes several minutes.
Any suggestions?

Related

foreach..dopar runs significantly slower on Docker vs local laptop

I am currently running into a weird issue: foreach...%dopar% runs about 10x faster on my local laptop (a Dell running Windows 10) with 15 cores than when the same R code is run in a Docker container with 8 cores. The code itself only sets the ncores parameter to 3, so I am puzzled by such a drastic difference in runtime. Has anyone run into a similar issue with the doParallel package in Docker? If so, how did you resolve it?
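For reference, a minimal sketch of the kind of setup described above; the loop body is a hypothetical placeholder, the relevant part being that 3 workers are registered with doParallel and the loop runs with %dopar%:
library(foreach)
library(doParallel)

ncores <- 3                # the question fixes the worker count at 3
cl <- makeCluster(ncores)  # PSOCK workers; same mechanism on Windows and in a Linux container
registerDoParallel(cl)

result <- foreach(i = 1:100, .combine = c) %dopar% {
  sqrt(i)                  # placeholder work
}

stopCluster(cl)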

Docker on Windows 11 and WSL2 super slow with WordPress page

Fresh install of Windows 11 and Docker, on an i7 with 16 GB of RAM. I am developing a WordPress site, and it is super slow when running with Docker: opening a page takes about 10 seconds.
I already tried creating a .wslconfig file with the following content:
[wsl2]
memory=6000MB #Limits VM memory in WSL 2 to 6000MB
processors=4 #Makes the WSL 2 VM use four virtual processors
Same result, although vmmem now uses less memory; it is still super slow.
What else can I do?
Would turning WSL2 off help? Shouldn't WSL2 give better performance?
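In case it is relevant, a quick note on how that file gets picked up (assuming it was saved as %UserProfile%\.wslconfig, which is where WSL 2 looks for it): the settings are only re-read after the WSL VM is restarted, for example with
wsl --shutdown
followed by restarting Docker Desktop so it brings the WSL 2 backend back up.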

RStudio Connect disconnects from server before app can load

I am running an application that takes quite a while to load. I have to load 7 GB of data, which takes the app about 220 seconds when run locally. However, the deployed app disconnects from the server after roughly 120 seconds, before it can finish loading.
I don't know exactly what I can post here, since the log doesn't show anything. If there is anywhere I can grab more information from, or if this is a known issue that can be easily solved, I would love to know!
Are you using shinyapps.io? The free tier only allows 1 GB of RAM; loading 7 GB of data will definitely crash the server.

CorDapp tutorial crashing in a Fedora VirtualBox machine

I have downloaded the CorDapp example provided on the Corda website. I followed all the steps (to run it from the console) in
https://docs.corda.net/tutorial-cordapp.html
without any problem until "Running the example CorDapp". Here I get errors one way or another.
First, when running
workflows-kotlin/build/nodes/runnodes
one or more of the nodes would not start. I was using a virtual machine with 2 cores and 4 GB of RAM. Eventually I noticed it seemed to be an issue with the RAM, so I changed the VM config to 4 CPUs and 10 GB of RAM.
Now I can run
workflows-kotlin/build/nodes/runnodes
and get all 4 nodes working, but as soon as I run the following instruction
./gradlew runPartyXServer
where X = [A, B, C] for each of the possible nodes, after 20-30 seconds at most the machine suddenly slows down and aborts.
The VM has Fedora 30, 4 cores and 10 GB of RAM. It is empty except for what I downloaded for the tutorial. I cannot believe those are not enough resources to run the tutorial. Am I wrong? Do I need more? Could it be something else?
Any help is welcome.
== Solved ==
The issue was the resources. I jumped to 8 cores and 32 GB and it ran. I will try at some point with 16 GB. In any case, the problem, from my point of view, is that given such large hardware requirements, the tutorial should include a section describing the minimum setup needed to run it.
From the given information, I believe you ran into a memory issue.
According to our documentation, Corda has a suggested minimum requirement of 1 GB of heap and 2-3 GB of host RAM per node.
https://docs.corda.net/docs/corda-enterprise/4.4/node/sizing-and-performance.html#sizing
I would suggest either reducing the number of nodes hosted on a single machine or expanding the RAM of the VM.
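As a rough back-of-the-envelope check against those figures: the tutorial starts 4 nodes (the notary plus PartyA, PartyB and PartyC), so 4 × 2-3 GB of host RAM comes to roughly 8-12 GB for the nodes alone, before counting Fedora itself, the Gradle daemon and the webservers started by runPartyXServer. A 10 GB VM therefore sits right at the limit, which matches what you observed.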

IIS 7, Classic Application Pool, 100% CPU Usage Problem

We had IIS 6 on Windows 2003 and upgraded to Windows 2008. Our app is the same: basically a simple file server that resizes images, caches them and delivers them to clients. Since resizing needs more memory, running under the Integrated pipeline pool caused out-of-memory errors, so we went back to Classic mode. There is no other website, only this one, and we are happy with it.
This morning I saw the website was down. I checked the server's CPU usage and it showed 100% CPU usage by w3wp.exe. We never had this problem before; the code is the same we used on the old IIS 6, and it is simple database reads and Response.Write.
Restarting the server solved the issue, but if I get the same problem again, how can I check which part of our website's code used such a huge amount of CPU, when there is absolutely no error log and no Event Viewer error either?
The code used on the website is hardly a few lines: a typical DAL query to the database and Response.Write, that's all. Files are stored as blobs in the database, but that has nothing to do with it, because it ran successfully for 3 years with the same SQL Server. The only change is IIS 7 with its Classic application pool versus IIS 6 with the default app pool.
I would appreciate any tool or any way to at least monitor what caused this problem. We have had Windows 2008 running for the last 30 days and we have only gotten this error once.
In our case, since we have 4 processors, we increased the number of worker processes to 4; so far it is working well compared to before.
Here is a snapshot: http://pic.gd/c3661a
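For reference, a sketch of one way to make that change from the command line with appcmd; the pool name "MyClassicPool" is hypothetical, so substitute your own:
%windir%\system32\inetsrv\appcmd.exe set apppool "MyClassicPool" /processModel.maxProcesses:4
This is the same "Maximum Worker Processes" value exposed in the application pool's Advanced Settings in IIS Manager.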
