Does Quartus support in-memory synthesis?

I'm working on a project that generates a large number of components. I'm having the problem that Quartus is generating an extremely large number of files in the /db directory, on the order of hundreds of thousands.
The system I am working on has limited storage that is also very slow. Just deleting the db folder takes over 20 minutes, and for this project I have to do many separate builds, so it's a significant bottleneck.
Does Quartus support keeping the db archive in RAM during synthesis?
Vivado has the -in-memory option for the create_project command. Is there a Quartus equivalent? I've looked through the "Quartus II Scripting Reference Manual" and found nothing yet.
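For reference, the Vivado Tcl command I mean looks like this (the part number is just a placeholder; note that Vivado spells the option with an underscore):
create_project -in_memory -part xc7a100tcsg324-1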
Quartus version is 19.1
Thank you.

I was not able to find any option similar to -in-memory.
However, I'm working on a Linux system, so by placing the build directory in tmpfs (a RAM-backed file system) I was able to get a significant improvement in performance.
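A minimal sketch of what this looks like (the paths, the tmpfs size, and the quartus_sh flow invocation are just examples; adapt them to your project):

$ mkdir -p /mnt/ramdisk
$ sudo mount -t tmpfs -o size=8G tmpfs /mnt/ramdisk   # or reuse /dev/shm, which is tmpfs on most distributions
$ cp -r ~/my_project /mnt/ramdisk/
$ cd /mnt/ramdisk/my_project
$ quartus_sh --flow compile my_project                # db/ and incremental_db/ are now created in RAM

Remember that tmpfs contents disappear on reboot or unmount, so copy the outputs you care about (e.g. the .sof/.pof files and reports) back to persistent storage after each build.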

Related

Tar Compaction activity on Adobe AEM repository is stuck

I am trying to perform a Revision Cleanup activity on the AEM repository to reduce its size via Tar Compaction. The repository size is 730 GB and the Adobe version is 6.1, which is old. The estimated time for the activity is 7-8 hours, but we ran it for 24 hours straight and it is still running with no output. We have also tried running all the commands that are supposed to speed up the process, but it is still taking a long time.
Kindly suggest an alternative to reduce the size of the repository.
Adobe does not provide support to older versions, so we cannot raise a ticket.
Check the memory assigned to your machine, specifically the RAM allocated to the JVM. If you increase it, the compaction may take less time and finish.
The repository size is not big at all; mine is more than 1 TB and works fine.
In order to clean your repo you can try running the garbage collector directly from AEM via the JMX console.
The only way to reduce the data storage is to compact the repository, or to delete content such as big assets or big packages. Create some queries to see which assets/packages are huge and delete them.
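If the online revision cleanup never finishes, the usual fallback on a TarMK (segment) repository is offline compaction with the oak-run tool. A rough sketch, assuming the default crx-quickstart layout, that AEM has been stopped first, and that you download the oak-run version matching the Oak version shipped with your AEM 6.1 instance:

$ java -Xmx4g -jar oak-run-<oak version>.jar checkpoints crx-quickstart/repository/segmentstore
$ java -Xmx4g -jar oak-run-<oak version>.jar checkpoints crx-quickstart/repository/segmentstore rm-unreferenced
$ java -Xmx4g -jar oak-run-<oak version>.jar compact crx-quickstart/repository/segmentstore

The first two commands list and remove unreferenced checkpoints (stale checkpoints are a common reason compaction reclaims nothing); the last one rewrites the segment store. Take a backup before running it.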
Hope you can fix your issue.
Regards,

Meteor build takes a long time after I save my JavaScript file

After I save my JavaScript file, Meteor rebuilds the app, but it takes a very long time (I have to wait 15 minutes or more).
Are there any solutions for this?
Thanks!
Based on my personal experience,
In general, if you quit a build partway through, some temporary build folders may be left behind.
In such a case, delete those folders first.
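If you are not sure which folders those are, one option (my own suggestion, not from the original answer) is to clear Meteor's local build cache. Note that meteor reset also wipes the app's local development database:

$ cd /path/to/your-app
$ meteor reset                  # removes .meteor/local (build cache AND the local Mongo data)
$ rm -rf .meteor/local/build    # gentler alternative: delete only the build output (folder names vary by Meteor version)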
If you are using Windows, RAM can be an issue. If your machine has very little memory, building an ever-growing Meteor project becomes very annoying. I came to realize this when I upgraded my machine from a 1st-generation Core i5 with 6 GB DDR2 RAM and a 500 GB HDD to a 2nd-generation Core i7 with 8 GB DDR3 RAM and a 248 GB SSD. Machine configuration plays a vital role in Meteor build times, for both development and production environments. The earlier configuration took 6 minutes to build; the new one takes less than a minute.
If you have antivirus software running in the background, it too might slow down the Meteor build process.

Decompile a RISC System/6000 executable file

We have an old AIX server with an executable file on it, and we want to rewrite the same logic on a Linux server. We are trying to read the executable but could not find a way to do that. Could you please let us know if there is a way to decipher this file?
$ file execfile
execfile: executable (RISC System/6000) or object module
The IBM RS/6000 line used POWER-architecture CPUs: the original models had POWER1 or POWER2 chips, later ones had PowerPC 601/603/604 processors, and newer models moved to POWER3 and its successors. The most recent (current) systems use POWER7 or POWER8.
Anyway, if the system has the compiler and toolchain installed on it, then there should be a decent symbolic debugger included, and you should be able to use that to disassemble any executable. Depending on exactly which version of the OS it was compiled on, and which compiler was used, you might even be able to use PowerPC tools on some other OS, such as macOS, or potentially a cross-compiler toolchain on any type of system, to disassemble the program. For example, GDB built for PowerPC may be able to disassemble the program.
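As a rough sketch (the exact tool prefix depends on how your cross binutils/GDB were configured; they need a target that understands AIX's XCOFF object format, e.g. powerpc-ibm-aix):

$ powerpc-ibm-aix-objdump -d execfile > execfile.dis    # disassemble all code sections to a text file
$ powerpc-ibm-aix-gdb execfile
(gdb) info functions                                    # lists whatever symbols survived
(gdb) disassemble main                                  # only useful if the symbol table was not stripped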
However if the executable has been stripped of symbols (as was typically the case on AIX systems, IIRC), and especially if it had been run through the most powerful optimizing stage of the compiler, then you'll be pretty much lost and what you are trying to do will be impractical and require many man hours to decipher -- indeed many thousands of man hours for any significantly sized program, even if you're able to hire someone to help who is familiar with the code generation patterns of the particular compiler which was used.
You might be better off trying to hire an archeologist to help you dig through the specific landfill where you might hope to find listings or backup tapes or CDs or disks containing the original source code, or specification documents, etc., for this program. Seriously.
Or try to find and (re-)hire the original author(s).

JavaCard web application: how to install a *.war file

I am new to JavaCard development, and I am quite confused.
I am able to compile, load and install .cap files, and everything works fine.
However, after compiling my WebApplication (with NetBeans), I am not sure how to load/convert/install the produced .war file to the card.
Any help much appreciated!
edit:
I realized I should have provided more info:
My card is a J2E145G, which if I am not mistaken supports version 3.0 (and hence the "connected" edition?). Additionally, I am loading applets using GlobalPlatform, which it seems supports only .cap files(?)
I presume the J2E145G (I'm not sure about the G, I can check later) is based on NXP's P5Cx family of products. These cards sport 8 KiB of RAM and are therefore incapable of running the connected edition, which requires 24 to 32 KiB of RAM.
These kinds of humongous chips (by smart card standards, anyway) are usually found on contact-only cards. To say that connected edition chips are not common is probably putting it lightly.

WinDBG - Analyse dump file on local PC

I have created a memory dump of an ASP.NET process on a server using the following command: .dump /ma mydump.dmp. I am trying to identify a memory leak.
I want to look at the dump file in more detail on my local development PC. I read somewhere that it is advisable to debug on the same machine on which you created the dump file. However, I have also read that some developers do analyse the dump file on their local development PCs. What is the best approach?
I notice that when I create a dump file using the command above, the W3WP process memory increases by about 1.5 times. Why is this? I suppose this should be avoided on a live server.
Analyzing on the same machine can save you from SOS loading issues later. Unless you are familiar with WinDbg and SOS, you will find those issues confusing and frustrating.
If you have to use another machine for analysis, make sure you carefully read this blog post, http://blogs.msdn.com/b/dougste/archive/2009/02/18/failed-to-load-data-access-dll-0x80004005-or-what-is-mscordacwks-dll.aspx, as it shows how to copy the necessary files from the source machine (where the dump is captured) to the target machine (the one where you launch WinDbg).
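Once the matching mscordacwks.dll and sos.dll from the server are sitting in a folder on your dev PC (as that post describes), a session typically looks something like this (paths are placeholders):

.sympath+ C:\dumps\server_clr              $$ folder containing the server's mscordacwks.dll / sos.dll
.symfix+ C:\symbols                        $$ also use the Microsoft public symbol server, cached locally
.cordll -ve -u -lp C:\dumps\server_clr     $$ point the debugger at the matching DAC
.load C:\dumps\server_clr\sos.dll          $$ load the SOS extension copied from the server
!dumpheap -stat                            $$ e.g. start by seeing which types dominate the managed heap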
For your second question: because you use WinDbg to attach to the process directly and the .dump command to capture the dump, the target process is unfortunately modified (this is not easy to explain in a few words). The recommended way is to use ADPlus.exe or Debug Diag; even procdump from Sysinternals is better. Those tools are designed for dump capture and have minimal impact on the target process.
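For example, with procdump (the process name and threshold are placeholders; use the PID instead if several w3wp.exe instances are running):

procdump -ma w3wp.exe c:\dumps\w3wp_now.dmp
procdump -ma -m 1500 w3wp.exe c:\dumps\w3wp_high.dmp

The first takes a full memory dump immediately; the second waits and writes the dump once the process's memory commit exceeds 1500 MB.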
For memory leaks from unmanaged libraries, you should use the memory leak rule in Debug Diag. For managed memory leaks, you can simply capture hang dumps when memory usage is high.
I am no expert on WinDBG but I once had to analyse a dump file on my ASP.NET site to find a StackOverflowException.
While I got a dump file of my live site (I had no choice since that was what was failing), originally I tried to analyse that dump file on my local dev PC but ran into problems when trying to load the CLR data from it. The reason being that the exact version of the .NET framework differed between my dev PC and the server - both were .NET 4 but I imagine my dev PC had some cumulative updates installed that the server did not. The SOS module simply refused to load because of this discrepancy. I actually wrote a blog post about my findings.
So to answer part of your question it may be that you have no choice but to run WinDBG from your server, at least you can be sure that the dump file will match your environment.
It is not necessary to debug on the actual machine unless the problem is difficult to manifest on your development machine.
So long as you have the PDBs with private symbols and the correct version of .NET installed, the symbols should be resolved and the call stacks displayed correctly.
In terms of looking at memory leaks, you should enable the GFlags user-mode stack trace option and take memory dumps at two intervals, so you can compare the memory usage before and after the action that provokes the leak. Remember to disable GFlags afterwards!
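One way to do that before/after comparison for native allocations is UMDH, which ships with the Debugging Tools for Windows (a variation on the dump approach; run from an elevated prompt with _NT_SYMBOL_PATH set, and recycle the process after changing GFlags):

gflags /i w3wp.exe +ust
umdh -p:<pid> -f:before.log
umdh -p:<pid> -f:after.log
umdh before.log after.log > diff.log
gflags /i w3wp.exe -ust

Take the first snapshot before the leaking action and the second after it; diff.log then shows the allocation stacks that grew in between.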
You could also run DebugDiag on the server, which has automated memory pressure analysis scripts that work with .NET leaks.
