I am unable to use Add to Schematic in ModelSim SE-64 10.5. I am getting this error:
Could not open the database because the required debug information has not been generated.
You need to run vsim with the -debugdb flag; this will create the vsim.dbg file, which contains the schematic connectivity information (among other things).
The User Manual contains all the info you need; just search for debugdb.
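A minimal sketch of the command line, assuming a hypothetical top-level design unit called work.top:
vsim -debugdb work.top
Once the simulation loads with that flag, the vsim.dbg debug database is generated and the Add to Schematic entries should become usable.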
In Tenable Security Center (SC), we can schedule scans using audit policies obtained from Tenable audit files.
I am trying to find the source of these audit policies
(i.e., where those policies come from and whether they follow any global networking standards).
Can anyone help me find this?
An example policy inside the audit file looks like this:
<custom_item>
system : "Linux"
type : FILE_CONTENT_CHECK
description : "BSI-100-2: S 4.106: Activation of system logging: /etc/rsyslog.conf - *.alert root"
info : "All changes made to /etc/syslog.conf must be documented. When making modifications to the existing IT system, at first everything should be logged. After that, individual areas can be deactivated in stages as required. The /var partition must be sufficiently large to accommodate the log files.
* Please note that the equivalent file on a Red Hat system is /etc/rsyslog.conf
Safeguard Catalogues: S 4: Hardware and software
S 4.106: Activation of system logging"
reference : "800-171|3.3.1,800-171|3.3.2,800-53|AU-12,BSI-100-2|S4.106,CN-L3|7.1.3.3(a),CN-L3|7.1.3.3(b),CN-L3|7.1.3.3(c),CN-L3|8.1.3.5(a),CN-L3|8.1.3.5(b),CN-L3|8.1.4.3(a),CSF|DE.CM-1,CSF|DE.CM-3,CSF|DE.CM-7,CSF|PR.PT-1,ISO/IEC-27001|A.12.4.1,ITSG-33|AU-12,NESA|T3.6.2,NESA|T3.6.5,NESA|T3.6.6,NIAv2|SM8,QCSC-v1|13.2,QCSC-v1|3.2,QCSC-v1|6.2,QCSC-v1|8.2.1,SWIFT-CSCv1|6.4,TBA-FIISB|45.1.1"
see_also : "https://www.bsi.bund.de/cae/servlet/contentblob/471430/publicationFile/28223/standard_100-2_e_pdf.pdf"
file : "/etc/rsyslog.conf"
regex : "*.alert root"
expect : "*.alert root"
</custom_item>
Thanks for your help in advance.
The BSI-100-2 references in that policy point to the German BSI's IT baseline protection standard: https://en.wikipedia.org/wiki/IT_baseline_protection
Also, the see_also line provides a link to a PDF of that standard.
I know a similar question has already been asked here, but since the given answer doesn't help me and my problem is somewhat different, I'm asking a new question.
I create data sources through admin panel -> Configure -> Datasources and it works fine. But if I restart the server, all created datasources go missing from the datasources list.
When I run the ESB server, I can see in the logs that those datasources could not be loaded properly, because the ESB server is unable to decrypt sensitive data which it encrypted earlier:
DataSourceRepository Error in updating data source [remove:false] at path '/repository/components/org.wso2.carbon.ndatasource/myDatasource
Error in updating data source 'myDatasource' from registry
[remove:false]: Error in secure load of data source
meta info: An error occurred while decrypting data
Although myDatasource is missing from the datasources list, I can still see it in the registry through admin panel -> Registry -> /_system/config/repository/components/org.wso2.carbon.ndatasource/myDatasource
I have the same issue. This seems to be a bug introduced in 6.4.0; 6.3.0 does not exhibit this behaviour.
As a workaround, if you define the datasources in /conf/datasources/master-datasources.xml, they load correctly at server startup. However, this is not an ideal solution, as they cannot then be edited through the web console.
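For reference, a minimal sketch of such a definition inside the <datasources> element of /conf/datasources/master-datasources.xml (the datasource name, JDBC URL, driver and credentials below are placeholders):
<datasource>
    <name>myDatasource</name>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/mydb</url>
            <username>dbuser</username>
            <password>dbpassword</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>
Datasources defined this way are picked up at startup, but note that the password sits in this file in plain text unless you wire it through Secure Vault.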
Alternatively, you can download the registry entry, edit the password element to remove 'Encrypted=true', and change the password to its unencrypted value, then upload the edited file as a new registry entry alongside the old one.
Neither of these methods is feasible for production, though, as both leave the passwords unencrypted.
Out of interest, are you running this on Windows? I discovered, also on EI 6.4.0, that the Ciphertool utility will not run due to a malformed path. I suspect this might be related, but I cannot find where the malformed path is coming from; it seems to repeat the {carbon.home} element within the path:
C:\Program Files\WSO2\Enterprise Integrator\6.4.0\bin>ciphertool
Using CARBON_HOME: C:\Program Files\WSO2\Enterprise Integrator\6.4.0
Using JAVA_HOME: C:\WSO2\Enterprise Integrator\6.3.0\jdk\jdk1.8.0_192
Exception in thread "main" java.nio.file.InvalidPathException: Illegal char <:> at index 51: C:\Program Files\WSO2\Enterprise Integrator\6.4.0\C:\Program Files\WSO2\Enterprise Integrator\6.4.0\/repository/resources/security/wso2carbon.jks
at sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182)
at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153)
at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77)
at sun.nio.fs.WindowsPath.parse(WindowsPath.java:94)
at sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:255)
at java.nio.file.Paths.get(Paths.java:84)
at org.wso2.ciphertool.utils.Utils.getConfigFilePath(Utils.java:98)
at org.wso2.ciphertool.utils.Utils.setSystemProperties(Utils.java:289)
at org.wso2.ciphertool.CipherTool.initialize(CipherTool.java:93)
at org.wso2.ciphertool.CipherTool.main(CipherTool.java:52)
We have a folder of ELMAH error logs in XML format. There will be millions of these files, and each file might be up to 50 KB in size. We need to be able to search the files (e.g., what errors occurred, which system failed, etc.). Is there an open-source system that will index the files and help us search through them using keywords? I have looked at Lucene.NET, but it seems I would have to code the application myself.
Please advise.
If you need to keep the logs in a folder as XML, elmah-loganalyzer might be of use.
You can also use Microsoft's Log Parser to perform "SQL-like" queries over the XML files:
LogParser -i:XML "SELECT * FROM *.xml WHERE detail like '%something%'"
EDIT:
You could use a combination of Nutch + Solr or Logstash + Elasticsearch as an indexing solution (a minimal Logstash sketch follows the links below).
http://wiki.apache.org/nutch/NutchTutorial
http://lucene.apache.org/solr/tutorial.html
http://blog.building-blocks.com/building-a-search-engine-with-nutch-and-solr-in-10-minutes
http://www.logstash.net/
http://www.elasticsearch.org/tutorials/using-elasticsearch-for-logs/
http://www.javacodegeeks.com/2013/02/your-logs-are-your-data-logstash-elasticsearch.html
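For the Logstash + Elasticsearch route, a rough sketch of a Logstash configuration (the log path is a placeholder) might look like:
input {
  file {
    path => "/var/elmah/errors/*.xml"
    start_position => "beginning"
  }
}
output {
  elasticsearch { }
}
Note that the file input reads line by line, so you would still need a filter stage (for example the multiline codec plus the xml filter) to reassemble and parse each error document before it is indexed.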
We are a couple of developers behind the website http://elmah.io. elmah.io indexes all your errors (in Elasticsearch) and makes it possible to do advanced searches, group errors, hide errors, filter errors by time, and more. We are currently in beta, but you will get a link to the beta site if you sign up at http://elmah.io.
Unfortunately, elmah.io doesn't import your existing error logs. We will open source an implementation of the ELMAH ErrorLog type, which indexes your errors in your own Elasticsearch (watch https://github.com/elmahio for the project). Again, this error logger will not index your existing error logs, but you could implement a parser which runs through your XML files and indexes everything using our open-source error logger. You could also import the errors directly into elmah.io through our API, if you don't want to implement a new UI on top of Elasticsearch.
I know pyinotify can be used to watch for events for all files within a specific directory (recursively). How can I watch for events (say a create event) for a single file only? Basically, I need my Python code to perform some action as soon as it detects that a file with a specific extension (say *.txt) has been created.
I have tried looking this up online but couldn't find any useful documentation on how to use pyinotify to monitor events for a single file, as opposed to all files/sub-directories within a directory.
For example, I am trying to watch for an 'IN_CREATE' event on the file /tmp/test.txt, but when I run my pyinotify script, I get the following error:
[Pyinotify ERROR] add_watch: cannot watch /tmp/test.txt (WD=-1)
One of the articles online indicated this could be due to the limit on max_user_watches, so I tried to increase that limit (fs.inotify.max_user_watches), but no luck.
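For reference, this is roughly how I tried to raise it (the value is arbitrary):
sudo sysctl fs.inotify.max_user_watches=524288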
Any thoughts on why I would be getting this error message, or does anyone already know the details of this error?
Thanks.
Does /tmp/test.txt already exist?
If not, then you need to monitor /tmp recursively and filter the output (for example with an ExcludeFilter), reacting only to the file you care about.
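A minimal sketch along those lines, watching /tmp for IN_CREATE events and reacting only to *.txt files (the directory and extension are just examples):
import pyinotify

WATCH_DIR = '/tmp'    # watch the directory; the target file need not exist yet
TARGET_EXT = '.txt'   # only react to files with this extension

class Handler(pyinotify.ProcessEvent):
    def process_IN_CREATE(self, event):
        # event.pathname is the full path of the newly created file
        if event.pathname.endswith(TARGET_EXT):
            print('Created:', event.pathname)

wm = pyinotify.WatchManager()
wm.add_watch(WATCH_DIR, pyinotify.IN_CREATE, rec=False)
notifier = pyinotify.Notifier(wm, Handler())
notifier.loop()
Watching the directory avoids the WD=-1 error, since inotify cannot add a watch on a path that does not exist yet.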
I've been tasked with exporting a bunch of tables from a Btrieve (Pervasive) database, but one of the tables is putting up a fight. I'm using the Pervasive Control Centre, but when I run a SELECT * FROM <troublesome table> I get this error:
ODBC Error: SQLSTATE = S1000, Native error code = 0
Unable to open table: <troublesome table>.
The owner name is invalid (Btrieve Error 51)
I've Googled this and found out that there can be an "owner" on a DDF file but, if I understand correctly, all tables are in that file. Yet only one table causes this error, so I have no idea what's going on.
Could someone please offer some assistance?
There can be an owner name on a Btrieve data file as well as on the DDFs. In this case, it seems that the Btrieve file has an owner name that is required even to read the file (an owner name can be set to allow read-only access without it, or to allow no access at all without it).
Depending on the version of PSQL you are using, you can issue a SET OWNER command before executing the SELECT statement. For full documentation on SET OWNER, take a look at http://docs.pervasive.com/products/database/psqlv11/wwhelp/wwhimpl/js/html/wwhelp.htm#href=sqlref/syntaxref.3.76.html.
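A rough sketch of what that looks like (the owner name and table name below are placeholders):
SET OWNER = 'TheOwnerName';
SELECT * FROM "TroublesomeTable";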
As far as determining the owner name goes, you'll have to ask the developer of the program. There is no default owner name and no master owner name.