I have an open-source ASP.NET site that is meant to run under Apache/Mono. I have a new version ready to release, but it has a problem running under Linux. Everything works great on Windows.
The first time I load the page I get:
System.InvalidOperationException Failed to map path '/App_WebReferences/MythContent/ContentServices.wsdl'
The file is there:
-rw-r--r-- 1 www-data www-data 37K Nov 20 2013 mobilemyth/App_WebReferences/MythContent/ContentServices.wsdl
Refresh the page again and you get:
System.IO.IOException
Sharing violation on path /tmp/www-data-temp-aspnet-0/9e3969b/Resources.frontendsettings.aspx.resources
And that is all you get from then on. Any ideas?
In Mono, the best solution for this issue is to use the wsdl2 command-line tool to generate the proxy class and put it in App_Code. Then you can use the web service client in your app.
Sample:
$ wsdl2 http://<your-wsdl-url>?WSDL
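For example, assuming the WSDL can be fetched from the running site (the path, the URL and the generated file name below are assumptions), the proxy can be generated and dropped into App_Code like this:
$ cd /var/www/mobilemyth
$ wsdl2 http://localhost/mobilemyth/App_WebReferences/MythContent/ContentServices.wsdl
# wsdl2 names the generated .cs file after the service it finds in the WSDL
$ mv ContentServices.cs App_Code/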
D:\>echo %HADOOP_HOME%
D:\Apps\winutils\hadoop-2.7.1
I created the tmp\hive folders on the same disk as HADOOP_HOME:
D:\>dir tmp\hive
Directory of D:\tmp\hive
06/13/2016 01:13 PM <DIR> .
06/13/2016 01:13 PM <DIR> ..
0 File(s) 0 bytes
2 Dir(s) 227,525,246,976 bytes free
Then I tried to figure out what permissions are set:
D:\>winutils.exe ls \tmp\hive
FindFileOwnerAndPermission error (1789): The trust relationship between this workstation and the primary domain failed.
When I tried chmod on these folders it seemed to work:
winutils.exe chmod 777 \tmp\hive
but ls still shows the same exception.
Does anyone have an idea what is going on? Moreover, it worked for me a couple of hours ago, but now my Spark application fails with this exception:
java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw-
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
I am quite late here, but I'm still posting this so it might help someone in the future.
While setting the permissions, make sure you are using the correct path for winutils.exe (try to use the complete path). For me, winutils.exe was on the C drive:
C:\path\to\winutils.exe chmod -R 777 C:\tmp\hive
Run the command below to check the permissions; the result should look like this image (setting and checking the permissions):
https://i.stack.imgur.com/vE9vl.png
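The check itself is winutils.exe ls; after the chmod it should report the folder as world-writable, roughly as below (owner, group and the exact output format are assumptions):
C:\path\to\winutils.exe ls C:\tmp\hive
drwxrwxrwx 1 BUILTIN\Administrators CORP\you 0 Jun 13 2016 C:\tmp\hive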
If this is your corporate system, then you must be on the same network, using VPN, FortiClient, or whatever other tool your organisation uses.
https://support.microsoft.com/en-us/kb/2771040
This looks like a domain access issue. Please ensure you can access the domain and try again.
After ensuring domain access, the error below disappears:
Caused by: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw-
I'm late here, but I just encountered this issue and am writing this so it helps somebody.
If you are using your office laptop, make sure you are connected to the office network and retry. The "Member of Domain" setting points to your office network. That should solve the issue.
To check the configured domain:
1. Log on to Windows 10 using a local Administrator account.
2. Hold the Windows logo key and press E to open File Explorer.
3. On the right side of File Explorer, right-click This PC, choose Properties, then click Advanced System Settings.
4. Choose the Computer Name tab and select Change to see the configured value (or use the command-line check below).
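Alternatively, the same information can be read from a command prompt (the domain shown is only a placeholder):
C:\>systeminfo | findstr /B /C:"Domain"
Domain:                    corp.example.com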
I am a newbie here, so this might be wrong, but I think you need to add -R to the command, as below:
winutils chmod -R 777 \tmp\hive
I'm very new to the ELK stack and was trying to add some security settings (username and password) to access Kibana following the instructions from the link below:
https://www.elastic.co/blog/getting-started-with-elasticsearch-security
At Step 4 (Security in Kibana), once the yml file is modified, I try to launch Kibana from the terminal with the command ./bin/kibana, but it displays the following errors:
./bin/kibana: line 24: /usr/local/var/homebrew/linked/kibana-7.6.2-linux-x86_64/bin/../node/bin/node: cannot execute binary file
./bin/kibana: line 24: /usr/local/var/homebrew/linked/kibana-7.6.2-linux-x86_64/bin/../node/bin/node: Undefined error: 0
I think I've followed all the previous steps carefully and everything else worked so far.
I'm using a Mac and the error seems to be very basic. Any clue?
Thanks for the help.
It looks like you've downloaded the wrong build of Kibana (the Linux one) on your Mac.
This generally happens when the downloaded build does not match the system's OS/architecture, or when running a 64-bit binary on a 32-bit machine.
Simple solution:
Download the macOS version of Kibana from https://www.elastic.co/downloads/kibana
Once downloaded, run ./bin/kibana in Kibana directory.
This will start a local Kibana server on localhost:5601.
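For instance, the macOS build can be fetched and started from the terminal roughly like this (the version matches the error above; the archive name follows Elastic's usual naming and is an assumption):
$ curl -O https://artifacts.elastic.co/downloads/kibana/kibana-7.6.2-darwin-x86_64.tar.gz
$ tar -xzf kibana-7.6.2-darwin-x86_64.tar.gz
$ cd kibana-7.6.2-darwin-x86_64
$ ./bin/kibana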
After a fresh install of the JFrog Artifactory server, I am not able to start up the application. I am using a Percona 5.6 MySQL DB server.
artifactory.service - Setup Systemd script for Artifactory in Tomcat Servlet Engine
Loaded: loaded (/usr/lib/systemd/system/artifactory.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: resources) since Mon 2018-01-22 04:22:34 EST; 2s ago
Process: 8618 ExecStart=/opt/jfrog/artifactory/bin/artifactoryManage.sh start (code=exited, status=0/SUCCESS)
Jan 22 04:22:34 xxx-xxxx-xxxx systemd[1]: Failed to start Setup Systemd script for Artifactory in Tomcat Servlet Engine.
Jan 22 04:22:34 xxx-xxxx-xxxx systemd[1]: Unit artifactory.service entered failed state.
Jan 22 04:22:34 xxx-xxxx-xxxx systemd[1]: artifactory.service failed.
When I checked artifactory.bootstrap.log, I saw the following error:
2018-01-22 04:07:43 [ARTIFACTORY] [INFO ] master.key file currently missing - waiting for Access to create it. Reattempting to check master.key file existence in 1 second.
If you need any more logs to diagnose please let me know.
I had the same problem, running under Ubuntu 16.04 with the deb installation package for 5.10.1.
It looks like $ARTIFACTORY_HOME points to /var/opt/jfrog/artifactory, with the binaries in /opt/jfrog/artifactory. Changing $ARTIFACTORY_HOME to point to /opt/jfrog/artifactory allows it to start, but puts all the data there as well (the access, backup, data, logs and support directories). This is fine for me, as we only have a demo licence and we're just doing some testing, but it's less than ideal if you want to keep your data under /var.
I did try leaving $ARTIFACTORY_HOME alone and creating links to the other directories but that didn't seem to work. I haven't bothered investigating that though, since I now have a test system that looks like it works.
I had this error recently with version 6.7.3 using java 8 on Solaris 11.
First, I unzipped the file and checked all my file permissions; they were good.
Then I set my ARTIFACTORY_HOME and JAVA_HOME in the artifactory.default file.
Next, I ran the following:
openssl rand -hex 16 > /m1/artifactory-oss-6.7.3/etc/security/master.key
and started Artifactory.
I found the openssl command to manually create the master.key here:
https://www.jfrog.com/jira/browse/RTFACT-15699
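For reference, the two variables mentioned above were set roughly like this in artifactory.default (the JAVA_HOME path is a Solaris placeholder, not something from the original post):
export ARTIFACTORY_HOME=/m1/artifactory-oss-6.7.3
export JAVA_HOME=/usr/jdk/instances/jdk1.8.0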
Check this link:
https://www.jfrog.com/confluence/display/RTF4X/Master+Key+Encryption
Go to Security >> Security Configuration >> Encrypt button.
That will create the artifactory.key file; duplicate it and rename the copy to master.key.
Edit 1: I found out that in version 5 you need JAVA_HOME set; in version 4 it doesn't matter.
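A sketch of the duplicate-and-rename step above, assuming the 5.x layout where both key files live under $ARTIFACTORY_HOME/etc/security (paths and the artifactory user/group are assumptions):
cp $ARTIFACTORY_HOME/etc/security/artifactory.key $ARTIFACTORY_HOME/etc/security/master.key
chown artifactory:artifactory $ARTIFACTORY_HOME/etc/security/master.key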
I got a similar issue and fixed it by correcting the owner/group permissions on the /var/opt/jfrog/artifactory/access/etc path (see the sketch after the list below).
I was creating the directories below using an Ansible playbook before running the Artifactory Docker image:
/var/opt/jfrog/artifactory/access/etc
/var/opt/jfrog/artifactory/backup
/var/opt/jfrog/artifactory/data
/var/opt/jfrog/artifactory/etc
/var/opt/jfrog/artifactory/logs
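A sketch of that permission fix, run before starting the container (uid/gid 1030 is an assumption about the image's default artifactory user; adjust to whatever your image actually uses):
chown -R 1030:1030 /var/opt/jfrog/artifactory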
You will also get these kinds of messages in the log if the DB connection fails.
There were error messages about the DB connection earlier, but they were hidden by secondary failures.
The master.key is created after the DB is initialized.
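So it is worth double-checking the DB connection settings as well; in a default 5.x/6.x layout they sit in db.properties (the path and the example values below are assumptions):
cat $ARTIFACTORY_HOME/etc/db.properties
type=mysql
driver=com.mysql.jdbc.Driver
url=jdbc:mysql://dbhost:3306/artifactory?characterEncoding=UTF-8&elideSetAutoCommits=true
username=artifactory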
I have a frustratingly simple problem. I've migrated a classic ASP site from a Windows Server 2003 IIS site to a shiny new 2012 R2 Server running IIS 8.5.
When I navigate to the home page (http://mydns/default.asp), I get a 500 error. The IIS logs show me this error:
GET /default.asp |4|ASP_0126|Include_file_not_found
Looking at the default.asp page, I see the following code at the top of the file:
<%@ Language=VBScript %>
<!-- #include file="Include/common.asp" -->
<%
dim EMAIL_ID
if Request.QueryString = "" then
...
So, my assumption is that the error is being thrown due to the second line. In the website directory, the Include directory does exist, and there is a common.asp file inside of it.
# the default.asp page that I'm loading...
E:\websites\mywebsite>dir | findstr default
09/29/2015 10:30 AM 3,237 default.asp
# the common.asp page within the Include folder...
E:\websites\mywebsite>dir Include | findstr common
06/30/2015 10:27 AM 546 common.asp
Any idea why I would be getting an error about a file not found when I do see it in that directory?
Update 1: By navigating to the common.asp page in a browser, I get the following error in IIS:
GET /Include/common.asp |42|800a01ad|ActiveX_component_can't_create_object:_'Domain.clsAuth'
I found this article which helped me confirm that a DLL is missing so I am pursuing installing it on the new server.
Update 2: I was able to find the required Domain.dll on the old server. I used Dependency Walker to determine that Domain.dll depended on MSVBVM60.DLL, so I grabbed both of these and moved them to the new server. I registered MSVBVM60.DLL successfully using C:\WINDOWS\system32>regsvr32.exe MSVBVM60.DLL, but registering Domain.dll failed with an error message.
It turns out that this is the same error that you get if you try to register a file that doesn't exist. For instance, I tried to register "dummy.dll" and got the same error. It's like the system doesn't see the file. It autocompletes at the command line, but I can't register it. I also can't open it in Dependency Walker... the application says file not found. But I know it's there...
C:\WINDOWS\system32>dir | findstr Domain
04/27/2006 02:57 PM 24,576 Domain.dll
Update 3: The issue about registering the DLL was a 32 vs 64 bit thing. I had a 32 bit DLL in the system32 folder which is a 64 bit only folder. Once I moved it to the SysWOW64 folder (which is a 32 bit folder, go figure) I was able to register it. I also unregistered MSVBVM60.DLL from system32 and found that it was already available in SysWOW64.
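A sketch of the equivalent commands, assuming a default Windows install (on 64-bit Windows the 32-bit regsvr32 lives in SysWOW64, the 64-bit one in System32):
C:\>move C:\Windows\System32\Domain.dll C:\Windows\SysWOW64\
C:\>C:\Windows\SysWOW64\regsvr32.exe C:\Windows\SysWOW64\Domain.dll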
So, this DLL issue is solved! But I'm getting the same error again due to a different include file. To be continued...
Update 4: The final include file error was indeed a missing include file.
Answering my own question... In my case, the issue was that the include file in question was throwing errors. I guess the fact that the ASP engine couldn't render the ASP include file resulted in the parent page throwing a "not found" error. By installing and registering a missing DLL, I was able to resolve my issue. Check out the updates in the question for details.
I have a corrupted Lucene index. If I run "CheckIndex -fix" the problem is resolved, but as soon as I restart Tomcat it becomes corrupted again.
The index directory is shared between two application servers running Liferay-Tomcat. I am fixing the index on one server and restarting it whilst the other is running. This is a production environment, so I cannot bring them both down.
Any suggestions please?
Before fix, CheckIndex says:
Opening index # /usr/local/tomcat/liferay/lucene/0
Segments file=segments_5yk numSegments=1 version=FORMAT_SINGLE_NORM_FILE [Lucene 2.2]
1 of 1: name=_2vg docCount=31
compound=false
hasProx=true
numFiles=8
size (MB)=0.016
no deletions
test: open reader.........FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.io.IOException: read past EOF
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:151)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
at org.apache.lucene.store.IndexInput.readVInt(IndexInput.java:78)
at org.apache.lucene.index.FieldInfos.read(FieldInfos.java:335)
at org.apache.lucene.index.FieldInfos.<init>(FieldInfos.java:71)
at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:119)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:652)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:605)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:491)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
WARNING: 1 broken segments (containing 31 documents) detected
WARNING: would write new segments file, and 31 documents would be lost, if -fix were specified
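For context, a CheckIndex run against that directory looks roughly like this (the jar name is an assumption; use the lucene-core jar your Liferay bundle ships):
$ java -cp lucene-core.jar org.apache.lucene.index.CheckIndex /usr/local/tomcat/liferay/lucene/0
$ java -cp lucene-core.jar org.apache.lucene.index.CheckIndex /usr/local/tomcat/liferay/lucene/0 -fix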
If you access your search index with more than one application server, I would suggest integrating a Solr server, so you don't have the problem of two app servers trying to write to the same files. This is error-prone, as you already found out.
To get Solr up and running you have to follow these steps:
Install a Solr server on any machine you like. A machine running only Solr would be preferable (a quick sanity check follows these steps).
Install the Solr search portlet in Liferay.
Adjust the config files according to the setup document of the Solr Search portlet.
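Once the standalone Solr instance is up, a quick sanity check from the Liferay host is to hit its ping handler (host, port and handler path are assumptions for a single-core setup):
curl http://solr-host:8983/solr/admin/ping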
Here are some additional links:
http://www.liferay.com/de/marketplace/-/mp/application/15193648
http://www.liferay.com/de/community/wiki/-/wiki/Main/Pluggable+Enterprise+Search+with+Solr