I'm currently creating a mobile app with Apigility as the API. All is working well and life is heaven on my laptop. However, when releasing onto OpenShift we can't access images under the public folder.
I'm calling the images with the full URL "http://coolapp.com/public/images/smile.jpg" but keep getting access denied despite changing permissions on the directory. Once again, on my Mac all works fine.
Cheers :-)
It turns out version 1.0 of Apigility has an issue completing update writes on a Mac with an SSD drive. We moved the project to a Linux box, did a manual composer update, and voila! All is OK now.
We have built a WebDAV service with your engine and have one problem when we create a new folder or file:
The new folder/file is created successfully, but it does not show up in Windows Explorer. Only after pressing F5 does the new folder/file appear (with its name already selected for editing).
This behavior is reproducible even with a blank WebDAV solution.
We can reproduce this on Windows 7 and Windows 8 (8.1) using WebDAV .NET Server 3.8 and the latest 3.9.
Is there a way to get around this “refresh-problem”?
I solved this issue by clicking in the folder explorer on View > Options, then restoring the defaults, and everything was back to normal.
I assume this issue is in Windows Explorer on a single computer. Most likely the WebDAV server-side code is failing with some exception. Here are some ideas for detecting what is wrong:
Unmount network connections by executing 'net use * /DELETE' in a command prompt; this will unmount WebDAV connections too and simulate a 'clean' environment.
Retry reproducing the issue and examine your WebDAV log file. By default it is located in the /App_Data/WebDAV/Logs/ folder. Are there any exceptions in it?
Use the Fiddler tool or any other debugging proxy to capture and examine HTTP requests. Are there any failed requests?
In case you are creating a folder/file on one computer using Windows Explorer (the Microsoft mini-redirector driver) or IT Hit Ajax File Browser and expect the file list to refresh automatically on another computer, this will not work. The mini-redirector does not support any notifications from the server, and WebDAV does not send any, so you need to refresh the file list manually to see the newly created items.
I found this video on YouTube that explains in great detail how to fix this problem: https://www.youtube.com/watch?v=UUiCPsQquqc
It is a bit lengthy, so I'll just quickly sum it up here:
The cause of these problems is one or more (broken) shell extensions that prevent Windows Explorer from refreshing.
To fix it, open regedit.exe (requires admin privileges) and search for the registry value "DontRefresh". If it is "1", set it to "0". There may be multiple matches for that value, so repeat until all of them are "0".
This might not work immediately; you may have to kill and restart your explorer.exe process (easiest to do with the Task Manager), or simply reboot your computer. In my case, it worked immediately.
According to the video, the keys should only be located under HKEY_CLASSES_ROOT\CLSID, but in my case I could only find such keys under HKEY_LOCAL_MACHINE\Classes\Wow6432Node\CLSID.
I figured it makes the most sense to simply search the complete registry; it does not take very long.
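If you'd rather script the change than click through regedit, a PowerShell sketch along these lines should work (run from an elevated prompt; the hive searched here is an assumption, so adjust it to wherever the values live on your system):

    # Recursively find DontRefresh values under CLSID and set them to 0.
    Get-ChildItem 'Registry::HKEY_CLASSES_ROOT\CLSID' -Recurse -ErrorAction SilentlyContinue |
      ForEach-Object {
        $item = Get-ItemProperty -Path $_.PSPath -Name DontRefresh -ErrorAction SilentlyContinue
        if ($item -and $item.DontRefresh -ne 0) {
          Set-ItemProperty -Path $_.PSPath -Name DontRefresh -Value 0
        }
      }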
I tried a lot of hacks, from scanning the system, to recreating the profile, to hacking registry keys and hives.
Finally, what worked for me:
Right click on desktop
Select Personalize
Click Themes
Click Change desktop icons
Click Restore default & OK
And instantly Explorer began to auto-refresh on new folder creation, rename, delete, copy, etc.
So, to put it simply, I have a Drupal site that's live.
I want to work on it locally and use Docker containers to manage that.
I want to use this Image:
https://index.docker.io/u/bnchdrff/nginx-php5-drupal/
And use this as my data container:
https://index.docker.io/u/bnchdrff/mariadb/
I have the database downloaded from the live site saved as an .sql file.
I need to be able to use this pre-existing database.
Best case scenario is to be able to run the images in terminal and open a browser, navigate to something like 'localhost' and have the Drupal site pop up there for me to work on.
I am running Ubuntu 13.10 and have the latest version of Docker. Needless to say, I have been working on getting this running for a while, but I don't want to complicate things with my failed attempts. Any and all suggestions welcome.
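In case it helps frame an answer, here is a minimal sketch of the kind of commands I imagine are involved, assuming the images expose the usual ports (the container names, credentials, and dump filename are placeholders, since I couldn't confirm what these images expect):

    # Start the database container, publishing MariaDB's default port
    docker run -d --name db -p 3306:3306 bnchdrff/mariadb
    # Start the web container linked to the database, publishing HTTP on localhost
    docker run -d --name web --link db:db -p 80:80 bnchdrff/nginx-php5-drupal
    # Import the existing dump into the running database (credentials are placeholders)
    mysql -h 127.0.0.1 -P 3306 -u root -p drupal_db < site-dump.sql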
I have created a local version of my Drupal website to do dev work on. But when I add new modules to the local version, they install but do not enable correctly. For example, I added the Legal module and enabled it, but when I go to site configuration to edit it, the Legal section does not appear. Any help please?
The problem is that not enough memory is allocated to the service for new modules, which require more memory to run. Try putting
ini_set('memory_limit', '256M');
in the sites/default/settings.php file.
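If the change doesn't seem to take effect (some hosts ignore runtime ini_set calls), you can verify the limit PHP is actually running with from a quick throwaway script, for example:

    <?php
    // Throwaway check: print the effective memory limit.
    echo ini_get('memory_limit');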
This is a really old thread but thought it was worth adding the following.
I had this problem recently on a local dev install (using Acquia Dev Desktop, by the way). Turning off the Memcache module resolved my issue.
I was running a Java webapp in a WAR, in WebLogic 9.2 on my computer, when my computer crashed. When I restarted, everything was working, but the images were not being displayed in the webapp, nor was the webapp picking up the CSS. Everything is packed into the WAR, and I am not using a weblogic.xml to map anything.
Something like this (the loss of images and CSS, not the computer crash) happened last year, and I eventually fixed it by going into the WebLogic admin console, stopping the webapp, uninstalling it, reinstalling it, and starting it again.
That didn't work this time.
Anyone have any ideas what this is about and how I can resolve it?
My org is working on upgrading to WebLogic 11.2, but that is still about a month away.
Thanks much in advance for any clues.
The culprit turned out to be a new ServletFilter I was developing. The ServletFilter was checking authentication status before letting people through to the JSPs and Servlets in my site.
While I had put conditionals in the ServletFilter to NOT do any filtering on things like the login page, I didn't do anything similar to make sure files in my /css, /images, /js directories were left alone.
When I added similar conditionals for these directories, reinstalled the *.war, and restarted it, the problem cleared up.
I guess I wasn't seeing the problem before my computer and WebLogic crashed because the images and CSS were cached somehow... until the crash.
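In case it helps someone, the shape of the fix looked roughly like this (the class name and exact paths here are illustrative, not the actual code):

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletRequest;

    public class AuthFilter implements Filter {
        public void init( FilterConfig cfg ) {}
        public void destroy() {}

        public void doFilter( ServletRequest req, ServletResponse res, FilterChain chain )
                throws IOException, ServletException {
            String path = ((HttpServletRequest) req).getServletPath();
            // Let static resources and the login page through without an auth check
            if ( path.startsWith("/css/") || path.startsWith("/images/")
                    || path.startsWith("/js/") || path.equals("/login.jsp") ) {
                chain.doFilter( req, res );
                return;
            }
            // ... authentication check for everything else goes here ...
            chain.doFilter( req, res );
        }
    }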
SHORT VERSION: I have a Flex app that uses Salesforce.com's API. I am trying to deploy it to a remote server but keep getting "Error during login process." when I try to have it log in to Salesforce's servers. What gives?
LONG VERSION (maybe someone finds this useful later): I have a Flex application that's an add-on for salesforce.com.
If I upload it as a static file to Salesforce and then embed it in a Visualforce page, it works fine. This method uses "loginBySessionId" rather than "loginByCredentials".
I would like to be able to run it outside of Salesforce's servers. I.e., I would like to host the app on my own server, have people enter their credentials in the app, and have it log in to Salesforce's servers. This way, if someone wants to try my application, they do not have to be Salesforce administrators and do not have to install the app into a Visualforce page.
Here's where the trouble is. If I enter my login information and run it from the compiler, it connects and loads the right data. If I export it as a production release, it still runs fine. However, if I either upload the release files to my own server, or transfer them to another computer and run them locally, I get an "Error during login process." Some others seem to have had similar issues, but with no solutions and nothing new.
Weirder still, if I transfer the project files to another computer and recompile them, it suddenly works. So basically, it seems like I have to recompile the app for each computer I plan on running it on, which is not practical. Even then, I don't see how compiling on one machine vs. another could possibly make a difference. And yes, same version of Flash, same version of Flex.
Does anyone have any suggestions on how to resolve this? Am I just misunderstanding something with how to deploy flex applications or is this some screwy thing with the salesforce API and there's a workaround?
One added thing that makes this problem particularly frustrating: I can't use the debugger, because if I compile the app on another computer it works, so to get the error I have to build and then transfer to another computer. I feel like this could be a key to the problem, but I'm not sure how.
Here is some applicable code, pretty basic:
<flexforforce:F3WebApplication
    id="app" statusChanged="statusChangedHandler(event)"
    loginComplete="loginCompleteHandler(event)"
    loginFailed="loginFailedHandler(event)"
    sessionExpired="sessionExpiredHandler(event)"
    serverUrl="http://na9.salesforce.com/services/Soap/u/19.0"
    requiredTypes="Account,Contact,Opportunity,Lead,Task,User" />
protected function loginClickHandler( event : MouseEvent ) : void {
    _username = 'LOGIN#LOGIN.COM';
    _password = 'PASSWORD+SECURITY_TOKEN';
    CursorManager.setBusyCursor();
    app.loginByCredentials( _username, _password );
}
To clarify, you probably need something like this on initialization:
flash.system.Security.loadPolicyFile("http://na9.salesforce.com/services/Soap/crossdomain.xml");
The reason it works when you compile it is that a lot of the default security restrictions do not apply when the swf runs on the same machine it was compiled on. Heck, you can even access the hard drive via paths (like a relative URL path to an image on the hard drive); try running the swf on another computer and bam, no go.
This is an excellent indicator you're hitting a player / VM security issue :)
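For example, loading the policy file once during startup, before any login attempt (the handler name here is illustrative):

    // Load Salesforce's cross-domain policy before calling loginByCredentials().
    import flash.system.Security;

    private function initHandler() : void {
        Security.loadPolicyFile("http://na9.salesforce.com/services/Soap/crossdomain.xml");
    }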