I have WordPress deployed in Azure App Service with containers (Azure Container Registry is used).
The image used is wordpress:latest from Docker Hub.
I also have --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE enabled, so my files are persisted on the VM.
I have noticed that images are not displayed
I see a 502 error - https://{website}.azurewebsites.net/wp-includes/images/spinner-2x.gif
I have checked with Kudu and the image is there.
Could anyone point me in the right direction to fix this issue?
I have followed steps from this tutorial: https://learn.microsoft.com/en-us/azure/app-service/tutorial-multi-container-app
I opened a support ticket with Azure, who said this is a known issue. The current workaround is to disable the following Apache settings in the apache2.conf file:
EnableMMAP Off
EnableSendfile Off
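If you build a custom image instead of using wordpress:latest directly, one way to bake the workaround in is a small Dockerfile layer. This is a minimal sketch assuming the Debian-based official image, whose Apache config lives at /etc/apache2/apache2.conf:

FROM wordpress:latest
# Memory-mapping and sendfile misbehave against App Service's networked /home storage,
# so disable both globally (the workaround Azure support suggested)
RUN { echo 'EnableMMAP Off'; echo 'EnableSendfile Off'; } >> /etc/apache2/apache2.conf

Push that image to your Azure Container Registry and point the app at it.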
If you're using Azure's base PHP image from mcr.microsoft.com/appsvc/php, these Apache settings are already built in as of the 7.4-apache_20210422.1 tag (and presumably any later versions). See https://mcr.microsoft.com/v2/appsvc/php/tags/list for a list of image versions.
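For example, to check from a shell whether a patched tag is available (jq is just for readability and is my own assumption, not part of the original setup):

curl -s https://mcr.microsoft.com/v2/appsvc/php/tags/list | jq -r '.tags[]' | grep apache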
Setting WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE enables persistent shared storage. You then need to use the WEBAPP_STORAGE_HOME environment variable, which points to /home, in your folder paths and volumes:
${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html
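In context, a hedged sketch of what that looks like in the multi-container compose file (service name and port are illustrative, not prescribed):

version: '3.3'
services:
  wordpress:
    image: wordpress:latest
    volumes:
      # Persist the whole web root on App Service storage (/home on the VM)
      - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html
    ports:
      - "8080:80"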
I have an instance of Artifactory OSS (latest version) running in a docker container locally.
We have a remote instance of Artifactory (non-OSS) running as well.
In my local instance, I set up a remote repository of package type Ivy pointing to each of the repositories we have set up in our remote non-OSS instance.
Once I have created a remote repository configuration, I can view each remote repository and its artifacts under the Application --> Artifactory --> Artifacts page. HOWEVER, under the Administration --> Repositories --> Repositories page, where I expect to be able to make changes to the configurations (I am logged in as Admin with administrator privileges, btw), none of the repositories I set up are visible! I only see a 0 Repositories count and a No remote repositories message where I expect a list.
When I do Ivy resolves against my local (remote repository) it works as expected, so they're definitely working... just not showing up for administration.
I have tried rolling back to earlier versions of Artifactory OSS but that hasn't changed anything.
I can tediously work around this with a combination of the REST API and the UI but I REALLY JUST want to be able to administer the configurations within the web app... Am I just being dumb about something or is there a known issue regarding this? I have searched around and I haven't found an answer to this issue, so any help or direction would be greatly appreciated! Thanks!
I was able to reproduce it on my end as well on version 7.19.1; it seems this issue happens when working with Ivy on Artifactory OSS.
I have opened a bug for this on JFrog Jira.
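Until the fix lands, the REST workaround you mentioned can at least be scripted. A sketch using the repository configuration endpoints (credentials, host/port, and repo key are placeholders):

# Dump the current config of a remote repository to a file
curl -u admin:password -o my-ivy-remote.json \
  "http://localhost:8081/artifactory/api/repositories/my-ivy-remote"
# Edit the JSON, then push the updated configuration back
curl -u admin:password -X POST -H "Content-Type: application/json" \
  -d @my-ivy-remote.json \
  "http://localhost:8081/artifactory/api/repositories/my-ivy-remote"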
I have been given the assignment of customizing an Alfresco Community Edition 7.0 installation from docker-compose. I have looked at the resources and am trying to find the best approach. I also see a GitHub repository for acs-packaging, but that appears to be related to the Enterprise version. I could create images off the existing images and build my own docker-compose file that loads them, but that seems to be overkill for changes to the alfresco-global.properties file.
For example, I am moving the DB and file share to Docker volumes and mapping them to host directories. I can add the volume for Postgres easily to the docker-compose file. The file share is less straightforward: I see there is a property in alfresco-global.properties that specifies the directory (dir.root=/alfresco/data), but it is not clear how many of the Docker components need the volumes mapped.
You should externalize these directories to set up persistent data storage for the content store, Solr, etc. Each service in the compose file gets its own volume mapping (a sketch for binding them to host directories follows the list):
# alfresco (repository) service
volumes:
  - alfdata:/usr/local/tomcat/alf_data
# postgres service
volumes:
  - pgdata:/var/lib/postgresql/data
# solr6 (search services) service
volumes:
  - solrdata:/opt/alfresco-search-services/data
# activemq service
volumes:
  - amqdata:/opt/activemq/data
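Since you want these mapped to host directories, here is a sketch of binding the named volumes to host paths in the top-level volumes section of the compose file (host paths are examples and must already exist; dir.root inside the container stays wherever alfresco-global.properties points):

volumes:
  alfdata:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /alfresco/data
  pgdata:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /alfresco/postgres-data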
Please refer to the link for more information.
-Arjun M
Consider going through this discussion, and potentially using the community template:
https://github.com/Alfresco/acs-community-packaging/pull/201
https://github.com/keensoft/docker-alfresco
I'm currently migrating a WordPress installation to Azure App Service with containers. First I did a normal installation with everything inside the container for testing purposes. The performance was good and things worked without problems.
Then I wanted to move the wp-content folder to persistent storage, so I created a file share and added it under Path mappings. This worked without problems, and after the restart WordPress could access the files.
But now every page load takes about 1-2 minutes, and the page as a whole is unusable in this state. I double-checked the file share settings and everything else. The share is optimized for transactions, and as soon as I remove the volume, the container is lightning fast again.
Does anyone have the same problem? Any ideas how to fix this? This is a deal breaker for me, tbh.
Thanks!
Not answering your question directly, but an alternative is to use App Service persistent storage, which stores data in the /home folder of the VM where your app is running. It should be a lot faster than using a file share in a storage account. The ${WEBAPP_STORAGE_HOME} variable maps to the /home folder.
You need to enable it by setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to true in the application settings or by using the CLI:
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE
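If you only want wp-content persisted, as in your original setup, the corresponding volume entry in a compose-based setup would look roughly like this (service layout is assumed, not taken from your config):

services:
  wordpress:
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site/wwwroot/wp-content:/var/www/html/wp-content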
So, I am trying to set up a CI/CD pipeline with the s4sdk. I successfully completed all the steps described in this blog, and everything seems to be running smoothly, but my build is failing with the following error message:
The following artifacts could not be resolved: com.sap.xs2.security:security-commons:jar:0.28.6, com.sap.xs2.security:java-container-security:jar:0.28.6, com.sap.xs2.security:java-container-security-api:jar:0.28.6, com.sap.security.nw.sso.linuxx86_64.opt:sapjwt.linuxx86_64:jar:1.1.19: Could not find artifact com.sap.xs2.security:security-commons:jar:0.28.6 in s4sdk-mirror (http://s4sdk-nexus:8081/repository/mvn-proxy/)
Now, this error message makes sense to me, since I remember downloading these artifacts from the SAP download center; they are not available on Maven Central.
I think this error can be resolved by manually uploading those artifacts to the Nexus server, but I don't know how. According to the Nexus documentation, there is a web UI reachable under http://<cx-server-ip>:8081, but it is somehow not responding.
I can confirm with docker ps that both the Jenkins and Nexus containers are running and that the Nexus container is listening on TCP 8081. I am also able to reach the Jenkins frontend to configure and run my pipeline.
What am I missing? Is uploading the missing artifacts to the nexus the right approach? Any help is appreciated.
The Nexus container you see acts as a download cache and is by design not accessible from outside, to prevent accidental changes to it. Also, its life-cycle is controlled by the cx-server script, so even if you installed packages there manually, they would be gone once you upgrade Jenkins.
I think the best way to handle this is to set up another Nexus instance where you install the required packages, and to configure the pipeline to use it as described here (mvn_repository_url). This Nexus needs to be configured as a mirror for Maven Central. We don't have specific docs on how to do that, but this post describes a similar setup.
In this setup, you might want to disable the download cache, as it is redundant (set cache_enabled to false).
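Roughly, the relevant server.cfg entries would look like this (the URL is a placeholder for your own Nexus; check the operations guide for the exact keys and syntax):

mvn_repository_url="http://nexus.example.corp:8081/repository/mvn-proxy/"
cache_enabled=false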
I hope this helps.
Kind regards
Florian
The sidecar Nexus acts as a read-only cache for Maven and npm artifacts on the host (and agents) where cx-server is running. By default, it looks up artifacts from Maven Central and the default npm registry. In the current implementation, the cache is completely deleted after stopping cx-server, leading to a loss of all internal state.
If you want to use custom sources, you can set them in server.cfg via mvn_repository_url and npm_registry_url. This is documented in the operations guide, which you can find here: https://github.com/SAP/cloud-s4-sdk-pipeline/blob/master/doc/operations/operations-guide.md
In your case, you have to specify a maven repository which includes the dependencies in question.
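If you upload the SAP artifacts you downloaded into a hosted repository on your own Nexus, mvn deploy:deploy-file is the usual tool. A sketch for one of the missing artifacts (URL, repository id, and file name are placeholders; credentials for the repository id must exist in your settings.xml):

mvn deploy:deploy-file \
  -DgroupId=com.sap.xs2.security \
  -DartifactId=security-commons \
  -Dversion=0.28.6 \
  -Dpackaging=jar \
  -Dfile=security-commons-0.28.6.jar \
  -Durl=http://your-nexus:8081/repository/mvn-hosted/ \
  -DrepositoryId=your-nexus

Repeat for each artifact listed in the error message.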
I had a perfectly working instance of a WP-CLI WordPress plugin that uploads files to S3 using the AmazonS3FullAccess policy. I migrated servers, and the copy started failing with "Failed to copy or write".
I even attached full administrator access to the IAM policy just to see what happens when there are no restrictions, and the copy still fails. Any idea what might be wrong?
Things I have tried: ensuring the time on the new server is correct (via ntpd synchronization) and cross-checking the environment (PHP version, etc.). The application files are exactly the same. I also used the hosts-file method to test against the previous server, and it still works well.
Solved the problem by creating new access keys. For some reason, it seems that migrating a server made the old access keys stop working. Ah, well.
P.S. I also downgraded the policies right back to only what the application needs.
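For anyone else debugging this, a quick sanity-check and rotation sequence with the AWS CLI (user name, bucket, and key id are placeholders):

# Confirm which identity the configured keys resolve to
aws sts get-caller-identity
# Try the same kind of operation the plugin performs
aws s3 cp test.txt s3://my-bucket/test.txt
# Rotate: create a fresh key pair for the uploader user, then retire the old one
aws iam create-access-key --user-name wp-uploader
aws iam delete-access-key --user-name wp-uploader --access-key-id AKIA...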