I'm new to Plone and Linux, and I'm trying to install the latest patch for Plone (Plone Hotfix 20121106).
I'm using buildout to install it, but when I check my log it seems the patch isn't being downloaded or installed at all.
Here is the eggs section of my buildout.cfg:
eggs =
Products.PloneHotfix20110928
Products.Zope_Hotfix_20110622
Products.PloneHotfix20121106
What I did was stop the Plone service first, then run
sudo ./bin/buildout -Nv
and then start Plone again.
When I check instance.log, it shows that only Products.PloneHotfix20110928 and Products.Zope_Hotfix_20110622 were installed.
Can anyone help me, please?
Thanks in advance.
Check your eggs directory to make sure the egg has been downloaded there. If not, re-run buildout and watch/grep the console messages to confirm it downloads successfully. If it doesn't come down for some reason, you can download it manually and add it to your eggs directory.
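As a quick sanity check, something like this should tell you whether the egg is present and whether buildout fetches it (a rough sketch, assuming a standard buildout layout with the eggs directory next to bin/):
# look for the hotfix egg in the buildout's eggs directory
ls eggs/ | grep -i PloneHotfix20121106
# re-run buildout verbosely and grep the output for the hotfix
sudo ./bin/buildout -Nv 2>&1 | tee buildout.log
grep -i PloneHotfix20121106 buildout.log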
Then, if you start Plone in the foreground, you should see the hotfix being installed in the console messages, e.g.
bin/instance fg
then you should see some output in the console like:
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied registerConfiglet patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied setHeader patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied allow_module patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied get_request_var_or_attr patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied kssdevel patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied widget_traversal patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied gtbn patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied kupu_spellcheck patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied membership_tool patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied queryCatalog patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied uid_catalog patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied renameObjectsByPaths patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied at_download patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied safe_html patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied python_scripts patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied getNavigationRootObject patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied crypto_oracle patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied crypto_oracle_protect patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied ftp patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied atat patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Applied random_string patch
2012-11-06 23:51:08 INFO Products.PloneHotfix20121106 Hotfix installed
Place the patch (PloneHotfix20121106) in the Products folder of your Plone installation:
unpack the zip file into Products,
run the instance in the foreground (bin/instance fg) to confirm the patches are applied,
then start the Plone service again. A rough sketch of these steps is shown below.
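For example (the paths and zip filename are hypothetical; adjust them to your own buildout and to whatever file you downloaded from plone.org):
# unpack the downloaded hotfix into the Products folder of the buildout
cd /path/to/your/plone/buildout
unzip PloneHotfix20121106.zip -d Products/
bin/instance fg   # watch for the "Applied ... patch" / "Hotfix installed" lines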
I host a web site with Nginx on Debian on Google Cloud.
When I install code-server and visit my_ip:8443, Chrome replies with ERR_CONNECTION_TIMED_OUT.
When I execute the command code-server, I get:
INFO code-server v1.1156-vsc1.33.1
INFO Additional documentation: http://github.com/cdr/code-server
INFO Initializing {"data-dir":"/home/naji/.local/share/code-server","extensions-dir":"/home/naji/.local/share/code-server/extensions","working-dir":"/","log-dir":"/home/naji/.cache/code-server/logs/20190723145420188"}
INFO Starting webserver... {"host":"0.0.0.0","port":8443}
WARN No certificate specified. This could be insecure.
WARN Documentation on securing your setup: https://github.com/cdr/code-server/blob/master/doc/security/ssl.md
INFO
INFO Password: ******************
INFO
INFO Started (click the link below to open):
INFO https://localhost:8443/
INFO
INFO Starting shared process [1/5]...
WARN stderr {"data":"(node:17025) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.\n"}
INFO Connected to shared process
What is the solution?
After upgrading to Artifactory Pro 6.2.0, using [RELEASE] as the version in the requested path to fetch the latest Maven release artifact no longer seems to work.
$ wget --no-check-certificate -N --user=reader --password=****** -P . https://artifactory.***.com/artifactory/libs-release-local/envision/tools/envision-buildtools/\[RELEASE\]/envision-buildtools-\[RELEASE\].tgz
Warning: wildcards not supported in HTTP.
--2018-08-14 10:45:59-- https://artifactory.****.com/artifactory/libs-release-local/envision/tools/envision-buildtools/[RELEASE]/envision-buildtools-[RELEASE].tgz
Resolving artifactory.***.com... 10.***.**.**
Connecting to artifactory.***.com|10.***.**.**|:443... connected.
HTTP request sent, awaiting response... 400 Bad Request
2018-08-14 10:45:59 ERROR 400: Bad Request.
Is there any work-around or fix for this?
In the end this was a Tomcat issue, not an Artifactory bug. Recent Tomcat versions reject unencoded square brackets in request URIs with a 400 Bad Request, so those characters have to be explicitly allowed. We had to update server.xml to add the relaxedPathChars and relaxedQueryChars attributes as shown below.
<Connector port="8081" sendReasonPhrase="true" relaxedPathChars='[]' relaxedQueryChars='[]'/>
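On our installs the Connector lives in the bundled Tomcat config under the Artifactory home (assuming the default layout and a service-style install; adjust paths and commands to your environment):
# edit the Connector in the bundled Tomcat's server.xml, then restart Artifactory
sudo vi $ARTIFACTORY_HOME/tomcat/conf/server.xml
sudo service artifactory restart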
I've just installed Jenkins on a Google Cloud VM and configured nginx to point at port 8080. I can enter the initial admin password and then I get to the screen where I can select plugins. When I click on "install suggested plugins", an error appears:
No valid crumb was included in this request
I started Jenkins with the command:
java -Dhudson.security.csrf.requestfield=Jenkins-Crumb -jar jenkins.war
stdout says:
INFO: Session node016ikde2z4paqem02o7wos0rgd1 already being invalidated
Nov 02, 2017 7:57:44 PM hudson.security.csrf.CrumbFilter doFilter
WARNING: Found invalid crumb 27d19a27be31d1d5703128b635b60c3b. Will
check remaining parameters for a valid one...
Nov 02, 2017 7:57:44 PM hudson.security.csrf.CrumbFilter doFilter
WARNING: No valid crumb was included in request for
/pluginManager/installPlugins. Returning 403.
Does anybody know how I can either disable CSRF or include a valid crumb in my request? I can generate a valid crumb by running:
$ curl -u "admin:ebdcf2fcf6f74ee8b4ec907a1486" 'http://localhost:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)'
Jenkins-Crumb:ef6250c9afe294555e20f1b9ab875261
but I don't know what to do with it after that.
Many thanks!
To disable CSRF (although this is not recommended), follow these three steps:
Log in to Jenkins as an Administrator.
Go to: Jenkins > Manage Jenkins > Configure Global Security and find the "Prevent Cross Site Request Forgery exploits" option.
Uncheck this option.
Mention the version of Jenkins you are using, so I can suggest how to provide a valid crumb in your request.
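In the meantime, a rough sketch of how a crumb is normally used (assuming the default crumb issuer endpoint; the credentials, host, and target endpoint below are placeholders) is to send it back as an HTTP header on the request you want to make:
# fetch a crumb in "Header-Name:value" form
CRUMB=$(curl -s -u "admin:PASSWORD_OR_TOKEN" 'http://localhost:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)')
# pass it back as a header on the POST you want to make
curl -u "admin:PASSWORD_OR_TOKEN" -H "$CRUMB" -X POST 'http://localhost:8080/some/endpoint'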
We're running Red Hat 6.4 on two of our nodes.
We've installed the new Cloudera Manager 5.5.0 and we've been trying to create a cluster and add a first node to it (the node is initially clean of any Cloudera component). Unfortunately, during the cluster installation, Cloudera Manager gets stuck every time at:
Installation failed. Failed to receive heartbeat from agent.
Ensure that the host's hostname is configured properly.
Ensure that port 7182 is accessible on the Cloudera Manager Server (check firewall rules).
Ensure that ports 9000 and 9001 are not in use on the host being added.
Check agent logs in /var/log/cloudera-scm-agent/ on the host being added. (Some of the logs can be found in the installation details).
If Use TLS Encryption for Agents is enabled in Cloudera Manager (Administration -> Settings -> Security), ensure that /etc/cloudera-scm-agent/config.ini has use_tls=1 on the host being added. Restart the corresponding agent and click the Retry link here.
We looked around and saw that this is usually caused by a misconfigured /etc/hosts file. So we edited ours on both the Cloudera Manager host and the new node, did a service network restart as well as a service cloudera-scm-server restart, but it didn't work either.
Here's what the /etc/hosts file looks like:
127.0.0.1 localhost
10.186.80.86 domain.node2.fr.net host
10.186.80.105 domain.node1.fr.net mgrnode
We also tried some cleaning up before relaunching the cluster creation by deleting scm_prepare_node.* and .scm_prepare_node.lock.
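Concretely, the cleanup was roughly this (run on the new node, in the directory where scm_prepare_node left these files; paths may differ on your setup):
rm -rf scm_prepare_node.* .scm_prepare_node.lock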
We also looked at service cloudera-scm-agent status on the new node after each installation failure, and we noticed that the service isn't running (even when we do a service restart, the result is still the same):
service cloudera-scm-agent start
Starting cloudera-scm-agent: [ OK ]
service cloudera-scm-agent status
cloudera-scm-agent dead but pid file exists
Here are the agent logs on the new node's side:
tail -f /var/log/cloudera-scm-agent/cloudera-scm-agent.log
[30/Nov/2015 15:07:27 +0000] 24529 MainThread agent INFO Agent Logging Level: INFO
[30/Nov/2015 15:07:27 +0000] 24529 MainThread agent INFO No command line vars
[30/Nov/2015 15:07:27 +0000] 24529 MainThread agent INFO Missing database jar: /usr/share/java/mysql-connector-java.jar (normal, if you're not using this database type)
[30/Nov/2015 15:07:27 +0000] 24529 MainThread agent INFO Missing database jar: /usr/share/java/oracle-connector-java.jar (normal, if you're not using this database type)
[30/Nov/2015 15:07:27 +0000] 24529 MainThread agent INFO Found database jar: /usr/share/cmf/lib/postgresql-9.0-801.jdbc4.jar
[30/Nov/2015 15:07:27 +0000] 24529 MainThread agent INFO Agent starting as pid 24529 user cloudera-scm(420) group cloudera-scm(207).
[30/Nov/2015 15:07:27 +0000] 24529 MainThread agent INFO Because agent not running as root, all processes will run with current user.
[30/Nov/2015 15:07:27 +0000] 24529 MainThread agent WARNING Expected mode 0751 for /var/run/cloudera-scm-agent but was 0755
[30/Nov/2015 15:07:27 +0000] 24529 MainThread agent INFO Re-using pre-existing directory: /var/run/cloudera-scm-agent
[30/Nov/2015 15:07:29 +0000] 24529 MainThread agent INFO Re-using pre-existing directory: /var/run/cloudera-scm-agent/cgroups
Is there anything we're doing wrong?
Thanks in advance for your help!
This time we simply created the cluster with the root user (we didn't check the single user mode option).
Besides, our host had no internet access, so we had created our own repository, and we needed to do one last step before launching the cluster creation, which was importing the GPG key on the host using this command:
sudo rpm --import
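(The full command takes the URL or local path of the repository's GPG key; the location below is purely hypothetical, so replace it with your own repository's key.)
sudo rpm --import http://your-repo-host/cloudera/RPM-GPG-KEY-cloudera   # hypothetical URL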
If anybody finds themselves facing the same problem, hope this helps!
I installed OpenStack. All services are running successfully.
[root@test ~]# nova-manage service list
Binary Host Zone Status State Updated_At
nova-cert localhost.localdomain nova enabled :-) 2012-11-06 04:25:36.396817
nova-scheduler localhost.localdomain nova enabled :-) 2012-11-06 04:25:41.735192
nova-network compute nova enabled :-) 2012-11-06 04:25:42.109157
nova-compute compute nova enabled :-) 2012-11-06 04:25:43.240902
After that I changed HOSTNAME in /etc/sysconfig/network to myhost.mydomain, then restarted the services.
Now I get duplicate entries for the services.
[root@test ~]# nova-manage service list
Binary Host Zone Status State Updated_At
nova-cert localhost.localdomain nova enabled XXX 2012-11-06 04:25:36.396817
nova-cert myhost.mydomain nova enabled :-) 2012-11-06 05:25:36.396817
nova-scheduler localhost.localdomain nova enabled XXX 2012-11-06 04:25:41.735192
nova-scheduler myhost.mydomain nova enabled :-) 2012-11-06 05:25:41.735192
nova-network compute nova enabled :-) 2012-11-06 04:25:42.109157
nova-compute compute nova enabled :-) 2012-11-06 04:25:43.240902
Of these, the old services are no longer running.
I want to remove the services for the host localhost.localdomain.
I checked nova-manage service --help, but there is no option to delete :(.
[root@test ~]# nova-manage service --help
--help does not match any options:
describe_resource
disable
enable
list
Looking at your example above, I suspect you're seeing a duplicate because you have two hosts with their hostnames set identically. If this is the case, the following code/answer isn't likely to help you out too much. There's an implicit assumption in that whole setup that hostnames of nodes upon which nova worker processes run will be unique.
In the latest branch, there isn't a command explicitly enabled for this, but the API exists underneath to do what you're after. Here's a snippet of code (untested!) that should do what you want; or at least point you to the relevant API if you're interested.
from nova import context
from nova import db

hostname = 'some_hostname'
service_name = 'nova_service_you_want_to_destroy'

ctxt = context.get_admin_context()
# look up the service record registered for this host/binary combination
service = db.service_get_by_args(ctxt, hostname, service_name)
# remove that service row from the database
db.service_destroy(ctxt, service['id'])
NOTE: this will remove the service from the database, or raise an exception if it doesn't exist (or something else goes wrong). If the service is still running, expect that it will just "show up" again, as the service list is populated by the various nova worker processes reporting in.
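As a usage sketch (the filename is hypothetical; run it with the same Python environment nova uses, then confirm the stale rows are gone):
python remove_stale_service.py    # run once per stale binary/host combination
nova-manage service list          # the localhost.localdomain entries should no longer be listed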