I have a scheduled job (I'm using the apscheduler.scheduler library) that needs access to the Plone site object, but I don't have a context in this case. I subscribed to the IProcessingStart event, but unfortunately the getSite() function returns None.
Also, is there a programmatic way to obtain a specific Plone site from the Zope server root?
Additional info:
I have a job like this:
from apscheduler.scheduler import Scheduler
from zope.site import hooks
from Products.CMFCore.utils import getToolByName

sched = Scheduler()

@sched.cron_schedule(day_of_week="*", hour="9", minute="0")
def myjob():
    site = hooks.getSite()
    print site
    print site.absolute_url()
    catalogtool = getToolByName(site, "portal_catalog")
    print catalogtool
The site variable is always None inside an APScheduler job, and we need information about the site for the job to run correctly.
We avoided triggering the job through a public URL because a user could then execute it directly.
Build a context first with setSite(), and perhaps a request object:
from zope.app.component.hooks import setSite
from Testing.makerequest import makerequest

# 'app' is the Zope application root object
app = makerequest(app)
# traverse to the Plone site by its id
site = app[site_id]
setSite(site)
This does require that you open a ZODB connection and traverse to the site object yourself.
However, it is not clear how you are accessing the Plone site from your scheduler. Instead of running a full new Zope process, consider calling a URL from your scheduling job. If you integrated APScheduler into your Zope process, you'd have to create a new ZODB connection in the job, traverse to the Plone site from the root, and use the above method to set up the site hooks (needed for a lot of local components anyway).
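For illustration, here's a minimal sketch of what such a job could look like when APScheduler runs inside the Zope process. Zope2.app() is used to get the application root over a fresh ZODB connection, and the site id 'Plone' is only an assumption; adapt both to your setup:

import transaction
import Zope2
from Testing.makerequest import makerequest
from zope.app.component.hooks import setSite
from Products.CMFCore.utils import getToolByName

def myjob():
    # Open the application root over a fresh ZODB connection
    # (assumes the scheduler thread lives inside the Zope process)
    root = Zope2.app()
    try:
        # Wrap the root in a fake request and traverse to the site;
        # 'Plone' is a hypothetical site id, use your own
        app = makerequest(root)
        site = app['Plone']
        setSite(site)
        catalogtool = getToolByName(site, "portal_catalog")
        print site.absolute_url()
        print catalogtool
        # Commit only if the job modified content
        transaction.commit()
    finally:
        # Always close the connection opened by Zope2.app()
        root._p_jar.close()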
I've enabled RBAC as an environment variable in my docker-compose file.
- AIRFLOW__WEBSERVER__RBAC=True
I want to capture the user who kicked off a DAG inside my DAG files.
I tried using from flask_login import current_user, but I get the value of current_user as None.
How can I capture user details when using RBAC?
According to the Airflow documentation, the RBAC security model is handled by Flask AppBuilder (FAB):
Airflow uses flask_login and exposes a set of hooks in the
airflow.default_login module. You can alter the content and make it
part of the PYTHONPATH and configure it as a backend in
airflow.cfg.
The flask_login module provides user management operations, so you can fetch the current user through the dedicated flask_login.current_user property; some extra fields were added to it, as described in pull request #3438:
if current_user and hasattr(current_user, 'user'):
    user = current_user.user.username
elif current_user and hasattr(current_user, 'username'):
    user = current_user.username
I suppose that you can use current_user.user.username to fetch a user login.
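As a rough sketch (not code from the Airflow source), the check above could be wrapped in a small helper; get_request_username is a hypothetical name, and this only works where a Flask request context exists (for example in a webserver plugin or view), not inside a running task:

from flask_login import current_user

def get_request_username():
    # With RBAC enabled, Flask AppBuilder may wrap the Airflow user,
    # so check for the nested .user attribute first (see PR #3438)
    if current_user and hasattr(current_user, 'user'):
        return current_user.user.username
    # Fall back to a plain flask_login user object
    elif current_user and hasattr(current_user, 'username'):
        return current_user.username
    return None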
I'm running into an issue integrating Spring Security with my Elastic Beanstalk app backed by a MySQL database. If I deploy my app I'm able to log in correctly for some time, but eventually I'll start to receive login errors without an exception being thrown, so I'm unable to get any useful information about the issue. I've downloaded the logs as well and can't see anything of value. I can see where the logs show accessing the public page, attempting to access the private section, returning the login page, and then the loginError page; however, there is nothing about any issue.
Even though I'm unable to log in through a browser, I am able to log in if I run the app from an IDE, and I can also view the db in MySQL Workbench. This suggests to me the problem is due to some persistent state on the server.
I've had a similar problem before with another Beanstalk app using Spring Security and was able to resolve it by setting application properties as follows:
spring.datasource.test-on-borrow=true
spring.datasource.validation-query=SELECT 1
I'm using a more recent version of Spring than that app, and the properties have been changed to be datasource-specific, so I tried adding the following properties:
spring.datasource.tomcat.test-on-borrow=true
spring.datasource.tomcat.validation-query=SELECT 1
When that didn't work I added another based on an answer to a similar question here; now the properties are:
spring.datasource.tomcat.test-on-borrow=true
spring.datasource.tomcat.test-while-idle=true
spring.datasource.tomcat.validation-query=SELECT 1
That seemed to work (possibly due to less login activity) but eventually resulted in the same behavior.
I've looked into the various properties available but before I spend a lot of time randomly setting and/or overriding default settings I wanted to see if there's a reliable way to deal with this.
How can I configure my datasource to avoid login errors after long periods of time?
This isn't a problem of specific configuration values but of where those configurations reside. The default location for application.properties (/resources in IntelliJ) is fine when deploying as a jar with an embedded Tomcat server, but not as a war with a provided server. The file isn't found/used, so no changes to it affect the configuration given by AWS.
There are a number of ways to handle this; I chose to add an RDS configuration bean in my SpringBootServletInitializer:
@Bean
public RdsInstanceConfigurer instanceConfigurer() {
    return () -> {
        TomcatJdbcDataSourceFactory dataSourceFactory =
                new TomcatJdbcDataSourceFactory();
        // Abandoned connections
        dataSourceFactory.setRemoveAbandonedTimeout(60);
        dataSourceFactory.setRemoveAbandoned(true);
        dataSourceFactory.setLogAbandoned(true);
        // Tests
        dataSourceFactory.setTestOnBorrow(true);
        dataSourceFactory.setTestOnReturn(false);
        dataSourceFactory.setTestWhileIdle(false);
        // Validation
        dataSourceFactory.setValidationInterval(30000);
        dataSourceFactory.setTimeBetweenEvictionRunsMillis(30000);
        dataSourceFactory.setValidationQuery("SELECT 1");
        return dataSourceFactory;
    };
}
Below are the settings that worked for me, taken from "Connection to Db dies after >4<24 in spring-boot jpa hibernate":
dataSourceFactory.setMaxActive(10);
dataSourceFactory.setInitialSize(10);
dataSourceFactory.setMaxIdle(10);
dataSourceFactory.setMinIdle(1);
dataSourceFactory.setTestWhileIdle(true);
dataSourceFactory.setTestOnBorrow(true);
dataSourceFactory.setValidationQuery("SELECT 1 FROM DUAL");
dataSourceFactory.setValidationInterval(10000);
dataSourceFactory.setTimeBetweenEvictionRunsMillis(20000);
dataSourceFactory.setMinEvictableIdleTimeMillis(60000);
I have a newly created Magnolia instance. I tried to create an app via the bundled groovy script and publish news to the public instance, and I got an error.
It happened because the 'ebtnews' workspace is not synchronised from the author to the public instance. So the question is: how do I sync a workspace from author to public?
What I do: every time I add a new workspace in the module definition XML for my author instance, I make sure I also add this workspace in the module definition XML for my public instance. Then you need to restart both the author and the public instance for the new workspace to be created.
Alternatively, you can just run the following via the groovy console or a script (app_repository, app_workspace and app_node_type are variables holding the names of your repository, workspace and node type; ctx is the context provided by the console):
// create workspace
Components.getSingleton(RepositoryManager.class).createWorkspace(app_repository, app_workspace)
// check we registered all right
appSession = ctx.getJCRSession(app_workspace)
// register node type
nodeTypeManager = appSession.getWorkspace().getNodeTypeManager()
type = NodeTypeTemplateUtil.createSimpleNodeType(nodeTypeManager, app_node_type, Arrays.asList(NodeType.NT_HIERARCHY_NODE, NodeType.MIX_REFERENCEABLE, NodeTypes.Created.NAME, NodeTypes.Activatable.NAME, NodeTypes.LastModified.NAME, NodeTypes.Renderable.NAME))
nodeTypeManager.registerNodeType(type, true)
appSession.save()
// double check it registered all right
nodeTypeManager.getNodeType(app_node_type)
You will also want to register basic security rights for the workspace, add it to the subscriber's workspace mappings to enable activation, and possibly include/exclude it from the list of triggers that flush the cache when content is updated on the public instance.
You can find code to do all of that in the createAppScript sample script in the groovy module; the code I've pasted above is actually from that same script.
The advantage is that you can do all of this at runtime without a restart. The disadvantage is that you need to run the same code on each instance.
I'm looking at migrating business processes into Windows Workflow; the client app will be ASP.NET MVC and the workflows are likely to be hosted via IIS.
I want to create a common 'simple task' activity which can be used across multiple workflows. Activity properties would look something like this:
Related customer
Assigned agent
Prompt ("Please review PO #12345")
Text for 'true' button ("Accept")
Text for 'false' button ("Reject")
Variable to store result in
Once the workflow hits this activity a task should be put into a db table. The web app will query the table and show the agent a list of tasks they need to complete. Once they hit accept / reject the workflow needs to resume.
It's the last bit that I'm stuck on. What do I need to store in the DB table to resume a workflow? Given that the tasks table will be used by multiple workflows, how would I instantiate the workflow to resume it? I've looked at bookmarks, but they assume that you know the type of workflow that you're resuming. Do I need to use reflection, or is there a method in WF where I can pass a workflow id and it will instantiate it?
You can host the workflow as a workflow service and control it via the control endpoint.
For more info about the control endpoint, see:
http://msdn.microsoft.com/en-us/library/ee358723.aspx
I'm working on an Adobe AIR application which can upload files to a web server, which is running Apache and PHP. Several files can be uploaded at the same time and the application also calls the web server for various API requests.
The problem I'm having is that if I start two file uploads, any other HTTP requests made while they are in progress will time out, which is causing a problem for the application and from a user's point of view.
Are Adobe AIR applications limited to 2 HTTP connections, or is something else probably the issue?
From searching about this issue I've not found much, but one article did indicate that it isn't limited to just two connections.
The file uploads are performed by calling the File class's upload() method, and the API calls are done using the HTTPService class. The development web server I am using is a WAMP server; however, when the application is released it will be talking to a LAMP server.
Thanks,
Grant
Here is the code I'm using to upload the file:
protected function btnAddFile_clickHandler(event:MouseEvent):void
{
    // Create a new File object and display the browse file dialog
    var uploadFile:File = new File();
    uploadFile.browseForOpen("Select File to Upload");
    uploadFile.addEventListener(Event.SELECT, uploadFile_SelectedHandler);
}

private function uploadFile_SelectedHandler(event:Event):void
{
    // Get the File object which was used to select the file
    var uploadFile:File = event.target as File;
    uploadFile.addEventListener(ProgressEvent.PROGRESS, file_progressHandler);
    uploadFile.addEventListener(IOErrorEvent.IO_ERROR, file_ioErrorHandler);
    uploadFile.addEventListener(Event.COMPLETE, file_completeHandler);

    // Create the request URL based on the download URL
    var requestURL:URLRequest = new URLRequest(AppEnvironment.instance.serverHostname + "upload.php");
    requestURL.method = URLRequestMethod.POST;

    // Set the post parameters
    var params:URLVariables = new URLVariables();
    params.name = "filename.ext";
    requestURL.data = params;

    // Start uploading the file to the server
    uploadFile.upload(requestURL, "file");
}
Here is the code for the API calls:
private function sendHTTPPost(apiFile:String, postParams:Object, resultCallback:Function, initialCallerResultCallback:Function):void
{
    var httpService:mx.rpc.http.HTTPService = new mx.rpc.http.HTTPService();
    httpService.url = AppEnvironment.instance.serverHostname + apiFile;
    httpService.method = "POST";
    httpService.requestTimeout = 10;
    httpService.resultFormat = HTTPService.RESULT_FORMAT_TEXT;
    httpService.addEventListener("result", resultCallback);
    httpService.addEventListener("fault", httpFault);

    var token:AsyncToken = httpService.send(postParams);

    // Add the initial caller's result callback function to the token
    token.initialCallerResultCallback = initialCallerResultCallback;
}
If you are on a Windows system, Adobe AIR uses Microsoft's WinINet library to access the web. By default this library limits the number of concurrent connections to a single server to 2:
WinInet limits the number of simultaneous connections that it makes to a single HTTP server. If you exceed this limit, the requests block until one of the current connections has completed. This is by design and is in agreement with the HTTP specification and industry standards.
... Connections to a single HTTP 1.1 server are limited to two simultaneous connections
There is an API to change the value of this limit but I don't know if it is accessible from AIR.
Since this limit also affects page loading speed for web sites, some sites use multiple DNS names for artifacts such as images, JavaScript files and stylesheets to allow the browser to open more parallel connections.
So if you are controlling the server part, a workaround could be to create DNS aliases like www.example.com for uploads and api.example.com for API requests.
So as I was looking into this, I came across this info about using File.upload() in the documentation:
Starts the upload of the file to a remote server. Although Flash Player has no restriction on the size of files you can upload or download, the player officially supports uploads or downloads of up to 100 MB. You must call the FileReference.browse() or FileReferenceList.browse() method before you call this method.
Listeners receive events to indicate the progress, success, or failure of the upload. Although you can use the FileReferenceList object to let users select multiple files for upload, you must upload the files one by one; to do so, iterate through the FileReferenceList.fileList array of FileReference objects.
The FileReference.upload() and FileReference.download() functions are
nonblocking. These functions return after they are called, before the
file transmission is complete. In addition, if the FileReference
object goes out of scope, any upload or download that is not yet
completed on that object is canceled upon leaving the scope. Be sure
that your FileReference object remains in scope for as long as the
upload or download is expected to continue.
I wonder if something there could be giving you issues with uploading multiple files. I see that you are using browseForOpen() instead of browse(). It seems like they probably do the same thing... but maybe not.
I also saw this in the File class documentation
Note that because of new functionality added to the Flash Player, when publishing to Flash Player 10, you can have only one of the following operations active at one time: FileReference.browse(), FileReference.upload(), FileReference.download(), FileReference.load(), FileReference.save(). Otherwise, Flash Player throws a runtime error (code 2174). Use FileReference.cancel() to stop an operation in progress. This restriction applies only to Flash Player 10. Previous versions of Flash Player are unaffected by this restriction on simultaneous multiple operations.
When you say that you let users upload multiple files, do you mean subsequent calls to browse() and upload(), or do you mean one call that includes multiple files? It seems that if you are trying to do multiple separate calls, that may be an issue.
Anyway, I don't know if this is much help. It definitely seems that what you are trying to do should be possible. I can only guess that what is going wrong is perhaps a problem with implementation. Good luck :)
Reference: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/FileReference.html#upload()
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/FileReference.html#browse()
Just because I was thinking about a very similar question due to an error in one of my actual apps, I decided to write down the answer I found.
I instantiated 11 HttpConnections and was wondering why my Flex 4 application stopped working and threw an HTTP error, although it had previously worked fine with just 5 simultaneous HttpConnections to the same server.
I tested this myself because I did not find anything regarding this in the Flex docs or on the internet.
I found that using more than 5 HTTPConnections was the reason the Flex application threw the runtime error.
As a temporary workaround I decided to instantiate the connections one after another: load the next one after the previous one has received its data, and so on.
That is of course only temporary, since one of the next steps will be to alter the responding server code so that a single response contains the results of queries against more than one table. Of course the client application logic needs to be altered too.