My Symfony application runs calculations based on a user's request. I'd like to send them an email as a response.
I have created a custom channel and handler in config.yml:
# config.yml
# ...
monolog:
    channels: [buildbot]
    handlers:
        buildbot:
            level: info
            type: stream
            channels: [buildbot]
Now I write logs to it from various services:
<?php
// AppBundle/Services/BuildBot.php
// $this->buildLogger is the channel logger (monolog.logger.buildbot) injected into the service
$this->buildLogger->info('Fabricating robot shell');
In a service I want to email the requestor with log lines from the "buildbot" Monolog channel. How can I read the log lines?
From a design perspective I don’t think that Symfony’s logger is the right tool to use for this task. In my opinion that logger is meant to log information about your application’s activities that may or may not be useful to you as the developer (or other kinds of administrators).
In your use case, by contrast, the log is meant for the end user and doesn't really contain application-level information but request-level information. I would keep those concerns separate.
My personal approach would be to create a simple service (one that might even implement the logger interface) that accepts those messages, subscribes to the kernel.terminate event, and sends the combined messages to the user at the end of the request.
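A minimal sketch of that approach, assuming SwiftMailer is available and using made-up names (BuildReportCollector, the hard-coded recipient) purely for illustration:

<?php
// AppBundle/Services/BuildReportCollector.php (hypothetical)
namespace AppBundle\Services;

use Psr\Log\AbstractLogger;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\PostResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;

class BuildReportCollector extends AbstractLogger implements EventSubscriberInterface
{
    /** @var string[] */
    private $lines = [];

    /** @var \Swift_Mailer */
    private $mailer;

    public function __construct(\Swift_Mailer $mailer)
    {
        $this->mailer = $mailer;
    }

    // PSR-3 entry point: buffer every message instead of writing it to a file.
    public function log($level, $message, array $context = [])
    {
        $this->lines[] = sprintf('[%s] %s', $level, $message);
    }

    public static function getSubscribedEvents()
    {
        return [KernelEvents::TERMINATE => 'onKernelTerminate'];
    }

    // Runs after the response has been sent; mail the collected lines.
    public function onKernelTerminate(PostResponseEvent $event)
    {
        if (!$this->lines) {
            return;
        }

        $message = (new \Swift_Message('Your build report'))
            ->setTo('user@example.com') // resolve the real recipient from the request/user
            ->setBody(implode("\n", $this->lines), 'text/plain');

        $this->mailer->send($message);
        $this->lines = [];
    }
}

Tag the service with kernel.event_subscriber (or rely on autoconfiguration in newer Symfony versions) and have your build services type-hint it instead of the channel logger.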
Anyway, if you really want to do this with Monolog, you should look into its handlers. There is a list of available handlers in the Monolog documentation, along with Symfony-specific examples of how to configure them. You will probably have to write your own mailing handler, because the packaged ones assume a static recipient, whereas you presumably want the mail to be sent to the current user.
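For illustration, a custom handler might look roughly like this (RequesterMailHandler and the flushTo() step are invented for this sketch; it buffers records and mails them once the recipient is known):

<?php
// AppBundle/Monolog/RequesterMailHandler.php (hypothetical)
namespace AppBundle\Monolog;

use Monolog\Handler\AbstractProcessingHandler;
use Monolog\Logger;

class RequesterMailHandler extends AbstractProcessingHandler
{
    private $mailer;
    private $records = [];

    public function __construct(\Swift_Mailer $mailer, $level = Logger::INFO, $bubble = true)
    {
        parent::__construct($level, $bubble);
        $this->mailer = $mailer;
    }

    // Monolog calls this for every record on the channel(s) the handler is attached to.
    protected function write(array $record)
    {
        $this->records[] = $record['formatted'];
    }

    // Call this once the recipient is known, e.g. from a kernel.terminate listener.
    public function flushTo($recipientEmail)
    {
        if (!$this->records) {
            return;
        }

        $message = (new \Swift_Message('Build log'))
            ->setTo($recipientEmail)
            ->setBody(implode('', $this->records), 'text/plain');

        $this->mailer->send($message);
        $this->records = [];
    }
}

Register it as a service and attach it to the buildbot channel with a handler of type: service in the monolog configuration.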
Related
I've read about the automatic configuration reload which, according to the docs, also includes security settings. What I could not figure out yet (and did not see any indication of) is whether Artemis also updates roles etc. when LDAP auth is active.
The question is: in an ActiveMQ Artemis deployment where OpenLDAP is used for authentication and authorization, do I need to take care of updating the roles etc. myself, or is this done automatically?
The documentation you cited is related to reloading broker.xml when a change is detected. It isn't really applicable to the LDAP authorization data since that data is in LDAP and not in broker.xml. However, the documentation for the LegacyLDAPSecuritySettingsPlugin is relevant as it discusses the enableListener option:
enableListener. Whether or not to enable a listener that will automatically receive updates made in the LDAP server and update the broker's authorization configuration in real-time. The default value is true.
Since enableListener defaults to true, changes made to your LDAP authorization data should automatically be reflected in the broker.
The listener is an implementation of both javax.naming.event.NamespaceChangeListener and javax.naming.event.ObjectChangeListener and is registered using the javax.naming.event.EventDirContext#addNamingListener(java.lang.String, java.lang.String, javax.naming.directory.SearchControls, javax.naming.event.NamingListener) method.
That said, you may run into ARTEMIS-2671 which will be resolved in the next release (i.e. 2.12.0). It's also possible that your particular LDAP server doesn't actually support this listener functionality. If that's the case then restarting the broker is your only option to reload the LDAP data. Modifying broker.xml won't reload it.
I've got a project written in Symfony 4 (can update to the latest version if needed). In it I have a situation similar to this:
There is a controller which sends requests to an external system. It goes through records in the DB and sends a request for every row. To do that there is a MagicApiConnector class which connects to the external system, and for every kind of request there is a XxxRequest class (like FooRequest, BarRequest, etc).
So, something general like this:
foreach ( $allRows as $row ) {
$request = new FooRequest($row['a'], $row['b']);
$connector->send($request);
}
Now in order to do all the parameter filling magic, the requests need to access a service which is defined in Symfony's DI. The controller itself neither knows nor cares about this service, but the requests need it.
How can my request classes access this service? I don't want to set it as a dependency of the controller - I could, but it kinda seems awkward, as the controller really doesn't care about it and would only pass it through. It's an implementation detail of the request, and I feel like it shouldn't burden the users of the request with this boilerplate requirement.
Then again, sometimes you need to make a sacrifice in the name of the greater good, so perhaps this is one of those cases? It feels like I'm "going against the grain" and haven't grasped some ideological concept.
Added: OK, the full gory details, no simplification.
This all is happening in the context of two homebrew systems. Let's call them OldApp and NewApp. Both are APIs and NewApp is calling into the OldApp. The APIs are simple REST/JSON style. OldApp is not built on Symfony (mostly even doesn't use a framework), the NewApp is. My question is about NewApp.
The authentication for OldApp APIs comes in three different flavors and might get more in the future if needed (it's not yet dead!). Different API calls use different authentication methods; sometimes even the same API call can be used with different methods (depending on who is calling it). All these authentication methods are also homebrew. One uses POST fields, another uses custom HTTP headers; I don't remember about the third.
Now, NewApp is being called by an Android app which is distributed to many users. Android app actually uses both NewApp and OldApp. When it calls NewApp it passes along extra HTTP headers with authentication data for OldApp (method 1). Thus NewApp can impersonate the Android app user for OldApp. In addition, NewApp also needs to use a special command of OldApp that users themselves cannot call (a question of privilege). Therefore it uses a different authentication mechanism (method 2) for that command. The parameters for that command are stored in local configuration (environment variables).
Before me, a colleague had created the scheme of an APIConnector and APICommand, where you get the connector as a dependency and create command instances as needed. The connector actually performs the HTTP request; the commands tell it what POST fields and what headers to send. I wish to keep this scheme.
But now how do the different authentication mechanisms fit into this? Each command should be able to pass what it needs to the connector; and the mechanisms should be reusable for multiple commands. But one needs access to the incoming request, the other needs access to configuration parameters. And neither is instantiated through DI. How to do this elegantly?
This sounds like a job for factories.
function action(MyRequestFactory $requestFactory)
{
    foreach ($allRows as $row) {
        $request = $requestFactory->createFoo($row['a'], $row['b']);
        $connector->send($request);
    }
}
The factory itself is a service, injected into the controller as part of the normal Symfony design. Whatever additional services are needed will be injected into the factory. The factory in turn can provide whatever services the individual requests happen to need as it creates them.
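A minimal sketch of such a factory, assuming a hypothetical Signer service that the requests need and assuming FooRequest is changed to accept it as an extra constructor argument (MyRequestFactory and Signer are illustrative names, not part of the question):

<?php
// A sketch only - Signer stands in for whatever service the requests really need.

class MyRequestFactory
{
    private $signer;

    public function __construct(Signer $signer)
    {
        // Symfony's autowiring injects the service into the factory,
        // so the controller never has to know about it.
        $this->signer = $signer;
    }

    public function createFoo(string $a, string $b): FooRequest
    {
        // The factory, not the caller, hands the service to the request.
        return new FooRequest($a, $b, $this->signer);
    }
}

With Symfony 4's default services.yaml (autowiring and autoconfiguration on), type-hinting MyRequestFactory in the controller action as shown above is all the wiring you need.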
When I use Cloudify (2.7) to deploy an application (e.g. an application that includes two services, A and B), I try to use Admin.addEventListener() to add some event listeners, but it doesn't work!
I try to add the ProcessingUnitStatusChangedEventListener; when I debug the code, the value of (ProcessingUnitStatusChangedEvent)event.getNewStatus() changes from SCHEDULED to INTACT, then SCHEDULED, then INTACT again.
I also try to add the ProcessingUnitInstanceLifecycleEventListener; when I debug the code, the status is intact, but the service is not available!
Is there any other listener or method to know that the application (not the services) is available, or am I using the listeners in the wrong way?
First, the Admin API is internal - use it at your own risk. And you should not be using it the way you are - Cloudify adds a lot of logic on top of the internal Admin API.
Second, it is not exactly clear where you are executing your code from.
You can always use the rest client to get an accurate state of the application. Look at https://github.com/CloudifySource/cloudify/blob/master/rest-client/src/main/java/org/cloudifysource/restclient/RestClient.java#L388
In addition, if you are running this code in a service lifecycle event handler, the easiest way to implement this is to have your 'top' level service, the one that should be available last, write an application entry to the shared attributes store in its 'postStart' event. Everyone else can just periodically poll on this entry. The polling itself is very fast, all in-memory operations.
If you do not have a top-level service, or your logic is more complicated than that, you would need to use the Service Context API to scan each service and its instances to see if they are up. An explanation of getting service instance state is available here:
cloudify service dependsOn other service
Is there any error logging option in Apigee?
I am enabling a proxy endpoint in Apigee for a customer. If there is any error in the flow, how can I log it to a persistent store?
Specifically, I am using JavaScript policies to parse some of the content returned by the back-end service and convert it to a different format. If there is any parsing error, where and how can I log it?
I am able to catch the error with try catch block.
Can I send an email in the catch block to a specific email address?
Thanks,
Deepak
You can catch the error in your catch block and then set the appropriate variable to contain the detailed error message you might have run into. You can use a Syslog/MessageLogging policy to send all the request details along with any parsing exception that you might have. You need some kind of logging server at your end to send the logs to, or you could use public log management services, such as Loggly. Refer to this section for more details - http://apigee.com/docs/api-services/content/log-messages-using-messagelogging
Vineet I think you are looking to solve the following problems here:
1. During the coding process you want to debug and understand what is going on - something equivalent to log.debug.
2. You also want to trigger exception flows or external processes when things go wrong in your proxy flow.
For #1 you can assign variables and trace using apigee's debug view. Any flow variable assignment in Apigee policies is printed in the debug view, if the policy is executed. So that provides you with log.debug mechanism, whenever you trace.
For #2 you can take a variety of approaches based on the rest of your systems and processes. The previous answer by @Mike Dunker is a good approach. I can suggest a few more alternatives:
As an alternative you can also use Apigee's analytics views to monitor errors, albeit after the fact.
You can set up external monitoring scripts on your backend or on the Apigee endpoint, based on some synthetic transaction that exercises the entire logic of your flow. When the services return an error you can record metrics and/or raise an alert from your monitoring system.
If you have an on-premise installation of Apigee, you may want to consume the Apigee log files on the servers and trigger appropriate actions on errors.
You can publish a desired error message to a JMS queue when an error happens. Refer to Apigee's JMS support here
I am designing a BizTalk solution which requires client applications to subscribe and receive only a certain subset of event messages, depending on their user permissions. Subscription will be done through topic- or content-based routing. The client will subscribe once and receive many messages until they choose to unsubscribe.
Client applications will number in the hundreds and the subscribed topics could change on a regular basis, so defining an individual send port from BizTalk for each receiver isn't a viable solution.
I have thought I could build an additional message broker service which holds the individual client subscriptions and distributes messages sent from a BizTalk port.
I have also seen that a recipient list pattern can be built using orchestrations. This appears to me to still follow a request-response pattern, though, and I am after one subscribe message resulting in many returned event messages.
My message broker solution seems to me to be doubling up on what BizTalk should be good at, so I imagine I am missing some important functionality somewhere. Has anyone tried such an application before and can give some pointers? Should I be investigating the ESB Toolkit as a solution? I have had a look on the net but nothing makes it very clear for this type of topic-subscription model.
Thanks,
Phil
Do take a look at the ESB Toolkit. You can use the itinerary functionality that it adds to BizTalk, either with one of the built-in resolvers (e.g., UDDI) or with your own custom resolver. This allows you to route messages based on configuration (stored in Business Rules or elsewhere).
You will find a developer-oriented overview video of the ESB Toolkit on MSDN, which is a decent introduction to the design process and tooling. There are several other helpful videos there as well.
Your specific scenario can be accomplished with a single itinerary, as described here. Use a receive pipeline with the ESB Dispatch Disassembler component, configure multiple resolvers, and for each resolver a new message is produced.
There are also two samples to look at:
The Itinerary On-Ramp Sample - builds a set of SOAP headers that contain the itinerary that you create in the test client, loads the specific message file from disk, appends the itinerary headers to the message, and submits it to the ESB through an Itinerary on-ramp for processing.
The Scatter-Gather Sample - Also appends SOAP headers containing the itinerary to the message, which is submitted to the ESB through an on-ramp for processing. A Broker orchestration analyzes the settings for its itinerary step, retrieves a collection of resolvers associated with the itinerary step, and for each of those resolvers resolves the service endpoint. After that, the orchestration activates the proper ServiceDispatcher orchestration instances to dispatch the outbound request messages.
You should also look at "How to: Route a Single Message to Multiple Recipients Using an Itinerary Routing Slip" or perhaps look into creating a custom itinerary message service (documentation is here).