AppFabric 1.1: How Many DataCacheServerEndpoints Should a Client Connect To?

The AppFabric 1.1 client documentation discusses assigning a list of DataCacheServerEndpoints to the DataCacheFactoryConfiguration. Most of the examples show a list consisting of one or perhaps two different cache servers. If the cluster consists of n servers, should the client register each of the servers? Does it matter what order the servers are registered in? For example, if I have 50 servers in my web tier and 5 servers in my cache tier, should each of the 50 web servers register all 5 caching servers? Here is sample code:
// Declare array for cache host(s).
DataCacheServerEndpoint[] servers = new DataCacheServerEndpoint[5];
servers[0] = new DataCacheServerEndpoint("Cache01", 22233);
servers[1] = new DataCacheServerEndpoint("Cache02", 22233);
servers[2] = new DataCacheServerEndpoint("Cache03", 22233);
servers[3] = new DataCacheServerEndpoint("Cache04", 22233);
servers[4] = new DataCacheServerEndpoint("Cache05", 22233);
// Setup the DataCacheFactory configuration.
DataCacheFactoryConfiguration factoryConfig = new DataCacheFactoryConfiguration();
factoryConfig.Servers = servers;
// Create a configured DataCacheFactory object.
DataCacheFactory mycacheFactory = new DataCacheFactory(factoryConfig);
// Get a cache client for the cache named "default".
DataCache myDefaultCache = mycacheFactory.GetCache("default");
Can each web server register identically, and will the load be balanced across the caching tier? If a registered server becomes unavailable, is the next one tried in sequence, or is it randomized? Links to supporting documentation would be helpful.
Related to load balancing, Jason Roth wrote the following (is there other documentation available?):
The AppFabric client is a smart client, and it can directly contact whichever server has your data. The application need not worry about load balancing; this is done using the routing client.

Based on some testing, and letting Jason Roth's comment sink in, I think the DataCacheServerEndpoint is used by the "smart client" to retrieve the list of cache cluster members when the GetCache method is called on the DataCacheFactory. The DataCache object is the thing that is smart: if the server used in the DataCacheServerEndpoint instantiation goes offline or otherwise becomes unavailable, the smart client still has access to the other cluster members. Therefore, the purpose of a list of more than one DataCacheServerEndpoint is to provide redundancy when calling the GetCache method.
The advice is that the DataCache object should follow a singleton pattern and not be instantiated on each request for data from the cache, which is why there is no need to load-balance or provide a VIP for the individual DataCacheServerEndpoints.
Instantiate as many DataCacheServerEndpoints as needed to ensure at least one is up at all times; there is no need to add every member of the cache cluster unless that is the only way to ensure at least one is up.
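As a minimal sketch of that pattern (the class name, endpoint names, and the two-endpoint count are illustrative, not from the original post):
using Microsoft.ApplicationServer.Caching;

public static class CacheClient
{
    // One factory and one DataCache for the whole application, per the advice above.
    private static readonly DataCacheFactory Factory = CreateFactory();
    public static readonly DataCache DefaultCache = Factory.GetCache("default");

    private static DataCacheFactory CreateFactory()
    {
        // A couple of endpoints are enough for redundancy; the smart client
        // learns the full cluster membership once it connects.
        var servers = new DataCacheServerEndpoint[]
        {
            new DataCacheServerEndpoint("Cache01", 22233),
            new DataCacheServerEndpoint("Cache02", 22233)
        };
        var config = new DataCacheFactoryConfiguration { Servers = servers };
        return new DataCacheFactory(config);
    }
}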
When it comes to administering boxes in the cache cluster (for instance, applying monthly patches), consider minimizing the cache thrashing and rebalancing by administering a single box at a time, rather than attempting to administer groups of boxes in "waves".

Related

Mock real gRPC server responses

We have a microservice that needs to be integration tested (real calls, but no network communication with anything outside of the test namespace in Kubernetes) in our pipeline. It also relies on an external gRPC server which we have no control over.
Here is what we'd like to have happen (the original post included a diagram): a test harness provides the microservice boundary with 'external' data, then keeps calling the code under test via REST until it gets back the proper number of records or it times out. The code pulls records from an internal database, as well as data associated with those records from a gRPC call. Since we do not own the gRPC service, but are doing integration tests, we need a few pre-defined responses to the two gRPC services we call.
Since our integration tests are self-contained right now, and we don't want to write an entirely new gRPC server implementation just to mimic calls, is there a way to stand up a real gRPC server and configure it to return canned responses? The request is pretty much like a mock setup, except with an actual server.
We need to be able to:
give the server multiple proto files to interpret and have it expose those as endpoints; the proto files must be able to have different package names
configure the responses to each call using files we can store in source control
run in a Linux Docker container (no Windows)
I did find gripmock, which seemed almost exactly what we need, but it only serves one proto file per container. It supposedly can serve more than one, but I can't get that to work, and their example that serves two files implies each proto file must have the same package name, which will likely never happen with our scenarios. In the meantime we are using it (a sketch of a typical setup follows below), but if we have 10 gRPC call dependencies, we now have to run 10 gripmock servers.
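For reference, a minimal gripmock setup looks roughly like this; the service, method, field, and file names are illustrative, and this assumes the tkpd/gripmock Docker image with its documented JSON stub format and --stub flag:
stubs/example.json:
{
  "service": "ExampleService",
  "method": "GetRecord",
  "input": {
    "equals": { "id": "42" }
  },
  "output": {
    "data": { "name": "canned record" }
  }
}
docker run -p 4770:4770 -p 4771:4771 \
  -v $(pwd)/proto:/proto -v $(pwd)/stubs:/stub \
  tkpd/gripmock --stub=/stub /proto/example.proto
Port 4770 serves the mocked gRPC endpoints; 4771 exposes an HTTP API for adding stubs at runtime.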
Wikipedia contains a list of API mocking tools. Looking at that list today, there is a commercial tool that supports gRPC called Traffic Parrot, which allows you to create gRPC mocks based on your proto files. You can give it multiple proto files, store the mocks in Git, and run the tool in Docker.
There are also open-source tools like GripMock, but it does not generate stubs based on proto files; you have to create them manually. Also, as of today the project has not kept up with proto and gRPC developments, e.g. the package-name issue you discovered yourself above (it works only if the package names in different proto files are the same). There are a few other open-source tools, like grpc-wiremock, grpc-mock, or bloomrpc-mock, but they still lack widespread adoption and hence might be risky to adopt for an important enterprise project.
Keep in mind that the generated mock will only be a test double; it will not replicate the full behaviour of the system the proto file corresponds to. If you also want to partially replicate the semantics of the messages, consider recording the gRPC messages to create the mocks; that way you can see the sample data as well.
Take a look at this JS library which hopefully does what you need:
https://github.com/alenon/grpc-mock-server
Usage example:
private static readonly PROTO_PATH: string = __dirname + "/example.proto";
private static readonly PKG_NAME: string = "com.alenon.example";
private static readonly SERVICE_NAME: string = "ExampleService";
...
const implementations = {
    ex1: (call: any, callback: any) => {
        const response: any =
            new this.proto.ExampleResponse.constructor({msg: "the response message"});
        callback(null, response);
    },
};
this.server.addService(PROTO_PATH, PKG_NAME, SERVICE_NAME, implementations);
this.server.start();

Can we get a ConfigurationClient and/or SecretClient from a ConfigurationBuilder object in Azure Functions (.NET Core)?

I was working on a project which required me to create Key Vault references in Azure App Configuration, add/update secrets in Key Vault, and access values in App Configuration using Configuration.
Currently, I'm using:
- ConfigurationClient to create Key Vault references.
- SecretClient to add/update secrets in Key Vault.
- Configuration built using builder.AddAzureAppConfiguration().Build() to access values in App Configuration (using builder.AddAzureAppConfiguration() is a necessity due to its features).
So, basically, three connections to Azure are made here. Is there any way to decrease the number of connections? For example, by using the ConfigurationBuilder to get a ConfigurationClient and/or SecretClient.
Since your application is accessing two different resources, App Configuration and Key Vault, a minimum of two connections are needed. This is due to lack of support for shared connections across different services.
Assuming your application is using ConfigureKeyVault to access Key Vault references, the call to AddAzureAppConfiguration().Build() is actually creating two connections - one to App Configuration and the other to Key Vault. In this case, there are a total of 4 connections. You can reduce it to 3 by registering the SecretClient you created to add/update secrets in Key Vault in the AddAzureAppConfiguration method.
SecretClient secretClient = new SecretClient(new Uri("https://my-keyvault-uri"), new DefaultAzureCredential());
builder.AddAzureAppConfiguration(options =>
{
    options.Connect(settings["connection_string"])
           .ConfigureKeyVault(kv => kv.Register(secretClient));
});
At this time, there isn't a supported way to provide an existing instance of ConfigurationClient while setting up the AddAzureAppConfiguration method, but this may be supported in the future.

How to get current channel domain in pipeline based on the current application

In a pipeline, I need to get the channel domain to which the current application is assigned.
I get the current ApplicationBO instance, but I haven't been able to get the channel domain from it (I tried inspecting it in the debugger, but I can only get the domain for the application, not for the channel).
This is how currently applications and channels are assigned:
Company organization:
Channel 1
App 1 <--- Get Channel1 if in this app
Channel 2
App 2 <--- Get Channel2 if in this app
Both applications share a common cartridge, which contains the pipeline in which I need to get the current channel.
There are two options:
Call the pipeline DetermineRepositories-Channel, which returns a Repository object (that is the channel). On the Repository, use the object path Repository:RepositoryDomain to get the Domain. I'm not sure how big the performance implication is, though.
Use the object path ApplicationBO:Extension("PersistentObjectBOExtension"):PersistentObject:Domain to get the owning domain of the application itself. That will always be the channel (domain), because that's where storefront applications are born.
In case you need to convert the Domain object to a Repository object you can use pipelet GetRepositoryByRepositoryDomain.

Adobe AIR HTTP Connection Limit

I'm working on an Adobe AIR application which can upload files to a web server, which is running Apache and PHP. Several files can be uploaded at the same time and the application also calls the web server for various API requests.
The problem I'm having is that if I start two file uploads, any other HTTP requests made while the uploads are in progress will time out, which is a problem for the application from a user's point of view.
Are Adobe AIR applications limited to 2 HTTP connections, or is something else probably the issue?
From searching about this issue I've not found much, but one article did indicate that it wasn't limited to just two connections.
The file uploads are performed by calling the File class's upload() method, and the API calls are made using the HTTPService class. The development web server I am using is a WAMP server; however, when the application is released it will be talking to a LAMP server.
Thanks,
Grant
Here is the code I'm using to upload the file:
protected function btnAddFile_clickHandler(event:MouseEvent):void
{
    // Create a new File object and display the browse-file dialog
    var uploadFile:File = new File();
    uploadFile.browseForOpen("Select File to Upload");
    uploadFile.addEventListener(Event.SELECT, uploadFile_SelectedHandler);
}

private function uploadFile_SelectedHandler(event:Event):void
{
    // Get the File object which was used to select the file
    var uploadFile:File = event.target as File;
    uploadFile.addEventListener(ProgressEvent.PROGRESS, file_progressHandler);
    uploadFile.addEventListener(IOErrorEvent.IO_ERROR, file_ioErrorHandler);
    uploadFile.addEventListener(Event.COMPLETE, file_completeHandler);

    // Create the request URL pointing at the upload script on the server
    var requestURL:URLRequest = new URLRequest(AppEnvironment.instance.serverHostname + "upload.php");
    requestURL.method = URLRequestMethod.POST;

    // Set the POST parameters
    var params:URLVariables = new URLVariables();
    params.name = "filename.ext";
    requestURL.data = params;

    // Start uploading the file to the server
    uploadFile.upload(requestURL, "file");
}
Here is the code for the API calls:
private function sendHTTPPost(apiFile:String, postParams:Object, resultCallback:Function, initialCallerResultCallback:Function):void
{
    var httpService:mx.rpc.http.HTTPService = new mx.rpc.http.HTTPService();
    httpService.url = AppEnvironment.instance.serverHostname + apiFile;
    httpService.method = "POST";
    httpService.requestTimeout = 10;
    httpService.resultFormat = HTTPService.RESULT_FORMAT_TEXT;
    httpService.addEventListener("result", resultCallback);
    httpService.addEventListener("fault", httpFault);

    var token:AsyncToken = httpService.send(postParams);

    // Add the initial caller's result callback function to the token
    token.initialCallerResultCallback = initialCallerResultCallback;
}
If you are on a Windows system, Adobe AIR uses Microsoft's WinINet library to access the web. By default, this library limits the number of concurrent connections to a single server to 2:
WinInet limits the number of simultaneous connections that it makes to a single HTTP server. If you exceed this limit, the requests block until one of the current connections has completed. This is by design and is in agreement with the HTTP specification and industry standards.
... Connections to a single HTTP 1.1 server are limited to two simultaneous connections
There is an API to change the value of this limit, but I don't know if it is accessible from AIR.
Since this limit also affects page-loading speed for web sites, some sites use multiple DNS names for artifacts such as images, JavaScript, and stylesheets, allowing a browser to open more parallel connections.
So if you control the server side, a workaround could be to create DNS aliases: for example, www.example.com for uploads and api.example.com for API requests.
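A minimal sketch of that workaround, assuming you control DNS and both names point at the same server (the hostnames and the upload.php script are illustrative):
import flash.net.URLRequest;
import flash.net.URLRequestMethod;
import flash.net.URLVariables;
import mx.rpc.http.HTTPService;

private static const UPLOAD_HOST:String = "https://www.example.com/";
private static const API_HOST:String = "https://api.example.com/";

// Uploads go to one hostname...
private function buildUploadRequest(fileName:String):URLRequest
{
    var request:URLRequest = new URLRequest(UPLOAD_HOST + "upload.php");
    request.method = URLRequestMethod.POST;
    var params:URLVariables = new URLVariables();
    params.name = fileName;
    request.data = params;
    return request;
}

// ...while API calls go to another, so each hostname gets its own
// per-server connection limit in WinINet.
private function buildApiService(apiFile:String):HTTPService
{
    var service:HTTPService = new HTTPService();
    service.url = API_HOST + apiFile;
    service.method = "POST";
    return service;
}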
So as I was looking into this, I came across this info about using File.upload() in the documentation:
Starts the upload of the file to a remote server. Although Flash Player has no restriction on the size of files you can upload or download, the player officially supports uploads or downloads of up to 100 MB. You must call the FileReference.browse() or FileReferenceList.browse() method before you call this method.
Listeners receive events to indicate the progress, success, or failure of the upload. Although you can use the FileReferenceList object to let users select multiple files for upload, you must upload the files one by one; to do so, iterate through the FileReferenceList.fileList array of FileReference objects.
The FileReference.upload() and FileReference.download() functions are nonblocking. These functions return after they are called, before the file transmission is complete. In addition, if the FileReference object goes out of scope, any upload or download that is not yet completed on that object is canceled upon leaving the scope. Be sure that your FileReference object remains in scope for as long as the upload or download is expected to continue.
I wonder if something there could be giving you issues with uploading multiple files. I see that you are using browseForOpen() instead of browse(). It seems like they probably do the same thing... but maybe not.
I also saw this in the File class documentation
Note that because of new functionality added to the Flash Player, when publishing to Flash Player 10, you can have only one of the following operations active at one time: FileReference.browse(), FileReference.upload(), FileReference.download(), FileReference.load(), FileReference.save(). Otherwise, Flash Player throws a runtime error (code 2174). Use FileReference.cancel() to stop an operation in progress. This restriction applies only to Flash Player 10. Previous versions of Flash Player are unaffected by this restriction on simultaneous multiple operations.
When you say that you let users upload multiple files, do you mean subsequent calls to browse() and upload(), or do you mean one call that includes multiple files? It seems that if you are trying to do multiple separate calls, that may be an issue.
Anyway, I don't know if this is much help. It definitely seems that what you are trying to do should be possible. I can only guess that what is going wrong is perhaps a problem with implementation. Good luck :)
Reference: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/FileReference.html#upload()
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/FileReference.html#browse()
Just because I was thinking about a very similar question due to an error in one of my actual apps, I decided to write down the answer I found.
I instantiated 11 HTTPConnections and was wondering why my Flex 4 application stopped working and threw an HTTP error, although it had been working well with just 5 simultaneous HTTPConnections to the same server.
I tested this myself because I did not find anything regarding it in the Flex docs or on the internet.
I found that using more than 5 HTTPConnections was the reason for the Flex application to throw the runtime error.
I decided to instantiate the connections one after another as a temporary workaround: load the next one after the previous one has received its data, and so on (see the sketch below).
That's of course just temporary, since one of the next steps will be to alter the responding server code so that a single response contains the results of requests to more than one table. Of course, the client application logic needs to be altered, too.
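A minimal sketch of that sequential-request workaround (the queue structure and names are illustrative): requests are queued, and the next one is sent only after the previous one returns.
import flash.events.Event;
import mx.rpc.events.FaultEvent;
import mx.rpc.events.ResultEvent;
import mx.rpc.http.HTTPService;

private var pending:Array = [];
private var current:HTTPService = null;

// Queue a request instead of sending it immediately.
private function enqueue(service:HTTPService, params:Object):void
{
    pending.push({service: service, params: params});
    sendNext();
}

// Send the next queued request, but only if none is in flight.
private function sendNext():void
{
    if (current != null || pending.length == 0)
        return;
    var next:Object = pending.shift();
    current = next.service;
    current.addEventListener(ResultEvent.RESULT, onDone);
    current.addEventListener(FaultEvent.FAULT, onDone);
    current.send(next.params);
}

// On result or fault, release the slot and start the next request.
private function onDone(event:Event):void
{
    current.removeEventListener(ResultEvent.RESULT, onDone);
    current.removeEventListener(FaultEvent.FAULT, onDone);
    current = null;
    sendNext();
}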

One Flex client connecting to two webapps using BlazeDS - Detected duplicate HTTP-based FlexSessions

I have a flex application that communicates via BlazeDS with two webapps running inside a single instance of Tomcat.
The Flex client is loaded by the browser from the first webapp and all is well. However, on the initial call to the second webapp, the client receives the following error:
Detected duplicate HTTP-based FlexSessions, generally due to the remote host disabling session cookies. Session cookies must be enabled to manage the client connection correctly.
Subsequent calls to the same service method succeed.
I've seen a few posts referring to the same error in the context of two Flex apps calling a single webapp from the same browser page, but nothing which seems to help my situation, so I'd be very grateful if anyone could help out.
Cheers, Mark
Three potential solutions for you:
I found once that if I hit a remote object before setting up a messaging channel, the ClientID would get screwed up. Try to establish an initial messaging channel once the application loads, and before any remote object calls are made.
Flash Builder's network monitoring tool can cause some problems with BlazeDS. I set up a configuration option on application load that checks whether I'm in the dev environment (it is called just before setting up my channel from #1). If I'm in dev, I assign a UID manually. For some reason this doesn't take well outside the dev environment... it's been a while since I set it all up, so I can't remember the finer points as to why:
if (!(AppSettingsModel.getInstance().dev))
    FlexClient.getInstance().id = UIDUtil.createUID();
BlazeDS by default only allows a single HTTP session to be set up per client/browser. In my streaming channel definitions I added the following to allow additional sessions per browser:
<channel-definition id="my-secure-amf-stream" class="mx.messaging.channels.SecureStreamingAMFChannel">
    <endpoint url="https://{server.name}:{server.port}/FlexClient/messagebroker/securestreamingamf"
              class="flex.messaging.endpoints.SecureStreamingAMFEndpoint"/>
    <properties>
        <add-no-cache-headers>false</add-no-cache-headers>
        <idle-timeout-minutes>0</idle-timeout-minutes>
        <max-streaming-clients>10</max-streaming-clients>
        <server-to-client-heartbeat-millis>5000</server-to-client-heartbeat-millis>
        <user-agent-settings>
            <user-agent match-on="MSIE" kickstart-bytes="2048" max-streaming-connections-per-session="3"/>
            <user-agent match-on="Firefox" kickstart-bytes="2048" max-streaming-connections-per-session="3"/>
        </user-agent-settings>
    </properties>
</channel-definition>
Problem: duplicate session errors when the flex.war and LiveCycle (.lca) files are hosted in separate JVMs on a WebSphere server.
Solution:
Inside the command file for the event, set the FlexClient id to null in the execute method before calling the remote service (Java method or LiveCycle process).
I guess this approach can be used in other scenarios as well to prevent duplicate-session errors.
EventCommand.as file:
import mx.messaging.FlexClient;
// other imports as per your code

public function execute(event:CairngormEvent):void
{
    var evt:EventName = event as EventName;
    var delegate:Delegate = new DelegateImpl(this as IResponder);

    // Set the client ID to null before calling the remote service
    FlexClient.getInstance().id = null;
    delegate.functionName(evt.data);
}
