Storage limit of Maximo Anywhere application in offline mode - maximo-anywhere

I would like to know how much data the Maximo Anywhere application can save when it is running in offline mode. The scenario is that engineers visit remote plants (oil & gas plants) for weeks at a time, use the Anywhere app in disconnected mode, and perform hundreds of transactions. Will the Anywhere application store that many records on the device in offline mode, or will it crash? Or is there a way to configure this size limit?
I appreciate any help in this regard.

Usually we download about 100 MB of data to the device from the Maximo server. The large majority of the data is lookup-related (assets, failure codes, locations), not work-list or transactional data.
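As a rough sanity check on whether hundreds of offline transactions can overwhelm a device, here is a back-of-the-envelope estimate. The 100 MB figure comes from the answer above; the per-transaction record size is an assumption, not an official Maximo Anywhere number.

```python
# Back-of-the-envelope estimate of the offline footprint.
# The per-transaction size is an assumption, not a Maximo Anywhere figure.
lookup_data_mb = 100       # typical initial download, per the answer above
transactions = 500         # "100s of transactions" during a multi-week visit
kb_per_transaction = 10    # assumed average size of one queued offline record

total_mb = lookup_data_mb + transactions * kb_per_transaction / 1024
print(f"Estimated on-device footprint: ~{total_mb:.0f} MB")  # roughly 105 MB
```

Even with generous assumptions the total stays far below the free storage of a typical tablet, so the practical limit is the device's available storage rather than the transaction count itself.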

Related

How to disable internet access for particular programs (applications) in Windows 8

I'm using a broadband connection with a limited data allowance, so I want to prevent all programs (applications) other than browsers from accessing the internet in Windows 8, so that I can save the data used by unwanted applications.
WinRT applications are sandboxed, which means they have a limited range of operations within the system, and those operations interact only with the application itself.
You cannot force another running app to close from a WinRT app; only the user can do that.
At most, you could check the network usage status and, if it is high, show the user a specific message, something like: "At the moment, data usage is relatively high; for a better experience, try to close some of the apps causing this."

Managing session state on site with several servers?

I am currently in the design phase of a system made up of several servers, and I need to work out the best way to store session information so that each of the servers can access it. The session information is basically a GUID for the session ID and a list of user group names.
The main client applications are web and mobile apps.
I have the following configuration:
A master server where all users log in; the session object is filled with the user group information that corresponds to the user. There could be up to 10,000 users at peak login.
Slave servers that contain archived content; most users will then, via the UI, be talking directly to the slave servers. The slave servers need the session information, which is initially determined on the master server.
One option is to move the session data for each login to the slave servers and cache it on the slave; then each slave could work independently and would not need to reference the master server.
The other option is to have a central database that contains the session information, but since our databases are on each server (we don't have a separate machine to act as a database server), each slave would need a remote connection string back to the master server database. No doubt this will slow things down, since I would have to query the database remotely from the slave server.
I then have the situation where I need to clean up sessions, but in general I don't expect more than 25 MB of session data at peak login.
We could have up to 10 slave servers.
What is the best solution?
For IIS 7.0, here is a TechNet article that outlines two approaches (a session state server or SQL Server):
http://technet.microsoft.com/en-us/library/cc754032(v=ws.10).aspx
I question the need to have 10 web servers running with 10 separate databases running on them. Granted, I don't know anything about the application you're writing and there may very well be a good reason for it.
Here's how I'm seeing it (with my admitted limited knowledge of your application).
10,000 possible concurrent users hitting 1 authentication server that will then redirect them to one (or more?) of 10 servers could potentially cause bottlenecks. What if a majority go to one of the servers? What if a large number all try to log in at the same time?
Here's a stab at a different architecture:
[LoadBalancer]
-------------------------------------------------------------------------
[WebServer] [WebServer] [WebServer] -------------> [SessionServer]
[LoadBalancer]
-------------------------------------------------------------------------
[AppServer] [AppServer] [AppServer] [AppServer] -------^
-------------------------------------------------------------------------
[DBServer]
[DBServer](backup)
I write that not knowing what class of machines these are; they may not be suitable to be a DB server.
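To make the [SessionServer] box concrete, here is a minimal sketch (Python, purely for illustration) of how a web or slave server might look up a session against a central session store and cache it briefly. The URL, endpoint, and response shape are hypothetical, not tied to any specific product.

```python
# Minimal sketch of the "central session server" idea above (not production code).
# Assumptions: a hypothetical HTTP session service at SESSION_SERVER_URL that
# returns {"session_id": ..., "groups": [...]} as JSON, plus a small local cache
# so repeated requests don't hit the session server every time.
import json
import time
import urllib.request

SESSION_SERVER_URL = "http://session-server.internal/sessions/"  # hypothetical
CACHE_TTL_SECONDS = 60   # how long a slave trusts its local copy
_cache = {}              # session_id -> (expires_at, groups)


def get_user_groups(session_id):
    """Return the group names for a session, using a short-lived local cache."""
    now = time.time()
    cached = _cache.get(session_id)
    if cached and cached[0] > now:
        return cached[1]

    # Cache miss: ask the central session server (the [SessionServer] box above).
    with urllib.request.urlopen(SESSION_SERVER_URL + session_id) as resp:
        data = json.loads(resp.read().decode("utf-8"))

    groups = data["groups"]
    _cache[session_id] = (now + CACHE_TTL_SECONDS, groups)
    return groups
```

The same shape applies whether the central store is the ASP.NET state server or a SQL Server table: the slaves only cache session data for a short time and never own it, so session cleanup stays in one place.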
Well, it's early here and I'm only on my second cup of coffee. That may or may not be helpful, I hope it is.

Flex: Karaoke app out of sync on playback, after publishing to an external server

I'm trying to create a karaoke app in my spare time, but I'm having some problems syncing the recording and the backing track.
Basically, after I start publishing to an external media server (Wowza), when I play back the recording (an MP3 ripped from the recorded FLV) together with the backing track (also an MP3), I notice a delay of 0-800 ms between the two tracks, which is a long time for this type of application.
This delay is always random, and it is bigger on Windows than on Macs.
I have already tried many solutions, among others:
Playback with the Sound.extract method, taking 2048 samples each time, to have less latency for the audio start/processing;
Set the microphone silence level to 0;
Syncing by cue point, in different cases in the FLV or in the MP3 (extending the Sound class).
But every time I get mixed results (worse results on Windows, better results on Mac).
Does anyone have any suggestions? Any help would be appreciated :-)
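One way to at least quantify that random 0-800 ms delay (this is a suggestion, not something from the question itself) is to cross-correlate the recorded track against the backing track offline. A minimal Python sketch, assuming both tracks have already been decoded to mono PCM arrays at the same sample rate:

```python
# Minimal sketch: measure the recording-vs-backing-track offset by cross-correlation.
# Assumes both tracks are already decoded to mono float arrays at the same sample
# rate (decoding from MP3/FLV is out of scope here).
import numpy as np


def measure_offset_ms(recording, backing, sample_rate):
    """Return how many milliseconds the recording lags behind the backing track."""
    n = min(len(recording), len(backing))
    rec = recording[:n] - recording[:n].mean()
    back = backing[:n] - backing[:n].mean()

    # Full cross-correlation; the peak index gives the best alignment.
    corr = np.correlate(rec, back, mode="full")
    lag_samples = corr.argmax() - (n - 1)
    return 1000.0 * lag_samples / sample_rate
```

Once the lag is measured for a recording, playback can simply skip that many milliseconds of the later track. Note that np.correlate is O(n²), so for full-length songs an FFT-based correlation would be preferable.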

Efficient reliable incremental HTTP multi-file (or whole directory) upload software

Imagine you have a web site to which you want to send a lot of data, say 40 files totaling the equivalent of 2 hours of upload bandwidth. You expect to have 3 connection losses along the way (think: mobile data connection, WLAN vs. microwave). You can't be bothered to retry again and again; this should be automated. Interruptions should not cause more data loss than necessary. Retrying a complete file is a waste of time and bandwidth.
So here is the question: Is there a software package or framework that
synchronizes a local directory (contents) to the server via HTTP,
is multi-platform (Win XP/Vista/7, MacOS X, Linux),
can be delivered as one self-contained executable,
recovers partially uploaded files after interrupted network connections or client restarts (see the sketch at the end of this question),
can be generated on a server to include authentication tokens and upload target,
can be made super simple to use
or what would be a good way to build one?
Options I have found so far:
Neat packaging of rsync. This requires an rsync (server) instance on the server side that is aware of a privilege system.
A custom Flash program. As I understand, Flash 10 is able to read a local file as a bytearray (indicated here) and is obviously able to speak HTTP to the originating server. Seen in question 1344131 ("Upload image thumbnail to server, without uploading whole image").
A custom native application for each platform.
Thanks for any hints!
Related work:
HTML5 will allow multiple files to be uploaded or at least selected for upload "at once". See here for example. This is agnostic to the local files and does not feature recovery of a failed upload.
Efficient way to implement a client multiple file upload service basically asks for SWFUpload or YUIUpload (Flash-based multi-file uploaders, otherwise "stupid")
A comment in question 997253 suggests JUpload - I think using a Java applet will at least require the user to grant additional rights so it can access local files
GearsUploader seems great but requires Google Gears - that is going away soon
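Since none of the options above resume partial uploads over plain HTTP, here is a minimal sketch of what the client side of such a tool could look like. The /offset and /append endpoints, the bearer token, and the chunk size are all hypothetical choices for illustration, not an existing protocol:

```python
# Minimal sketch of a resumable, chunked HTTP upload client (not a complete tool).
# Hypothetical server contract:
#   GET  {base}/offset?name=<file>             -> plain-text byte count already stored
#   POST {base}/append?name=<file>&offset=<n>  -> appends the request body at <n>
import os
import time

import requests

CHUNK_SIZE = 256 * 1024  # small chunks limit the data lost per interruption


def upload_file(base_url, token, path):
    name = os.path.basename(path)
    headers = {"Authorization": f"Bearer {token}"}
    size = os.path.getsize(path)

    while True:
        try:
            # Ask the server how much of this file it already has, then resume there.
            offset = int(requests.get(f"{base_url}/offset", params={"name": name},
                                      headers=headers, timeout=30).text)
            with open(path, "rb") as f:
                f.seek(offset)
                while offset < size:
                    chunk = f.read(CHUNK_SIZE)
                    requests.post(f"{base_url}/append",
                                  params={"name": name, "offset": offset},
                                  data=chunk, headers=headers,
                                  timeout=60).raise_for_status()
                    offset += len(chunk)
            return  # whole file acknowledged by the server
        except requests.RequestException:
            time.sleep(5)  # connection lost: back off, then resume from the new offset
```

Running upload_file over all 40 files in a loop means each of the three expected connection losses costs at most one re-sent chunk.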

What's the best solution for file storage for a load-balanced ASP.NET app?

We have an ASP.NET file delivery app (internal users upload, external users download) and I'm wondering what the best approach is for distributing files so we don't have a single point of failure by only storing the app's files on one server. We distribute the app's load across multiple front end web servers, meaning for file storage we can't simply store a file locally on the web server.
Our current setup has us pointing at a share on a primary database/file server. Throughout the day we robocopy the contents of the share on the primary server over to the failover. This scenario ensures we have a secondary machine with fairly current data on it, but we want to get to the point where we can fail over from the primary to the failover and back again without data loss or errors in the front-end app. Right now it's a fairly manual process.
Possible solutions include:
Robocopy. Simple, but it doesn't easily allow you to fail over and back again without multiple jobs running all the time (copying data back and forth)
Store the file in a BLOB in SQL Server 2005. I think this could be a performance issue, especially with large files.
Use the FILESTREAM type in SQL Server 2008. We mirror our database so this would seem to be promising. Anyone have any experience with this?
Microsoft's Distributed File System. Seems like overkill from what I've read since we only have 2 servers to manage.
So how do you normally solve this problem and what is the best solution?
Consider a cloud solution like AWS S3. It's pay-for-what-you-use, scalable, and highly available.
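For the S3 route, the storage side is only a few lines. A minimal sketch with boto3, assuming AWS credentials are already configured; the bucket name is made up:

```python
# Minimal sketch of the S3 approach: internal users upload, external users get a
# time-limited download link. Assumes boto3 and AWS credentials are configured.
import boto3

s3 = boto3.client("s3")
BUCKET = "file-delivery-app"  # hypothetical bucket name


def store_upload(local_path, key):
    """Called by the internal upload handler on any of the front-end web servers."""
    s3.upload_file(local_path, BUCKET, key)


def download_link(key, expires_seconds=3600):
    """Pre-signed URL handed to external users, so no web server holds the files."""
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=expires_seconds,
    )
```

Because every front-end server talks to the same bucket, there is no robocopy job and no single file server to fail over.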
You need a SAN with RAID. They build these machines for uptime.
This is really an IT question...
When there are a variety of different application types sharing information via the medium of a central database, storing file content directly into the database would generally be a good idea. But it seems you only have one type in your system design - a web application. If it is just the web servers that ever need to access the files, and no other application interfacing with the database, storage in the file system rather than the database is still a better approach in general. Of course it really depends on the intricate requirements of your system.
If you do not see DFS as a viable approach, you may wish to consider failover clustering of your file server tier, whereby your files are stored on external shared storage (not an expensive SAN, which I believe is overkill for your case since DFS is already out of reach) connected between active and passive file servers. If the active file server goes down, the passive one can take over and continue reads/writes to the shared storage. The Windows 2008 clustering disk driver has been improved over Windows 2003 for this scenario (as per the article), which notes the requirement of a storage solution supporting SCSI-3 persistent reservation (PR) commands.
I agree with Omar Al Zabir on high availability web sites:
Do: Use a Storage Area Network (SAN).
Why: Performance, scalability, reliability and extensibility. SAN is the ultimate storage solution. SAN is a giant box running hundreds of disks inside it. It has many disk controllers, many data channels, many cache memories. You have ultimate flexibility on RAID configuration, adding as many disks as you like in a RAID, sharing disks in multiple RAID configurations and so on. SAN has faster disk controllers, more parallel processing power and more disk cache memory than regular controllers that you put inside a server. So, you get better disk throughput when you use SAN over local disks. You can increase and decrease volumes on-the-fly, while your app is running and using the volume. SAN can automatically mirror disks and, upon disk failure, it automatically brings up the mirror disks and reconfigures the RAID.
Full article is at CodeProject.
Because I don't personally have the budget for a SAN right now, I rely on option 1 (robocopy) from your post. The files that I'm saving are not unique and can be recreated automatically if they die for some reason, so absolute fault tolerance is not necessary in my case.
I suppose it depends on the type of download volume that you would be seeing. I am storing files in a SQL Server 2005 Image column with great success. We don't see heavy demand for these files, so performance is really not that big of an issue in our particular situation.
One of the benefits of storing the files in the database is that it makes disaster recovery a breeze. It also becomes much easier to manage file permissions, as we can manage them in the database.
Windows Server has a File Replication Service that I would not recommend. We have used it for some time and it has caused a lot of headaches.
DFS is probably the easiest solution to set up, although depending on the reliability of your network it can become unsynchronized at times, which requires you to break the link and re-sync, which is quite painful to be honest.
Given the above, I would be inclined to use a SQL Server storage solution, as this reduces the complexity of your system rather than increasing it.
Do some tests to see if performance will be an issue first.
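Before committing to the database route, it is worth benchmarking the round trip with your real file sizes. A minimal sketch of the pattern, using sqlite3 purely as a stand-in for the SQL Server Image/FILESTREAM column; the table and column names are made up:

```python
# Minimal sketch of storing file content in the database. sqlite3 is used only as
# a stand-in for the SQL Server Image/FILESTREAM column discussed above.
import sqlite3

conn = sqlite3.connect("files.db")
conn.execute("CREATE TABLE IF NOT EXISTS stored_files (name TEXT PRIMARY KEY, content BLOB)")


def save_file(name, path):
    with open(path, "rb") as f:
        conn.execute("INSERT OR REPLACE INTO stored_files (name, content) VALUES (?, ?)",
                     (name, sqlite3.Binary(f.read())))
    conn.commit()


def load_file(name):
    row = conn.execute("SELECT content FROM stored_files WHERE name = ?", (name,)).fetchone()
    if row is None:
        raise FileNotFoundError(name)
    return row[0]
```

Time save_file and load_file with your largest real files first; if the numbers are acceptable, you also get the disaster-recovery and permission benefits mentioned above.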

Resources