So I have a website which has three spaces for pictures: Space A, Space B, and Space C.
In Space A, I have 20 images (Images A1-A20) which I would like to rotate daily.
In Space B, I have 20 images (Images B1-B20) which I would like to change every 13 days.
In Space C, I have 20 images (Images C1-C20) which I would like to change every 20 days.
So, for instance, let's say I want my site's rotation to start on Jan 1, 2000.
Spaces A,B,&C would have the following images based on the date:
1/1 - A1,B1,C1
1/2 - A2,B1,C1
1/3 - A3,B1,C1
...
1/12 - A12,B1,C1
1/13 - A13,B1,C1
1/14 - A14,B2,C1
1/15 - A15,B2,C1
...
1/20 - A20,B2,C1
1/21 - A1,B2,C2
1/22 - A2,B2,C2
1/23 - A3,B2,C2
...
1/26 - A6,B2,C2
1/27 - A7,B3,C2
1/28 - A8,B3,C2
etc.
Therefore, I think I need a DateTime program that finds the number of days elapsed between the starting date (Jan 1, 2000) and the current date on the computer, divides by the appropriate cycle length (20, 260, or 400 days), and uses the remainder to determine which picture is appropriate and display it. But I haven't a clue how to write it or where to start.
Suggestions appreciated.
Thanks, NP
If you've been looking for a reason to learn PHP or Perl or similar, this is it. If not, then try rent-a-coder or similar. There's a lot more to getting your first program working than just the code: you need to figure out how to get the software loaded and working correctly on your server.
The good news is that you seem to know your problem statement well. I'd go to a bookstore and look at the PHP or Perl books until you see one that seems helpful, then go for it. And we here at SO stand ready to answer all questions from true seekers...
PS: For your problem, PHP would be a good starting bet if you're using a Linux web server.
Larry K is correct in that you have the problem pretty well defined and will need some sort of program to do this. PHP and Perl would be good choices, but there are ways in which you could do this with virtually any language: Python, C, Ruby, .... One technique would be to run a cron job (assuming Linux or another Unix variant) to re-write your HTML file every day (at midnight or thereabouts). Another would be to write a CGI script to determine the proper images when the page is requested. The PHP way (or Perl or Python if using mod_perl or mod_python) would embed (or pull in) the code directly in the web page and execute when the page is requested. I suggest you play around and pick what works best for you and your situation.
FWIW, if it were me, and I could do it any way I liked, I'd probably write a Python script to run as a cron job: Python just because that's currently my favorite language, and a cron job because it minimizes the processing required by doing the work just once per day. However, some web hosting companies don't provide cron (or Python :-( ); if that were the case, I'd do it with PHP embedded in the web page, assuming I could use PHP (which seems ubiquitous with web hosting companies).
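For what it's worth, a minimal Python sketch of that cron-job approach might look like this (the file names A1.jpg ... C20.jpg, the images/ directory, and the images.html output file are assumptions for illustration, not something from your question):

from datetime import date

START = date(2000, 1, 1)

def images_for(today=None):
    days = ((today or date.today()) - START).days
    a = days % 20           # Space A: a new image every day, cycling through 20
    b = (days // 13) % 20   # Space B: a new image every 13 days
    c = (days // 20) % 20   # Space C: a new image every 20 days
    return "A%d.jpg" % (a + 1), "B%d.jpg" % (b + 1), "C%d.jpg" % (c + 1)

if __name__ == "__main__":
    # Run once a day from cron and include images.html from your page.
    tags = "\n".join('<img src="images/%s">' % name for name in images_for())
    with open("images.html", "w") as f:
        f.write(tags)

Run for Jan 28, 2000 this picks A8, B3, and C2, which matches the table in the question.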
I have 2 questions on the case recorder.
1- I am not sure how to restart an optimization from where the recorder left off. I can read in the case reader sql file, etc., but cannot see how this can be fed back into the Problem() to restart.
2- This question is maybe due to my lack of knowledge of Python, but how can one access the iteration number from within an OpenMDAO component? (One way is to read the sql file that is constantly being updated, but there should be a more efficient way.)
You can load a case back in via the load_case method on the problem.
See the docs for it here.
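A rough sketch of the restart flow, assuming the recorder wrote to a file named cases.sql and that prob is the same Problem, already set up (the exact case-reading calls may differ slightly between OpenMDAO versions):

from openmdao.api import CaseReader

cr = CaseReader("cases.sql")              # the recorder's sql file
last_case = cr.driver_cases.get_case(-1)  # most recently recorded driver iteration

prob.load_case(last_case)  # push the recorded values back into the model
prob.run_driver()          # continue the optimization from that point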
I'm not completely sure what you mean by accessing the iteration count, but if you just want to know the number of times your components are called, you can add a counter to them yourself.
There is no programmatic API for accessing the iteration count in OpenMDAO as of version 2.3.
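For example, a do-it-yourself counter might look like the sketch below (the component and its exec_count attribute are made up for illustration):

import openmdao.api as om

class CountingComp(om.ExplicitComponent):
    def setup(self):
        self.add_input('x', val=0.0)
        self.add_output('y', val=0.0)
        self.exec_count = 0  # our own counter, not an OpenMDAO feature

    def compute(self, inputs, outputs):
        self.exec_count += 1  # incremented every time the component is evaluated
        outputs['y'] = 2.0 * inputs['x']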
I have code that generates thumbnails from JPEGs. It pulls an image from S3 and then generates the thumbs.
One in about every 3000 files ends up looking like this. It happens in batches. The high res looks like this and they're all resized down to low res. It does not fail on resize. I can go to my S3 bucket and see that the original file is indeed intact.
I had this code written in Ruby and ported it all over to Clojure, hoping that would just fix my issue, but it's still happening.
What would result in a JPEG that looks like this?
I'm using standard image-copying code, like so:
(with-open [in  (clojure.java.io/input-stream uri)
            out (clojure.java.io/output-stream file)]
  (clojure.java.io/copy in out))
Would there be any way to detect that the transfer didn't go well, in Clojure? ImageMagick? Any other command-line tool?
My guess is it is one of 2 possible issues (you know your code, so you can probably rule one out quickly):
You are running out of memory. If the whole batch of processing is happening at once, the first few are probably not being released until the whole process is completed.
You are running out of time. You may be reaching your maximum execution time for the script.
Implementing some logging as the batches are processed could tell you when the issue happens and what the overall state is at that moment.
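On the detection question: one option is to fully decode each file right after the transfer and flag anything that fails. Here is a minimal sketch using Python with Pillow (an assumption on my part, since you asked about any outside tool; ImageMagick's identify can do a similar check from the command line):

from PIL import Image

def jpeg_decodes_cleanly(path):
    try:
        with Image.open(path) as img:
            img.load()  # force a full decode; truncated or corrupt data raises OSError
        return True
    except OSError:
        return False

Comparing the local file size against the Content-Length that S3 reports for the object is another cheap way to catch an interrupted transfer.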
My application must read one video track and several audio tracks, and be able to specify one section of the file and play it in a loop. I have created a setup with Media Foundation, using the sequencer source and creating several topologies with the start and end points of the section I want to loop. It works, except that there is a 0.5 to 1 second stabilization of the playback just when it goes back to the starting point.
First, I made it with individual audio files and one video file. This was quite bad for some files: sometimes all the files were completely out of sync, sometimes the video was frozen for several seconds and then went very fast to catch up with the audio.
I got a good improvement using only one file that includes the video and the multiple audio tracks. However, for most files, there is still a problem with the smoothness of the transition.
With a poor-quality AVI video file, I could make it work smoothly, which suggests that the method I use is correct. I have noticed that the smoothness of the loop is strongly related to the CPU usage of the file when simply playing it.
I use the "SetTopology" on the session, using a series of topologies, so normally it should preroll the next one during the playback of the current one, right ? Or am I missing something there ?
My app also works on Mac, where I have used a similar setup with AVFoundation, and it works fine with the same media files I use on Windows.
What can I do to make the looping work smoothly with better-quality video on Windows? Is there something that can be done about it?
When I play the media file without looping, I notice that if I preroll it to some point and then hit the START button, the media starts instantly and with no glitch. Could it work better if I used two independent simple playback setups: start the first, preroll the second, then stop the first and start the second programmatically at the looping point?
I have a dozen load balanced cloud servers all monitored by Munin.
I can track each one individually just fine. But I'm wondering if I can somehow bundle them up to see just how much collective CPU usage (for example) there is among the cloud cluster as a whole.
How can I do this?
The munin.conf file makes it easy enough to handle this for subdomains, but I'm not sure how to configure this for simple web nodes. Assume my web nodes are named web_node_1 through web_node_10.
My conf looks something like this right now:
[web_node_1]
address 10.1.1.1
use_node_name yes
...
[web_node_10]
address 10.1.1.10
use_node_name yes
Your help is much appreciated.
You can achieve this with sum and stack.
I've just had to do the same thing, and I found this article pretty helpful.
Essentially you want to do something like the following:
[web_nodes;Aggregated]
update no
cpu_aggregate.update no
cpu_aggregate.graph_args --base 1000 -r --lower-limit 0 --upper-limit 200
cpu_aggregate.graph_category system
cpu_aggregate.graph_title Aggregated CPU usage
cpu_aggregate.graph_vlabel %
cpu_aggregate.graph_order system user nice idle
cpu_aggregate.graph_period second
cpu_aggregate.user.label user
cpu_aggregate.nice.label nice
cpu_aggregate.system.label system
cpu_aggregate.idle.label idle
cpu_aggregate.user.sum web_node_1:cpu.user web_node_2:cpu.user
cpu_aggregate.nice.sum web_node_1:cpu.nice web_node_2:cpu.nice
cpu_aggregate.system.sum web_node_1:cpu.system web_node_2:cpu.system
cpu_aggregate.idle.sum web_node_1:cpu.idle web_node_2:cpu.idle
There are a few other things you can tweak to give the graph the same scale, min/max, etc. as the main plugin; those can be copied from the "cpu" plugin file. The key thing here is the last four lines: that's where the summing of values from the other graphs comes in.
Hi,
We are getting timeouts in our ASP.NET application. We are using SQL Server 2005 as the DB.
The queries run very fast in the query analyzer. However, when we check the time through the profiler, it shows a time that is many times more than what we get in the query analyzer.
(Parameter sniffing is not the cause.)
Any help is much appreciated
thanks
We are on a SAN
Cleared the counters. The new counters are:
ASYNC_NETWORK_IO 540 9812 375 78
WRITELOG 70 1828 328 0
The timeout happens only on a particular SP with a particular set of params. If we change the params and access the app, it works fine. We ran the profiler and found that the SP's BatchCompleted event shows up in the profiler after the timeout happens on the ASP.NET side. If we restart the server, everything works fine.
If we remove the plan from the cache, the app works fine. However, we have taken parameter sniffing into consideration in the SP. What else could be the reason?
If I were to take a guess, I would assume that the background database load from the web server is elevating locks and causing the whole thing to slow down. Then you take a large-ish query and run it, and that causes lock (and resource) contention.
I see this ALL THE TIME with companies complaining of performance problems with their client-server applications when going from one SQL server to a cluster. In the web-world, we get those issues much earlier.
The solution (most times) to lock issues is one of the following:
* Refactor your queries to work better (storing SCOPE_IDENTITY() in a variable instead of calling it 5 times, for example)
* Use the NOLOCK hint everywhere it makes sense.
EDIT:
Also, try viewing the server with the new 2008 SQL Management Studio 'Activity Monitor'. You can find it by right-clicking on your server and selecting 'Activity Monitor'.
Go to the Processes section and look at how many processes are 'waiting'. Your wait time should be near zero. If you see a lot of stuff under 'Wait Type', post a screenshot and I can give you an idea of what the next step is.
Go to the Resource Waits section and see what the numbers look like there. Your waiters should always be near zero.
And 'Recent Expensive Queries' is awesome to look at to find out what you can do to improve your general performance.
Edit #2:
How much slower is it? Your SAN seems to be taking up about 10 seconds' worth, but if you are talking 20 seconds vs. 360 seconds, then that would not be relevant, and there are no waits for locks, so I guess I am drawing a blank. If the difference is between 1 second and 10 seconds, then it seems to be network related.
Run the following script to create this stored proc:
CREATE PROC [dbo].[dba_SearchCachedPlans]
@StringToSearchFor VARCHAR(255)
AS
/*----------------------------------------------------------------------
Purpose: Inspects cached plans for a given string.
------------------------------------------------------------------------
Parameters: @StringToSearchFor - string to search for e.g. '%<MissingIndexes>%'.
Revision History:
03/06/2008 Ian_Stirk@yahoo.com Initial version
Example Usage:
1. exec dba_SearchCachedPlans '%<MissingIndexes>%'
2. exec dba_SearchCachedPlans '%<ColumnsWithNoStatistics>%'
3. exec dba_SearchCachedPlans '%<TableScan%'
4. exec dba_SearchCachedPlans '%CREATE PROC%MessageWrite%'
-----------------------------------------------------------------------*/
BEGIN
-- Do not lock anything, and do not get held up by any locks.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 100
st.TEXT AS [SQL],
cp.cacheobjtype,
cp.objtype,
DB_NAME(st.dbid) AS [DatabaseName],
cp.usecounts AS [Plan usage],
qp.query_plan
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE @StringToSearchFor
ORDER BY cp.usecounts DESC
END
Then execute:
exec dba_SearchCachedPlans '%<MissingIndexes>%'
And see if you are missing any recommended indexes.
When SQL Server creates a plan, it saves it along with any recommended indexes. Just click on the query_plan column text to show the graphical plan. At the top there will be recommended indexes you should implement.
I don't have the answer for you, because I'm not a guru. But I do remember reading on some SQL blogs recently that SQL 2008 has some extra things you can add to the query/stored procedure so it calculates things differently. One thing you could try searching for is query 'hints'. Also, how SQL uses the current 'statistics' makes a difference, too; look that up. And the execution plan is only generated for the first run: if that plan doesn't work with different parameter values (because there would be a vast difference in what would be searched/returned), it can present this behavior, I think.
Sorry I can't be more helpful. I'm just getting my feet wet with SQL Server performance at this level. I bet if you asked someone like Brent Ozar he could point you in the right direction.
I've had this exact same issue a couple of times before. It seemed to happen to me when a particular user was on the site when it was deployed. When that user would run certain stored procedures with their ID, they would time out. When others ran them, or I ran them from the DB, they would run in no time. We had our DBAs watch everything they could, and they never had an answer. In the end, everything was fixed whenever I re-deployed the site while that user was not already logged in.
I've had similar issues, and in my case it had to do with the SP recompiling. Specifically, it was my use of temp tables vs. table variables.