Here's the code I use:
public partial class Form1 : Form
{
    private ILPlotCube plotcube_ = null;
    private ILSurface surface_ = null;

    public Form1()
    {
        InitializeComponent();
        ilPanel1.Driver = RendererTypes.OpenGL;
    }

    private void ilPanel1_Load(object sender, EventArgs e)
    {
        var scene = new ILScene();
        plotcube_ = scene.Add(new ILPlotCube(twoDMode: false));
        plotcube_.MouseDoubleClick += PlotCube_MouseDoubleClick;
        ilPanel1.Scene = scene;
    }

    private void PlotCube_MouseDoubleClick(object sender, ILMouseEventArgs e)
    {
        ResetSurface();
        e.Cancel = true;
        e.Refresh = true;
    }

    private void ResetSurface()
    {
        using (ILScope.Enter())
        {
            ILArray<float> array = ILMath.tosingle(ILSpecialData.sincf(1000, 1000));
            if (surface_ == null)
            {
                surface_ = new ILSurface(0);
                surface_.Fill.Markable = false;
                surface_.Wireframe.Visible = false;
                plotcube_.Add(surface_);
            }
            surface_.UpdateColormapped(array);
            surface_.UseLighting = false;
        }
        plotcube_.Plots.Reset();
    }
}
Each call to ResetSurface() takes a few seconds to complete: ~6s in Debug and ~4s in Release mode.
Once the surface is updated, though, rotation and pan operations are very fluid.
The smaller the surface, the faster the update.
Is there a more efficient way to update the surface positions/colors buffers?
Note: using ILNumerics 3.2.2 Community Edition on a Windows 7 laptop with dual graphics (Intel HD 4000 + GeForce GT 650M), with the NVIDIA card active.
There is nothing obviously wrong with your code. A common pitfall is the wireframe color: if it is left semitransparent (the default), the sorting required for transparency slows rendering down. But you have already set Wireframe.Visible = false.
On my machine (Windows 7, T430 notebook, i7 and similar graphics) it takes < 2 s to update (Release, with no debugger attached!). I am afraid that's just what it takes; there is a lot of work going on behind the scenes ...
Edit: It might be faster to precompute the colors and provide them as discrete colors using ILSurface.UpdateRGBA(). You will have to try it and use a profiler to find the bottleneck. Another option - since you are after a simple imagesc-style plot - is to build the imagesc plot on your own: ILTriangles(-strip) is much more slim and probably gives more options to increase the update speed. However, you will have to do a considerable amount of reordering / vertex generation / color computation on your own. Also, this won't give you the colorbar support of ILSurface.
Edit: You can use the ILImageSCPlot class as a slim replacement for ILSurface. The documentation is here: http://ilnumerics.net/imagesc-plots.html
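For reference, a minimal sketch of swapping the surface for an ILImageSCPlot might look like the following. The constructor overload taking the data matrix directly is an assumption here, so check the imagesc-plots documentation linked above for the exact API; plotcube_ refers to the plot cube from the question's code.
// Hedged sketch: an ILImageSCPlot in place of the ILSurface.
using (ILScope.Enter())
{
    ILArray<float> array = ILMath.tosingle(ILSpecialData.sincf(1000, 1000));
    plotcube_.Add(new ILImageSCPlot(array));   // assumed constructor overload
}
plotcube_.Plots.Reset();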
I'm trying to make a prototype for my interactive media class, but I hit a little hiccup. I was following a tutorial where everything was running smoothly until I got to the Animator, and I followed every instruction step by step. Basically, my 2D sprite character is stuck in the fall animation whenever I play the game, rather than starting in the default idle animation like it's supposed to.
I tried deleting and recreating the animation paths, but that didn't work. I even tried deleting everything and rebuilding the Animator from scratch. I checked off Exit Time, set the duration to zero, and gave the "state" conditions (which is what I called the Animator parameter) their respective numbers, but it's still stuck on falling. When I jump with my character, the run and idle animations seem to work; it's like everything is reversed with falling. It also gave me "the difference in effective length is too big" as an error. I tried adding more frames to my falling animation to see if that would fix it, and also tried checking and unchecking Loop Time, but it's still stuck on the fall.
If anyone knows what's wrong with the Animator, please let me know. I don't think it has anything to do with the code, but better safe than sorry, so I'll put it here. If anyone has the answer to my issue, please get back to me when you can - thank you!
PlayerMovement.cs:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class PlayerMovement : MonoBehaviour
{
    private Rigidbody2D rb;
    private SpriteRenderer sprite;
    private Animator anim;

    private float dirX = 0f;
    [SerializeField] private float moveSpeed = 12f;
    [SerializeField] private float jumpForce = 16f;

    private enum MovementState { idle, running, jumping, falling }

    // Start is called before the first frame update
    private void Start()
    {
        rb = GetComponent<Rigidbody2D>();
        sprite = GetComponent<SpriteRenderer>();
        anim = GetComponent<Animator>();
    }

    // Update is called once per frame
    private void Update()
    {
        dirX = Input.GetAxisRaw("Horizontal");
        rb.velocity = new Vector2(dirX * moveSpeed, rb.velocity.y);

        if (Input.GetButtonDown("Jump"))
        {
            rb.velocity = new Vector2(rb.velocity.x, jumpForce);
        }

        UpdateAnimationState();
    }

    private void UpdateAnimationState()
    {
        MovementState state;

        if (dirX > 0f)
        {
            state = MovementState.running;
            sprite.flipX = false;
        }
        else if (dirX < 0f)
        {
            state = MovementState.running;
            sprite.flipX = true;
        }
        else
        {
            state = MovementState.idle;
        }

        if (rb.velocity.y > .1f)
        {
            state = MovementState.jumping;
        }
        else if (rb.velocity.y > -.1f)
        {
            state = MovementState.falling;
        }

        anim.SetInteger("state", (int)state);
    }
}
This is my Animator in Unity
The Player Falling Inspector
Player Running -> Player Falling
Player Idle -> Player Falling
Player Jumping -> Player Falling
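As an aside on the velocity checks in UpdateAnimationState above: the else-if branch switches the state to falling whenever the vertical velocity is merely above -0.1, which is also true while standing still. A commonly used variant (a sketch only, not necessarily the whole fix for the Animator setup described above) treats falling as clearly negative vertical velocity:
// Sketch: jumping = clearly positive vertical velocity, falling = clearly negative;
// anything in between keeps the idle/running state chosen above.
if (rb.velocity.y > .1f)
{
    state = MovementState.jumping;
}
else if (rb.velocity.y < -.1f)   // '<' instead of '>'
{
    state = MovementState.falling;
}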
I'm doing some tests with .NET 5 RC1 and Blazor, and I've noticed something that makes me think I'm not doing this right.
I have a demo page (code below) that gives the user a set of buttons, and they can click a button to roll a die. In the client-side C# I then roll the die and prepend the roll string to a list. In the template, I do a foreach and render the user's past rolls.
The issue is that I've been watching the Websocket messages and each time I add a new element, the message being passed gets larger and larger. This is because the entire list is being re-rendered with each click.
Is there a better way to do this, so that Blazor only inserts the new item in the DOM and sends down a trivial message, rather than re-rendering the entire section of the page? For this example, it doesn't matter much, but if I were building a bigger app with a lot of interactions, I could see this becoming a bottleneck.
Here's the code for my page - this is just a new page in a brand new Blazor project. (My code has more than a 1d20 button, but I removed it from here for brevity.)
#page "/dieroller"
<h1>Die Roller Utility</h1>
...
<button class="btn btn-primary" #onclick="#(e => RollDie(20))">
<i class="fas fa-dice-d20"></i>
</button>
<h2 class='mt-5'>Your Roll History</h2>
#foreach (string roll in rolls) {
<p>#roll</p>
}
#code {
#using System.Threading;
private List<string> rolls = new List<string>();
private void RollDie(int upper)
{
Random rand = new Random();
rolls.Insert(0, string.Format("1d{0} = {1}", upper, rand.Next(1, upper)));
}
}
The direct answer to your problem is to use @key. That is always a good idea when a loop is involved.
The problem is that you need unique keys and your strings don't fit that requirement.
A blunt solution is to introduce a class (not a struct), just to get unique keys. I adapted your code to the following:
@foreach (Roll roll in rolls)
{
    <p @key="roll">@($"d={roll.Upper} {roll.Value}")</p>
}

@code {
    class Roll
    {
        public int Upper { get; set; }
        public int Value { get; set; }
    }

    private List<Roll> rolls = new List<Roll>();

    private void RollDie(int upper)
    {
        Random rand = new Random();
        rolls.Insert(0, new Roll { Upper = upper, Value = rand.Next(1, upper + 1) });
    }
}
Using this you do get stable, non-growing WS packets.
Based on @BrianParker's comment above - yes, you need a key so that Blazor knows which items need to be updated. That key also needs to be unique, or you'll get key-collision errors.
Because these are strings, they don't make good keys, so I ended up retooling this with a DieRoll class that I could put in a list. Here's the new code-behind:
using System;
using System.Collections.Generic;

namespace BlazorApp.Pages
{
    public partial class DieRoller
    {
        private List<DieRoll> rolls = new List<DieRoll>();

        private void RollDie(int upper)
        {
            rolls.Insert(0, new DieRoll(upper));
        }
    }

    public class DieRoll
    {
        public int Roll;
        public int DieType;

        public DieRoll(int DieType)
        {
            this.DieType = DieType;
            RollDie();
        }

        public int RollDie()
        {
            Random rand = new Random();
            this.Roll = rand.Next(1, DieType + 1);
            return Roll;
        }

        public override string ToString()
        {
            return string.Format("1d{0} = {1}", this.DieType, this.Roll);
        }
    }
}
And here's the new template code at the p tag:
@foreach (DieRoll die in rolls)
{
    <p @key="die">@die</p>
}
Obviously this is more complex, but this is what works. The messages from the server are much smaller, and never grow in size.
Also, this might NOT have mattered if I wasn't prepending the items to the List. Appending to the list might have let Blazor understand where the elements were being created easier. But I didn't bother to test that theory.
You can see another example here: https://blazor-university.com/components/render-trees/optimising-using-key/
With List.Add the message size stays the same, so why is List.Insert(0, item) any different? Without @key, Blazor diffs the rendered sequence positionally, so prepending shifts every existing row and each one gets re-sent, whereas appending only adds one element at the end. (List.Insert(index: 0) also carries its own performance penalty, since the underlying array has to be reshuffled: https://stackoverflow.com/a/18587349/4000335.)
To verify this, even using List.Insert(List.Count, item) keeps the message size constant, just as Add does.
If the requirement is to have the last roll on top, an optimal approach would probably involve JavaScript, which defeats the purpose of using Blazor Server in the first place; Blazor WebAssembly, however, doesn't need to communicate with a server to accomplish this, since it's all done on the client.
I am trying to understand and implement a piece of code for Tiff compression.
I have already used two separate techniques - the third-party LibTiff.Net DLLs (the first method is bulky) and the Image.Save method, http://msdn.microsoft.com/en-us/library/ytz20d80%28v=vs.110%29.aspx (the second method works on a Windows 7 machine but not on Windows Server 2003 or 2008).
Now I am looking to explore this 3rd method.
using System.Collections.Generic;   // List<T>
using System.IO;                    // FileStream
using System.Windows.Forms;
using System.Windows.Media.Imaging;
using System.Drawing.Imaging;

int width = 800;
int height = 1000;
int stride = width / 8;                      // 1 bit per pixel => 8 pixels per byte
byte[] pixels = new byte[height * stride];   // all zeros => an all-black image

// Try creating a new image with a custom palette.
List<System.Windows.Media.Color> colors = new List<System.Windows.Media.Color>();
colors.Add(System.Windows.Media.Colors.Red);
colors.Add(System.Windows.Media.Colors.Blue);
colors.Add(System.Windows.Media.Colors.Green);
BitmapPalette myPalette = new BitmapPalette(colors);

// Creates a new empty image with the pre-defined palette
BitmapSource image = BitmapSource.Create(
    width,
    height,
    96,
    96,
    System.Windows.Media.PixelFormats.BlackWhite,
    myPalette,
    pixels,
    stride);

FileStream stream = new FileStream(Original_File, FileMode.Create);
TiffBitmapEncoder encoder = new TiffBitmapEncoder();
encoder.Compression = TiffCompressOption.Ccitt4;
encoder.Frames.Add(BitmapFrame.Create(image));
encoder.Save(stream);
But I don't have a full understanding of what is happening here.
There is obviously some kind of memory stream that the compression technique is being applied to, but I am a bit confused about how to apply this to my specific case: I have an original TIFF file, and I want to use this method to set its compression to CCITT4 and save it back. Can anyone help?
I copied the above code and it runs, but my output file is a solid black image. On the positive side, it does at least have the correct compression type.
http://msdn.microsoft.com/en-us/library/ms616002%28v=vs.110%29.aspx
http://msdn.microsoft.com/en-us/library/system.windows.media.imaging.tiffcompressoption%28v=vs.100%29.aspx
http://social.msdn.microsoft.com/Forums/vstudio/en-US/1585c562-f7a9-4cfd-9674-6855ffaa8653/parameter-is-not-valid-for-compressionccitt4-on-windows-server-2003-and-2008?forum=netfxbcl
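For what it's worth, a minimal sketch of the "load an existing TIFF and re-save it with CCITT4" step, using the same WPF classes as above, might look like this. The conversion to 1bpp is an assumption you would only need when the source isn't already black-and-white, and the threshold conversion is crude for anything but scanned documents:
using System.IO;
using System.Windows.Media;
using System.Windows.Media.Imaging;

// Hedged sketch: re-encode an existing TIFF with CCITT Group 4 compression.
// CCITT4 expects 1bpp data, so non-bilevel frames are converted to BlackWhite first.
public static void ReencodeAsCcitt4(string sourcePath, string destPath)
{
    using (FileStream input = File.OpenRead(sourcePath))
    {
        BitmapDecoder decoder = BitmapDecoder.Create(
            input, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.OnLoad);

        TiffBitmapEncoder encoder = new TiffBitmapEncoder();
        encoder.Compression = TiffCompressOption.Ccitt4;

        foreach (BitmapFrame frame in decoder.Frames)
        {
            BitmapSource source = frame;
            if (source.Format != PixelFormats.BlackWhite)
            {
                // Threshold conversion to 1bpp.
                source = new FormatConvertedBitmap(source, PixelFormats.BlackWhite, null, 0);
            }
            encoder.Frames.Add(BitmapFrame.Create(source));
        }

        using (FileStream output = File.Create(destPath))
        {
            encoder.Save(output);
        }
    }
}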
LibTiff.Net is a little bulky because it's based on LibTiff, which has its own set of problems.
My company (Atalasoft) has the ability to do that fairly easily, and the free version of the SDK will do the task you want with a few restrictions. The code for re-encoding a file would look like this:
public bool ReencodeFile(string path)
{
    AtalaImage image = new AtalaImage(path);
    if (image.PixelFormat == PixelFormat.Pixel1bppIndexed)
    {
        TiffEncoder encoder = new TiffEncoder();
        encoder.Compression = TiffCompression.Group4FaxEncoding;
        image.Save(path, encoder, null); // destroys the original - use carefully
        return true;
    }
    return false;
}
Things you should be aware of:
this code will only work properly on 1bpp images, and I would want the code to at least check for that
this code will NOT work properly on multi-page TIFFs
this code does NOT preserve metadata within the original file
If you are inclined to have a solution that better preserves what's in the original file, you would want to do this:
public bool ReencodeFile(string origPath, string outputPath)
{
    if (origPath == outputPath) throw new ArgumentException("outputPath needs to be different from input path.");

    TiffDocument doc = new TiffDocument(origPath);
    bool needsReencoding = false;

    for (int i = 0; i < doc.Pages.Count; i++)
    {
        if (doc.Pages[i].PixelFormat == PixelFormat.Pixel1bppIndexed)
        {
            doc.Pages[i] = new TiffPage(new AtalaImage(origPath, i, null), TiffCompression.Group4FaxEncoding);
            needsReencoding = true;
        }
    }

    if (needsReencoding)
        doc.Save(outputPath);
    return needsReencoding;
}
This solution will respect all pages within the document as well as document metadata.
Good evening,
I just started playing around with C#, and I tried creating a GUI for a program that runs on the command line. I have been able to get it running, but now I am stuck trying to add a progress bar.
I have read other posts, but I couldn't find my exact issue or understand how to apply their solutions to it.
Here is my code (apologies if it is messy):
private void MethodToProcess(Object sender, DoWorkEventArgs args)
{
    // Set all the strings for the pass-through
    String USMTPath_Work = USMTPath + USMTArch;
    String USMTPath_full = USMTPath_Work + @"\Scanstate.exe";
    String USMTFlags_Capture = @"/c /v:13 /o /l:scanstate.log /localonly /efs:copyraw";
    String Argument_full = SavePath + XML1 + XML2 + USMTFlags_Capture;

    // Test that the USMT path is correct
    if (USMTPath == null)
    {
        MessageBox.Show("Error: There is no USMT Path defined.");
        return;
    }

    // Test that the Windows folder is correct when offline
    /* if (Windows_Path == null)
    {
        MessageBox.Show("Error: There is no Windows Path to capture.");
        return;
    } */

    // Runs the capture
    System.Diagnostics.Process Scanstate = new System.Diagnostics.Process();
    Scanstate.StartInfo.FileName = USMTPath_full;
    Scanstate.StartInfo.Arguments = Argument_full;
    Scanstate.StartInfo.WindowStyle = System.Diagnostics.ProcessWindowStyle.Hidden;
    Scanstate.StartInfo.WorkingDirectory = USMTPath_Work;
    //Scanstate.StartInfo.UseShellExecute = false;
    Scanstate.StartInfo.CreateNoWindow = true;
    //Scanstate.StartInfo.RedirectStandardOutput = true;
    Scanstate.Start();
    Scanstate.WaitForExit();

    String Str_ExitCode = Scanstate.ExitCode.ToString();
    if (Scanstate.ExitCode == 1)
        MessageBox.Show("Error: Data has not been captured. Please check the log files for details.");
    else if (Scanstate.ExitCode == 0)   // 'else if' so the unknown-error branch below doesn't also fire
        MessageBox.Show("Success: Data has been captured. For more information, check the log files.");
    else
    {
        MessageBox.Show("Error: An unknown error has occurred. Please check the log files for details.");
        MessageBox.Show("Error Code: " + Str_ExitCode);
    }
    Scanstate.Close();
}
Basically, I am trying to run the scanstate.exe process. Now I am trying to use a BackgroundWorker so that I can report progress and pass it to the progress bar.
private void btnCapture_Click(object sender, EventArgs e)
{
    progressBar1.Minimum = 0;
    progressBar1.Maximum = 100;
    progressBar1.Step = 1;

    BackgroundWorker CaptureBG = new BackgroundWorker();
    CaptureBG.WorkerReportsProgress = true;
    CaptureBG.DoWork += new DoWorkEventHandler(MethodToProcess);
    CaptureBG.RunWorkerCompleted += new RunWorkerCompletedEventHandler(CaptureBG_RunWorkerCompleted);
    CaptureBG.ProgressChanged += new ProgressChangedEventHandler(CaptureBG_ProgressChanged);
    CaptureBG.RunWorkerAsync();
}
and
private void CaptureBG_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs args)
{
    progressBar1.Value = 100;
}

private void CaptureBG_ProgressChanged(object sender, ProgressChangedEventArgs args)
{
    progressBar1.Value++;
}
However, I am either misunderstanding the usage or missing something: the process runs, but I don't get any progress on the progress bar. It only fills once the process finishes.
What am I doing wrong? And in general, how would a process report progress if I don't know exactly how long it is going to take?
Thanks in advance
The BackgroundWorker is responsible for updating its progress as it works through its task.
There is no built-in interaction between the process you launch and your code that would report that process's progress back to you.
In order for this to work, two things have to happen:
You need to define a mechanism for your process to report progress to the BackgroundWorker.
The BackgroundWorker must update its own progress by calling the ReportProgress method so that the ProgressChanged event is fired.
The first step is the tricky one and depends on how scanstate.exe works. Does it do anything to give an indication of progress, such as write to the console? If so, you can redirect the console output and parse that output to determine or at least estimate progress.
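A hedged sketch of that console-redirect approach, assuming scanstate prints something a regex can pick a percentage out of (the "NN%" pattern here is an assumption):
// Sketch only: redirect scanstate's output and relay an estimated percentage
// to the BackgroundWorker; adapt the regex to whatever scanstate actually prints.
var worker = (BackgroundWorker)sender;   // inside MethodToProcess (the DoWork handler)

Scanstate.StartInfo.UseShellExecute = false;         // required for redirection
Scanstate.StartInfo.RedirectStandardOutput = true;
Scanstate.OutputDataReceived += (s, outArgs) =>
{
    if (string.IsNullOrEmpty(outArgs.Data)) return;
    var match = System.Text.RegularExpressions.Regex.Match(outArgs.Data, @"(\d{1,3})\s*%");
    if (match.Success)
        worker.ReportProgress(int.Parse(match.Groups[1].Value));
};

Scanstate.Start();
Scanstate.BeginOutputReadLine();   // needed for OutputDataReceived to fire
Scanstate.WaitForExit();
The ProgressChanged handler would then set progressBar1.Value = args.ProgressPercentage instead of incrementing it.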
UPDATE
Scanstate.exe provides the ability to write progress to a log, e.g.:
scanstate /i:migapp.xml /i:miguser.xml \\fileserver\migration\mystore /progress:prog.log /l:scanlog.log
You could use a FileSystemWatcher in your BackgroundWorker to look for changes to the progress log and update progress accordingly.
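Along those lines, a minimal sketch, assuming the /progress: log contains percentages a regex can pick out (that format is an assumption, so adjust the parsing to what the log actually holds):
// Sketch only: watch the /progress: log and forward the last percentage seen.
// Requires using System.ComponentModel, System.IO and System.Text.RegularExpressions.
// Call this from DoWork before starting scanstate; dispose the watcher after WaitForExit().
private static FileSystemWatcher WatchProgressLog(string directory, string fileName, BackgroundWorker worker)
{
    var watcher = new FileSystemWatcher(directory, fileName);
    watcher.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.Size;
    watcher.Changed += (s, e) =>
    {
        try
        {
            // Open with ReadWrite sharing because scanstate keeps the log open while writing.
            using (var stream = new FileStream(e.FullPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            using (var reader = new StreamReader(stream))
            {
                var matches = Regex.Matches(reader.ReadToEnd(), @"(\d{1,3})\s*%");
                if (matches.Count > 0)
                    worker.ReportProgress(int.Parse(matches[matches.Count - 1].Groups[1].Value));
            }
        }
        catch (IOException)
        {
            // The log may be mid-write; skip this event and wait for the next one.
        }
    };
    watcher.EnableRaisingEvents = true;
    return watcher;
}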
We are running a reporting web application that allows the user to select a few fields; a Crystal Report is then generated based on the fields selected. The SQL that is generated for the most complex report returns the data in < 5 seconds, yet the report takes an average of 3 minutes to run, sometimes longer, causing a timeout. We are running VS2010.
The reports are basically set up out of the box, with no real manipulations or computations being done - they just display the data in a nice format. Is there anything we can try to speed this up: pre-loading a dummy report to load the DLLs, some hack to make Crystal run faster, anything?
EDIT: Code Added to show the databinding
protected void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        string strFile = Server.MapPath(@"AwardStatus.rpt");
        CrystalReportSource1.Report.FileName = strFile;

        DataTable main = Main();
        CrystalReportSource1.ReportDocument.SetDataSource(main);
        CrystalReportViewer1.HasCrystalLogo = false;
        CrystalReportSource1.ReportDocument.ExportToHttpResponse(CrystalDecisions.Shared.ExportFormatType.PortableDocFormat, Response, false, "pmperformance");
    }
}

private DataTable Main()
{
    Guid guidOffice = Office;
    CMS.Model.ReportsTableAdapters.ViewACTableAdapter rptAdapter = new CMS.Model.ReportsTableAdapters.ViewACTableAdapter();
    Reports.ViewACDataTable main = new Reports.ViewACDataTable();

    if (Office == new Guid())
    {
        IEnumerable<DataRow> data = rptAdapter.GetData().Where(d => UserPermissions.HasAccessToOrg(d.guidFromId, AuthenticatedUser.PersonID)).Select(d => d);
        foreach (var row in data)
        {
            main.ImportRow(row);
        }
    }
    else if (guidOffice != new Guid())
    {
        main = rptAdapter.GetDataByOffice(guidOffice);
    }
    else
    {
        main = new Reports.ViewACDataTable();
    }
    return main;
}

private Guid Office
{
    get
    {
        string strOffice = Request.QueryString["Office"];
        Guid guidOffice = BaseControl.ParseGuid(strOffice);
        if (!UserPermissions.HasAccessToOrg(guidOffice, AuthenticatedUser.PersonID))
        {
            return Guid.Empty;
        }
        else
        {
            return guidOffice;
        }
    }
}

protected void CrystalReportSource1_DataBinding(object sender, EventArgs e)
{
    //TODO
}
This may be a bit flippant, but possibly consider not using Crystal Reports... We had a fair bit of trouble with it recently (out-of-memory errors being one), and we've moved to other options and are quite happy...
Here's what I would do:
Put clocks from the time you get the field choices from the user all the way to when you display the report, and see where your processing time is going (see the Stopwatch sketch after this list).
When you look at the clocks, there can be various situations:
If Crystal Reports is taking time to fill the report, check how you're filling it. If you're linking the report fields directly to your data table, CR is probably taking time looking up the data. I suggest creating a new table (t_rpt) with dynamic columns (Field1, Field2,..FieldN) and pointing your report template to that table. I don't know if you're already doing this.
If it's taking time for you to lookup the data itself, I suggest creating a view of your table. Even though a memory hog, this will make your lookup quick and you can delete the view once you're done.
If it's none of the above, let us know what your clocks show.
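As a minimal sketch of the "clocks" idea, timing the stages of the Page_Load shown above with System.Diagnostics.Stopwatch (where exactly you split the measurements is up to you):
// Sketch only: time each stage of Page_Load to see where the ~3 minutes go.
var sw = System.Diagnostics.Stopwatch.StartNew();

DataTable main = Main();                                   // data lookup
System.Diagnostics.Trace.WriteLine(string.Format("Data: {0} ms", sw.ElapsedMilliseconds));

sw.Restart();
CrystalReportSource1.ReportDocument.SetDataSource(main);   // binding the report
System.Diagnostics.Trace.WriteLine(string.Format("SetDataSource: {0} ms", sw.ElapsedMilliseconds));

sw.Restart();
CrystalReportSource1.ReportDocument.ExportToHttpResponse(
    CrystalDecisions.Shared.ExportFormatType.PortableDocFormat, Response, false, "pmperformance");
System.Diagnostics.Trace.WriteLine(string.Format("Export: {0} ms", sw.ElapsedMilliseconds));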
In terms of loading any large amount of data, you'll always want to use a stored procedure.
Outside of that, you WILL see a delay in the report running the first time the Crystal DLLs load. Yes, you can preload them as you mentioned and that will help some.
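On the preloading idea: one hedged approach is to open and immediately close a small dummy report once at application start, so the Crystal assemblies are loaded before the first real request. "DummyReport.rpt" below is a placeholder name, not something from the code above:
// Sketch only: warm up the Crystal Reports engine in Global.asax.
// "DummyReport.rpt" is a placeholder - any small report file will do.
void Application_Start(object sender, EventArgs e)
{
    using (var report = new CrystalDecisions.CrystalReports.Engine.ReportDocument())
    {
        report.Load(Server.MapPath("~/DummyReport.rpt"));
        report.Close();
    }
}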