Why does it not print to the file (PrintWriter)?

I am trying to separate certain items in a file into a different file. When I use errorgrade.println it does not work, but when I print to the console, it does. What am I doing wrong?
PrintWriter finalgrade = null;
PrintWriter errorgrade = null;
try {
    Scanner in = new Scanner(System.in);
    System.out.println("Enter name of input file:");
    String input = in.nextLine();
    File ErrorGradesFile = new File("src\\ErrorGradesFile.txt");
    File FinalGradesFile = new File("src\\FinalGradesFile.txt");
    File RawGradesFile = new File("src\\" + input);
    Scanner in2 = new Scanner(RawGradesFile);
    finalgrade = new PrintWriter(FinalGradesFile);
    errorgrade = new PrintWriter(ErrorGradesFile);
    while (in2.hasNextLine()) {
        String student = in2.nextLine();
        String[] one = student.split(",");
        if (one[1].equals(" ")) {
            System.out.println(student);
            errorgrade.println(student);
        } else {
            finalgrade.println("helo");
        }
        finalgrade.close();
        errorgrade.close();
    }
} catch (FileNotFoundException e) {
    e.printStackTrace();
}

Related

How do I read and write a 15 GB txt file with 50 million records in ASP.NET Core 6?

I want to read 50 million records from a 15 GB txt file and write them into Elasticsearch.
if (file.Length > 0)
{
    string wwroot = _he.WebRootPath;
    string contentpath = _he.ContentRootPath;
    string path = Path.Combine(wwroot, "file/" + foldername);
    if (!Directory.Exists(path))
    {
        var rcheck = Directory.CreateDirectory(path);
    }
    var filename = file.FileName;
    var filepath = Path.Combine(path, filename);
    if (filepath.Any())
    {
        using (FileStream stream = new FileStream(Path.Combine(path, filename), FileMode.Create))
        {
            file.CopyTo(stream);
        }
    }
    string[] lines = System.IO.File.ReadAllLines(filepath);
    var Plist = new List<Person>();
    int i = 0;
    foreach (var line in lines)
    {
        var newperson = new Person();
        string[] sub = line.Split(":");
        newperson.PId = sub[1];
        newperson.FirstName = sub[2];
        newperson.LastName = sub[3];
        newperson.Gender = sub[4];
        Plist.Add(newperson);
    }
    return View();
}
I can read and upload the file, but when I want to add the records to the list I get an error: it reads only 16,000 items and my application shuts down.
You need to read the file using a buffer. With a proper reading logic based on a buffer, you'll be able to read a file of any size.
This line here:
System.IO.File.ReadAllLines(filepath);
Reads ALL the content of the 15 GB file at once and attempts to put it all into memory. I don't know how your code managed to get past that line without throwing an OutOfMemoryException (reading an "only" 4.62 GB file ate 19.2 GB of my memory while debugging).
Instead, use a buffer of a single line:
using var streamReader = File.OpenText(bigFilePath);
var fileLine = string.Empty;
while ((fileLine = streamReader.ReadLine()) != null)
{
    // Your string line reading logic.
}
You will most probably not be able to keep all the records in memory (depending on the memory available), and sending them one by one to Elasticsearch would be the opposite of efficient, so you'll need to find a middle ground between those limitations. I suggest batching, that is, sending the records in fixed-size groups. The size is yours to pick, but note that it should be neither huge nor tiny, otherwise the benefits of batching shrink.
Full code:
static void Main()
{
    string wwroot = _he.WebRootPath;
    string contentpath = _he.ContentRootPath;
    string path = Path.Combine(wwroot, "file/" + foldername);
    var peopleListBatch = new List<Person>();
    const int BatchSize = 1024;
    using var streamReader = File.OpenText(path);
    var fileLine = string.Empty;
    while ((fileLine = streamReader.ReadLine()) != null)
    {
        var lineParts = fileLine.Split(":");
        var newperson = new Person
        {
            PId = lineParts[1],
            FirstName = lineParts[2],
            LastName = lineParts[3],
            Gender = lineParts[4],
        };
        peopleListBatch.Add(newperson);
        // Add to Elastic, but only when the batch is full.
        if (peopleListBatch.Count == BatchSize)
        {
            AddPersonsToElasticSearch(peopleListBatch);
            peopleListBatch.Clear();
        }
    }
    // Add remaining people, if any.
    if (peopleListBatch.Count > 0)
    {
        AddPersonsToElasticSearch(peopleListBatch);
        peopleListBatch.Clear();
    }
}
Inserting into Elasticsearch is another story, and I leave that task to you:
static void AddPersonsToElasticSearch(List<Person> people)
{
    // TODO: Add your inserting logic here.
}
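For completeness, a minimal sketch of what that insert could look like using the NEST client; the endpoint URL and the "people" index name here are illustrative assumptions, not part of the original question:
static readonly ElasticClient Client = new ElasticClient(
    new ConnectionSettings(new Uri("http://localhost:9200")).DefaultIndex("people"));

static void AddPersonsToElasticSearch(List<Person> people)
{
    // One bulk request per batch instead of one HTTP call per record.
    var response = Client.Bulk(b => b.IndexMany(people));
    if (response.Errors)
    {
        // Inspect response.ItemsWithErrors and decide whether to retry or log.
    }
}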

ClosedXML: how to freeze rows and columns when I export a file

I have implemented the code below; all headers and data are added without any problem. The problem is when I want to block editing of individual fields or columns in the Excel file that the user downloads: nothing is actually locked.
I use the following call to freeze columns/rows, but when I export the file and open it, I can still edit every field:
worksheet.SheetView.Freeze(1,3);
[HttpGet]
public IActionResult ExportAsExcel()
{
    IEnumerable<Employee> employees = this.repo.GetAll<Employee>();
    List<EmployeeDTO> employeeDTO = this._mapper.Map<List<EmployeeDTO>>(employees);
    using (var workbook = new XLWorkbook())
    {
        var worksheet = workbook.Worksheets.Add("Sheet1");
        var currentRow = 1;
        worksheet.Cell(currentRow, 1).Value = "ID";
        worksheet.Cell(currentRow, 2).Value = "name";
        foreach (var empDtos in employeeDTO)
        {
            currentRow++;
            worksheet.Cell(currentRow, 1).Value = empDtos.EmployeeId;
            worksheet.Cell(currentRow, 2).Value = empDtos.Name;
        }
        using (var stream = new MemoryStream())
        {
            workbook.SaveAs(stream);
            var content = stream.ToArray();
            return File(
                content,
                "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
                "Employee.xlsx"
            );
        }
    }
}
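Note that SheetView.Freeze only pins rows/columns while scrolling; it does not lock anything against editing. Preventing edits requires sheet protection. A minimal sketch of what that could look like with ClosedXML (the password and the choice of unlocked column are illustrative):
worksheet.SheetView.Freeze(1, 3);                      // keeps panes frozen while scrolling
worksheet.Protect("secret");                           // enables sheet protection, locking cells
worksheet.Column(2).Style.Protection.Locked = false;   // optionally leave one column editable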

Why does R.NET work locally but stop working in IIS?

I have an ASP.NET MVC application that sends parameters to an R script; the R script then generates a file and puts it in a local folder. The process works perfectly on my local machine through Visual Studio, but becomes intermittent after I publish and run it under IIS. Below is how I initialize R.NET and get these values from my view via AJAX.
I have also placed both of these PATHs in my system environment variables.
If anyone knows why IIS works only right after I restart my OS and then stops working shortly after, I would greatly appreciate it. It seems odd that I face problems in IIS that I never see in Visual Studio.
[HttpGet]
public JsonResult Index1(string modelTypes = null, string fileNames = null, string[] locations = null, string lowerLimits = null, string upperLimits = null, string areas = null, string productivities = null, string[] fobs = null)
{
    ViewBag.UserName = System.Environment.UserName;
    string userName = ViewBag.UserName;
    //Declare field parameters
    var strModelTypeValue = modelTypes;
    string strFileName = fileNames;
    var strLocationValue = locations;
    var strLowerLimitValue = lowerLimits;
    var strUpperLimitValue = upperLimits;
    string strAreaValue = areas;
    string strProductivityValue = productivities;
    string strFobValue = fobs.ToString();
    var libraryLocation = "C:/Users/" + userName + "/Documents/R/win-library/3.2";
    string rPath = @"C:\Program Files\R\R-3.3.2\bin\x64";
    string rHome = @"C:\Program Files\R\R-3.3.2";
    //Initialize REngine
    REngine.SetEnvironmentVariables(rPath, rHome);
    REngine engine = REngine.GetInstance();
    engine.Initialize();
    if (fobs.Length > 1)
    {
        strFobValue = string.Join(" ", fobs);
    }
    else
    {
        strFobValue = "ALL";
    }
    //Declare R script path
    var rScriptPath = "C:/Users/" + userName + "/Documents/R/RWDir/Loop_Optimization.R";
    //Check to see if there was more than one location selected
    if (strLocationValue.Length > 1)
    {
        foreach (var item in strLocationValue)
        {
            //Set string location to each location value in loop
            var strlocation = item;
            //Add values to parameter list
            var myParams = new List<string>
            {
                strModelTypeValue,
                strFileName,
                strlocation,
                strLowerLimitValue,
                strUpperLimitValue,
                strAreaValue,
                strProductivityValue,
                strFobValue,
                libraryLocation
            };
            //Set myParams as arguments to be sent to the R script
            engine.SetCommandLineArguments(myParams.ToArray());
            engine.Evaluate("source('" + rScriptPath + "')");
        }
    }
    //Only one location specified, no need to loop
    else
    {
        foreach (var item in strLocationValue)
        {
            //Set string location to each location value in loop
            var strlocation = item;
            var myParams = new List<string>
            {
                strModelTypeValue,
                strFileName,
                strlocation,
                strLowerLimitValue,
                strUpperLimitValue,
                strAreaValue,
                strProductivityValue,
                strFobValue,
                libraryLocation
            };
            engine.SetCommandLineArguments(myParams.ToArray());
            engine.Evaluate("source('" + rScriptPath + "')");
        }
    }
    //engine.Evaluate("source('" + rScriptPath + "')");
    //engine.Dispose();
    return Json("success", JsonRequestBehavior.AllowGet);
}
Have you done any debugging by attaching to the IIS worker process?
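A likely culprit, offered as a sketch rather than a confirmed fix: R.NET's REngine is a process-wide singleton, and once disposed it cannot be re-initialized within the same process. Under IIS all requests share one worker process, so initializing the engine per request can work at first and then fail until the app pool or OS is restarted. Initializing it once and reusing it, for example through a lazy static, avoids that (the R paths below are copied from the question):
private static readonly Lazy<REngine> SharedEngine = new Lazy<REngine>(() =>
{
    // Initialize R.NET once per worker process and reuse the instance;
    // do not call Dispose() on it between requests.
    REngine.SetEnvironmentVariables(@"C:\Program Files\R\R-3.3.2\bin\x64", @"C:\Program Files\R\R-3.3.2");
    var engine = REngine.GetInstance();
    engine.Initialize();
    return engine;
});
Then use SharedEngine.Value inside the action instead of calling GetInstance and Initialize on every request.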

Object Reference Not Set to an Instance of an Object in Google Cloud load job

loadconfig.SourceUris.Add(@"gs:\\planar-fulcrum-837\leadload-ip\01-02-2013");
This throws: "Object reference not set to an instance of an object".
Here is a working sample for loading a CSV file from Cloud Storage into Google BigQuery.
Update the variables such as ServiceAccountEmail, KeyFileName, KeySecret, ProjectId, the dataset name, and so on.
Add your table schema into this variable:
TableSchema Schema = new TableSchema();
Here I am loading a single file; you can add any number of CSV files into this variable:
System.Collections.Generic.IList<string> URIs = new System.Collections.Generic.List<string>();
URIs.Add(filePath);
This also explains your NullReferenceException: SourceUris is null until you assign a list to it, so calling Add on it directly throws.
Modify and work with the code below. Have a great day. (It took me more than three days to get this solution working.)
using Google.Apis.Auth.OAuth2;
using System.IO;
using System.Threading;
using Google.Apis.Bigquery.v2;
using Google.Apis.Bigquery.v2.Data;
using System.Data;
using Google.Apis.Services;
using System;
using System.Security.Cryptography.X509Certificates;

namespace GoogleBigQuery
{
    public class Class1
    {
        private static void Main()
        {
            try
            {
                String serviceAccountEmail = "SERVICE ACCOUNT EMAIL";
                var certificate = new X509Certificate2(@"KEY FILE NAME & PATH", "KEY SECRET", X509KeyStorageFlags.Exportable);
                // SYNTAX: var certificate = new X509Certificate2(KEY FILE PATH + NAME (here it resides in Bin\Debug, so the name alone is enough), SECRET KEY, X509KeyStorageFlags.Exportable);
                ServiceAccountCredential credential = new ServiceAccountCredential(
                    new ServiceAccountCredential.Initializer(serviceAccountEmail)
                    {
                        Scopes = new[] { BigqueryService.Scope.Bigquery, BigqueryService.Scope.BigqueryInsertdata, BigqueryService.Scope.CloudPlatform, BigqueryService.Scope.DevstorageFullControl }
                    }.FromCertificate(certificate));
                // Create and initialize the Bigquery service. Use the Project Name value
                // from the New Project window for the ApplicationName variable.
                BigqueryService Service = new BigqueryService(new BaseClientService.Initializer()
                {
                    HttpClientInitializer = credential,
                    ApplicationName = "APPLICATION NAME"
                });
                TableSchema Schema = new TableSchema();
                TableFieldSchema F1 = new TableFieldSchema();
                F1.Name = "COLUMN NAME";
                F1.Type = "STRING";
                F1.Mode = "REQUIRED";
                TableFieldSchema F2 = new TableFieldSchema();
                F2.Name = "COLUMN NAME";
                F2.Type = "INTEGER";
                F2.Mode = "NULLABLE";
                // Add as many fields as you need.
                System.Collections.Generic.IList<TableFieldSchema> FS = new System.Collections.Generic.List<TableFieldSchema>();
                FS.Add(F1);
                FS.Add(F2);
                Schema.Fields = FS;
                JobReference JR = JobUpload("PROJECT ID", "DATASET NAME", "TABLE NAME", @"gs://BUCKET NAME/FILENAME", Schema, "CREATE_IF_NEEDED", "WRITE_APPEND", '|', Service);
                // SYNTAX: JobReference JR = JobUpload(PROJECT ID, DATASET NAME, TABLE NAME, gs:// PATH OF THE CSV FILE IN CLOUD STORAGE, TABLE SCHEMA, CREATE DISPOSITION, WRITE DISPOSITION, DELIMITER, BIGQUERY SERVICE);
                while (true)
                {
                    var PollJob = Service.Jobs.Get(JR.ProjectId, JR.JobId).Execute();
                    Console.WriteLine("Job status " + JR.JobId + ": " + PollJob.Status.State);
                    if (PollJob.Status.State.Equals("DONE"))
                    {
                        Console.WriteLine("JOB Completed");
                        Console.ReadLine();
                        return;
                    }
                    Thread.Sleep(2000); // Avoid hammering the API while polling.
                }
            }
            catch (Exception e)
            {
                Console.WriteLine("Error Occurred: " + e.Message);
            }
            Console.ReadLine();
        }
        public static JobReference JobUpload(string project, string dataset, string tableId, string filePath, TableSchema schema, string createDisposition, string writeDisposition, char delimiter, BigqueryService BigQueryService)
        {
            TableReference DestTable = new TableReference();
            DestTable.ProjectId = project;
            DestTable.DatasetId = dataset;
            DestTable.TableId = tableId;
            Job Job = new Job();
            JobConfiguration Config = new JobConfiguration();
            JobConfigurationLoad ConfigLoad = new JobConfigurationLoad();
            ConfigLoad.Schema = schema;
            ConfigLoad.DestinationTable = DestTable;
            ConfigLoad.Encoding = "ISO-8859-1";
            ConfigLoad.CreateDisposition = createDisposition;
            ConfigLoad.WriteDisposition = writeDisposition;
            ConfigLoad.FieldDelimiter = delimiter.ToString();
            ConfigLoad.AllowJaggedRows = true;
            ConfigLoad.SourceFormat = "CSV";
            ConfigLoad.SkipLeadingRows = 1;
            ConfigLoad.MaxBadRecords = 100000;
            System.Collections.Generic.IList<string> URIs = new System.Collections.Generic.List<string>();
            URIs.Add(filePath);
            // You can add any number of CSV files here.
            ConfigLoad.SourceUris = URIs;
            Config.Load = ConfigLoad;
            Job.Configuration = Config;
            // Set the job reference (mainly the job id).
            JobReference JobRef = new JobReference();
            Random r = new Random();
            var JobNo = r.Next();
            JobRef.JobId = "Job" + JobNo.ToString();
            JobRef.ProjectId = project;
            Job.JobReference = JobRef;
            JobsResource.InsertRequest InsertMediaUpload = new JobsResource.InsertRequest(BigQueryService, Job, Job.JobReference.ProjectId);
            var JobInfo = InsertMediaUpload.Execute();
            return JobRef;
        }
    }
}

How to create multiple worksheets in Excel?

I am creating an Excel file using DocumentFormat.OpenXml in ASP.NET.
Does anybody have an idea how to create multiple worksheets in that file?
For example: Sheet1, Sheet2, Sheet3, ..., SheetN.
Try the following method:
/// <summary>
/// Add a blank worksheet to the workbook
/// </summary>
/// <param name="workbookPart">Workbook part</param>
public static void InsertBlankWorksheet(WorkbookPart workbookPart)
{
    // Add a blank WorksheetPart.
    WorksheetPart newWorksheetPart = workbookPart.AddNewPart<WorksheetPart>();
    // Create the new worksheet.
    Worksheet worksheet = new Worksheet();
    worksheet.AddNamespaceDeclaration("r", "http://schemas.openxmlformats.org/officeDocument/2006/relationships");
    SheetDimension sheetDimension1 = new SheetDimension() { Reference = "A1" };
    SheetViews sheetViews1 = new SheetViews();
    SheetView sheetView1 = new SheetView() { TabSelected = true, WorkbookViewId = (UInt32Value)0U };
    sheetViews1.Append(sheetView1);
    SheetFormatProperties sheetFormatProperties1 = new SheetFormatProperties() { DefaultRowHeight = 15D };
    SheetData sheetData1 = new SheetData();
    PageMargins pageMargins1 = new PageMargins() { Left = 0.7D, Right = 0.7D, Top = 0.75D, Bottom = 0.75D, Header = 0.3D, Footer = 0.3D };
    PageSetup pageSetup1 = new PageSetup() { Orientation = OrientationValues.Portrait, Id = "rId1" };
    worksheet.Append(sheetDimension1);
    worksheet.Append(sheetViews1);
    worksheet.Append(sheetFormatProperties1);
    worksheet.Append(sheetData1);
    worksheet.Append(pageMargins1);
    worksheet.Append(pageSetup1);
    newWorksheetPart.Worksheet = worksheet;
    newWorksheetPart.Worksheet.Save();
    Sheets sheets = workbookPart.Workbook.GetFirstChild<Sheets>();
    string relationshipId = workbookPart.GetIdOfPart(newWorksheetPart);
    // Get a unique ID for the new worksheet.
    uint sheetId = 1;
    if (sheets.Elements<Sheet>().Count() > 0)
    {
        sheetId = sheets.Elements<Sheet>().Select(s => s.SheetId.Value).Max() + 1;
    }
    // Give the new worksheet a name.
    string sheetName = "Sheet" + sheetId;
    // Append the new worksheet and associate it with the workbook.
    Sheet sheet = new Sheet() { Id = relationshipId, SheetId = sheetId, Name = sheetName };
    sheets.Append(sheet);
    workbookPart.Workbook.Save();
}
EDIT
Here is the class that contains the method:
public static class ExcelHelpers
{
    public static void InsertBlankWorksheet(WorkbookPart workbookPart)
    {...}
}
Open your Excel document like this and call the method:
public static void Export(string document)
{
    using (SpreadsheetDocument doc = SpreadsheetDocument.Open(document, true))
    {
        ExcelHelpers.InsertBlankWorksheet(doc.WorkbookPart);
    }
}
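Because the method derives the sheet name from the highest existing SheetId, calling it once per sheet yields Sheet1, Sheet2, ..., SheetN. A small usage sketch (the method name and sheet count are illustrative):
public static void ExportWithSheets(string document, int sheetCount)
{
    using (SpreadsheetDocument doc = SpreadsheetDocument.Open(document, true))
    {
        // Each call appends one blank worksheet named Sheet{n}.
        for (int i = 0; i < sheetCount; i++)
        {
            ExcelHelpers.InsertBlankWorksheet(doc.WorkbookPart);
        }
    }
}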
