Need help on homework about tracert in Windows [closed] - networking

This is the question
Select three random companies, and issue the whois and traceroute (tracert in Windows) commands for each one. Tracert is available from a command prompt. To use whois, you will need to search for an online tool. Then write a short paragraph about each utility outlining the kinds of information available from each. Copy & Paste screen shots for each utility and each company to back up the reported findings.
Assuming I am a noob, I would be glad if someone could outline how to tackle this homework assignment.

This is what I make of this question:
Tracert
Read this article for information on how traceroute works.
Company 1 - BBC
Run a tracert on bbc.co.uk
C:\>tracert bbc.co.uk
The tracert documentation will help you interpret and write up your findings. Then follow the same methodology for the other two companies.
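If you want to save the raw output for your write-up (screenshots aside), a minimal sketch in Python, assuming it is installed, could drive tracert for each company and save each trace to a file. The domains below are placeholders for your three chosen companies:

    import subprocess

    # Placeholder domains; substitute the three companies you picked.
    companies = ["bbc.co.uk", "example.com", "example.org"]

    for domain in companies:
        # On Windows the tool is "tracert"; on Linux/macOS it is "traceroute".
        result = subprocess.run(["tracert", domain], capture_output=True, text=True)
        # Save each trace to its own file so it can be pasted into the report.
        with open(f"tracert_{domain}.txt", "w") as f:
            f.write(result.stdout)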
Whois
There is a website called Whois Lookup.
Just put in bbc.co.uk or whatever.
The first paragraph here outlines the information you can get from it, and looking up a domain yourself will show you what it provides.
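Under the hood, WHOIS is just a plain-text protocol on TCP port 43 (RFC 3912), so if you are curious you can query a registry server directly instead of using a web tool. A minimal sketch; whois.nic.uk is the registry server for .uk domains, and you would need a different server for other TLDs:

    import socket

    def whois_query(server: str, domain: str) -> str:
        """Send one WHOIS query line, then read the reply until the server closes."""
        with socket.create_connection((server, 43), timeout=10) as sock:
            sock.sendall((domain + "\r\n").encode())
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode(errors="replace")

    # whois.nic.uk handles .uk registrations; other TLDs use other servers.
    print(whois_query("whois.nic.uk", "bbc.co.uk"))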
The information on the websites will help with:
Then write a short paragraph about each utility outlining the kinds of information available from each.
You will then need to take screenshots.
The question is very straightforward; add a sprinkling of common sense and you have yourself an answer.

Related

What is abcxyzarchive.goog? [closed]

I know that abcxyzarchive.goog is a Google website, because .goog is a Google-reserved domain, but what is it for? I know this question may be off-topic, but I can't find another site to put it on.
I guess that Google is just using this domain ending for its own microservices. For example, if you use Google Translate on tagesschau.de, you'll get the following URL:
https://www-tagesschau-de.translate.goog/
The English Wikipedia article states:
Google also owns the top-level domains goog (for sites such as partneradvantage.goog and pki.goog), gle (for shortened URLs such as goo.gle and forms.gle) and youtube (for sites such as about.youtube and blog.youtube).
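As far as I can tell (the mapping is inferred from observed URLs, not something Google documents), the translate.goog proxy encodes the original hostname by doubling any existing hyphens and then turning dots into hyphens, which keeps the substitution reversible. A rough sketch of that guess:

    def to_translate_goog(hostname: str) -> str:
        # Inferred, undocumented scheme: double existing hyphens first so
        # the dot-to-hyphen substitution stays unambiguous.
        return hostname.replace("-", "--").replace(".", "-") + ".translate.goog"

    print(to_translate_goog("www.tagesschau.de"))  # www-tagesschau-de.translate.goog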

Struggling to set up "network" drives on home PC to mimic work environment for software development [closed]

I want to mimic my work computer on my Windows 10 computer at home, so I can develop against the same network drive paths.
I want the S:\ drive to point to some local drive on my computer.
I am following the directions to the letter when attempting to create a HomeGroup for Windows 10.
When I type HomeGroup in the search box, I don't see the option shown in the article below.
Any ideas?
https://support.microsoft.com/en-us/help/17145/windows-homegroup-from-start-to-finish
Folks, this is a good question, and I struggled with it. I don't appreciate the negative points.
Here is the answer.
To do what I needed, go to File Explorer and, under the Sharing tab, share the folder (I shared it with Everyone).
Then, in my mapping, I refer to \\ComputerName\ShareName.
They keep changing the way things work from one Windows version to the next; HomeGroup itself was removed from Windows 10 in version 1803.
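For what it's worth, the same mapping can be scripted rather than clicked through; a sketch that shells out to the built-in net use command (ComputerName and ShareName are placeholders for whatever you shared):

    import subprocess

    # Placeholders: substitute your PC's name and the share you created.
    share = r"\\ComputerName\ShareName"

    # Map S: to the share; /persistent:yes restores the mapping after reboot.
    subprocess.run(["net", "use", "S:", share, "/persistent:yes"], check=True)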
Simple: set up drive partitions.
Much simpler and more reliable...

Performance reading end of large file [closed]

I need to implement something similar to tail -f to read new lines appended to a log file, and to handle the log file rolling over. This is on Solaris 10. Currently, the application checks the status of the file every second; if the file has changed, it opens the file, seeks to near the end, and reads from there to the end of the file.
That all seems to work fine, but I'm curious what the performance impacts would be when the log file is very large. Does seek actually need to read through the whole file, or is it smart enough to only load the end of the file?
lseek is fast in general use, even for huge files: it only moves the kernel-maintained file offset and does not read any data, so its cost does not depend on file size.
Depending on special circumstances it might slow down, but I've never seen that in real life.
See the man page for more: http://www.unix.com/man-page/opensolaris/2/lseek/
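To make that concrete, here is a minimal sketch of the polling approach described in the question (in Python for brevity; the equivalent C calls are open, fstat, lseek, and read). It seeks to the end once, then only reads newly appended bytes, and uses the inode number to detect rollover:

    import os
    import time

    def follow(path, poll_interval=1.0):
        """Poll a log file, yielding newly appended bytes; reopen on rollover."""
        fd = os.open(path, os.O_RDONLY)
        # Jump to the end without reading anything: lseek only moves the
        # file offset, so its cost does not depend on file size.
        pos = os.lseek(fd, 0, os.SEEK_END)
        inode = os.fstat(fd).st_ino
        while True:
            st = os.stat(path)
            if st.st_ino != inode or st.st_size < pos:
                # File was rotated (new inode) or truncated: reopen from start.
                os.close(fd)
                fd = os.open(path, os.O_RDONLY)
                inode = os.fstat(fd).st_ino
                pos = 0
                st = os.stat(path)
            if st.st_size > pos:
                data = os.read(fd, st.st_size - pos)
                pos += len(data)
                yield data
            time.sleep(poll_interval)

    # Usage: for chunk in follow("/var/log/app.log"): process(chunk)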

Legality of Mining Crowdsourced Data [closed]

I have a project idea for which I want to mine publicly available data from another website, data that site received through crowdsourcing, so that I have initial data for my own project. To reiterate: I want to write a robot to grab data displayed on another website and use it for my own website. Does anyone know the legality of this sort of thing? Does the original website own the data that was given to it by the crowd? Even if so, can I use it?
Web scraping is a legally complicated issue.
The hassles of legal action and enforceability often keep scrapers from getting in trouble.
Outright duplication is considered actionable, although courts have ruled that "duplication of facts" is permitted (US).
I advise you read up here: http://en.wikipedia.org/wiki/Web_scraping#Legal_issues
Legally, you should be fine as long as the data is made available and the people have consented: you aren't hacking, and the other site has permission to share. Check for a license on the other site; if there isn't one, inquire, or be prepared for access to be denied at some point. Just because something is publicly available doesn't mean the other site wants it to be.
Also, double-check that you don't inadvertently publish private data as well.
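On the practical side of "be prepared for access to be denied", it is worth at least honoring the target site's robots.txt before unleashing a robot. A minimal sketch; example.com and the user-agent string are placeholders:

    from urllib import robotparser

    # Placeholder URL; substitute the site you intend to scrape.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    url = "https://example.com/some/crowdsourced/page"
    if rp.can_fetch("MyResearchBot/1.0", url):
        print("robots.txt permits fetching", url)
    else:
        print("robots.txt disallows fetching", url)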

Google Analytics seemingly impossible results [closed]

I'm a Google Analytics newbie using it to satisfy my curiosity as to where my very-low-traffic website is being viewed from and which pages are being looked at. Frequently I see results that seem impossible given my understanding of what I'm looking at. Each morning I check which cities the site has been viewed from, then look at which pages were viewed from each city.
This morning's results illustrate the problem. The website was viewed from Eugene, Oregon and from Vienna, Austria, and both showed exactly the same pages viewed for exactly the same amount of time. That this should happen without the two sessions being linked seems impossible, so somehow it has to be the same usage, but I can't figure out how they would be linked. Can anyone enlighten me? I Googled for a while trying to find the answer, but without success.
Your site is likely being visited by automated scripts running out of those two cities (or through proxies there). They are likely spammer scripts, or scripts designed to probe your site's security for vulnerabilities.
