I am using Power Automate and I would like to represent two tables with a one-to-many relationship (e.g. a student and his courses) in an HTML file, like below:
Student No: 44444 Student name: xxxxxxx
CourseID Course
1010 Algorithm
4838 Data structure
3433 C++
What kind of flow should I use (Dataverse and what else)?
What is the logic of implementing a truth table that does a bitwise AND of two inputs, each 4 bits wide? And how many input/output combinations will there be? I just need one example, please.
You can use an online Python coding playground to find the answers to your truth table.
Here is a link for one: Python Online
Inside this you can play around with your inputs and then 'execute it' to see the output.
Here are two examples for you to type in. After you type them in, click 'execute' and wait for the results to appear on the right.
inputA = 0b0000   # the 0b prefix makes these binary literals, not decimal numbers
inputB = 0b0000
print(format(inputA & inputB, '04b'))
>>> 0000
inputA2 = 0b1110
inputB2 = 0b1111
print(format(inputA2 & inputB2, '04b'))
>>> 1110
You can try to think of a more efficient way to find these values but for now you have a brute-force way to get all of the truth table values by manually changing them.
You can find other bitwise operations for Python here: Bitwise Operations
To answer the second part: with two 4-bit inputs there are up to 256 input combinations (2^4 * 2^4), i.e. 256 rows in the truth table, each with its own 4-bit output.
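If you want the whole table rather than one case at a time, a short loop over all 256 input pairs will print it. This is just a plain-Python sketch of the brute-force idea above:
for a in range(16):          # every 4-bit value for input A
    for b in range(16):      # every 4-bit value for input B
        print(format(a, '04b'), 'AND', format(b, '04b'), '=', format(a & b, '04b'))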
I am trying to import an XML document and convert it to a dataframe in R. Usually the following code works fine:
library(XML)   # xmlTreeParse(), xmlRoot(), xmlSApply() and xmlValue() come from the XML package
xmlfile <- xmlTreeParse(file.choose())
topxml <- xmlRoot(xmlfile)
topxml2 <- xmlSApply(topxml, function(x) xmlSApply(x, xmlValue))
psycinfo <- data.frame(t(topxml2), row.names=NULL, stringsAsFactors=FALSE)
However, when I try this I get a dataframe with one row and 22,570 columns (22,570 is the number of rows I ideally want, so that each record has its own row with multiple columns).
I've attached a snippet of what my XML data looks like for the first two records, which should be on separate rows.
<records>
<rec resultID="1">
<header shortDbName="psyh" longDbName="PsycINFO" uiTerm="2016-10230-001">
<controlInfo>
<bkinfo>
<btl>Reducing conservativeness of stabilization conditions for switched ts fuzzy systems.</btl>
<aug />
</bkinfo>
<chapinfo />
<revinfo />
<jinfo>
<jtl>Neurocomputing: An International Journal</jtl>
<issn type="Print">09252312</issn>
</jinfo>
<pubinfo>
<dt year="2016" month="02" day="16">20160216</dt>
</pubinfo>
<artinfo>
<ui type="doi">10.1016/j.neucom.2016.01.067</ui>
<tig>
<atl>Reducing conservativeness of stabilization conditions for switched ts fuzzy systems.</atl>
</tig>
<aug>
<au>Jaballi, Ahmed</au>
<au>Hajjaji, Ahmed El</au>
<au>Sakly, Anis</au>
</aug>
<sug>
<subj type="unclass">No terms assigned</subj>
</sug>
<ab>In this paper, less conservative sufficient conditions for the existence of switching laws for stabilizing switched TS fuzzy systems via a fuzzy Lyapunov function (FLF) and estimates the basin of attraction are proposed. The conditions are found by exploring properties of the membership functions and are formulated in terms of linear matrix inequalities (LMIs), which can be solved very efficiently using the convex optimization techniques. Finally, the effectiveness and the reduced conservatism of the proposed results are shown through two numerical examples. (PsycINFO Database Record (c) 2016 APA, all rights reserved)</ab>
<pubtype>Journal</pubtype>
<pubtype>Peer Reviewed Journal</pubtype>
</artinfo>
<language>English</language>
</controlInfo>
<displayInfo>
<pLink>
<url>http://search.ebscohost.com/login.aspx?direct=true&db=psyh&AN=2016-10230-001&site=ehost-live&scope=site</url>
</pLink>
</displayInfo>
</header>
</rec>
<rec resultID="2">
<header shortDbName="psyh" longDbName="PsycINFO" uiTerm="2016-08643-001">
<controlInfo>
<bkinfo>
<btl>Self–other relations in biodiversity conservation in the community: Representational processes and adjustment to new actions.</btl>
<aug />
</bkinfo>
<chapinfo />
<revinfo />
<jinfo>
<jtl>Journal of Community & Applied Social Psychology</jtl>
<issn type="Print">10529284</issn>
<issn type="Electronic">10991298</issn>
</jinfo>
<pubinfo>
<dt year="2016" month="02" day="15">20160215</dt>
</pubinfo>
<artinfo>
<ui type="doi">10.1002/casp.2267</ui>
<tig>
<atl>Self–other relations in biodiversity conservation in the community: Representational processes and adjustment to new actions.</atl>
</tig>
<aug>
<au>Mouro, Carla</au>
<au>Castro, Paula</au>
</aug>
<sug>
<subj type="unclass">No terms assigned</subj>
</sug>
<ab>This research explores the simultaneous role of two Self–Other relations in the elaboration of representations at the micro- and ontogenetic levels, assuming that it can result in acceptance and/or resistance to new laws. Drawing on the Theory of Social Representations, it concretely looks at how individuals elaborate new representations relevant for biodiversity conservation in the context of their relations with their local community (an interactional Other) and with the legal/reified sphere (an institutional Other). This is explored in two studies in Portuguese Natura 2000 sites where a conservation project calls residents to protect an at-risk species. Study 1 shows that (i) agreement with the institutional Other (the laws) and meta-representations of the interactional Other (the community) as approving of conservation independently help explain (at the ontogenetic level) internalisation of conservation goals and willingness to act; (ii) the same meta-representations operating at the micro-genetic level attenuate the negative relation between ambivalence and willingness to act. Study 2 shows that a meta-representation of the interactional Other as showing no clear position regarding conservation increases ambivalence. Findings demonstrate the necessarily social nature of representational processes and the importance of considering them at more than one level for understanding responses to new policy/legal proposals. Copyright © 2016 John Wiley & Sons, Ltd. (PsycINFO Database Record (c) 2016 APA, all rights reserved)</ab>
<pubtype>Journal</pubtype>
<pubtype>Peer Reviewed Journal</pubtype>
</artinfo>
<language>English</language>
</controlInfo>
<displayInfo>
<pLink>
<url>http://search.ebscohost.com/login.aspx?direct=true&db=psyh&AN=2016-08643-001&site=ehost-live&scope=site</url>
</pLink>
</displayInfo>
</header>
</rec>
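Not an R answer, but as a sketch of the shape of the fix: the dataframe should be built one row per <rec> element rather than by flattening everything under the root. Here is what that per-record loop looks like in Python; the file name and the handful of fields pulled out are illustrative assumptions, and the real export is assumed to be well-formed XML.
import csv
import xml.etree.ElementTree as ET

root = ET.parse("psycinfo.xml").getroot()      # hypothetical file name

rows = []
for rec in root.iter("rec"):                   # one output row per <rec> record
    rows.append({
        "resultID": rec.get("resultID"),
        "title":    rec.findtext(".//atl"),
        "journal":  rec.findtext(".//jtl"),
        "abstract": rec.findtext(".//ab"),
    })

with open("psycinfo.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["resultID", "title", "journal", "abstract"])
    writer.writeheader()
    writer.writerows(rows)
In R the equivalent move is to iterate over the rec nodes (for example with getNodeSet/xpathApply on "//rec", or xmlToDataFrame on that node set) instead of calling xmlSApply on the root.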
Does anyone know how to replicate the (pg_trgm) Postgres trigram similarity score from the similarity(text, text) function in R? I am using the stringdist package and would rather use R to calculate these on a matrix of text strings in a .csv file than run a bunch of PostgreSQL queries.
Running similarity(string1, string2) in Postgres gives me a score between 0 and 1.
I tried using the stringdist package to get a score, but I think I still need to divide the result of the code below by something.
stringdist(string1, string2, method="qgram",q = 3 )
Is there a way to replicate the pg_trgm score with the stringdist package or another way to do this in R?
An example would be getting the similarity score between the description of a book and the description of a genre like science fiction. For example, suppose I have the following two book descriptions and one genre description:
book 1 = "Area X has been cut off from the rest of the continent for decades. Nature has reclaimed the last vestiges of human civilization. The first expedition returned with reports of a pristine, Edenic landscape; the second expedition ended in mass suicide, the third expedition in a hail of gunfire as its members turned on one another. The members of the eleventh expedition returned as shadows of their former selves, and within weeks, all had died of cancer. In Annihilation, the first volume of Jeff VanderMeer's Southern Reach trilogy, we join the twelfth expedition.
The group is made up of four women: an anthropologist; a surveyor; a psychologist, the de facto leader; and our narrator, a biologist. Their mission is to map the terrain, record all observations of their surroundings and of one another, and, above all, avoid being contaminated by Area X itself.
They arrive expecting the unexpected, and Area X delivers—they discover a massive topographic anomaly and life forms that surpass understanding—but it’s the surprises that came across the border with them and the secrets the expedition members are keeping from one another that change everything."
book 2= "From Wall Street to Main Street, John Brooks, longtime contributor to the New Yorker, brings to life in vivid fashion twelve classic and timeless tales of corporate and financial life in America
What do the $350 million Ford Motor Company disaster known as the Edsel, the fast and incredible rise of Xerox, and the unbelievable scandals at GE and Texas Gulf Sulphur have in common? Each is an example of how an iconic company was defined by a particular moment of fame or notoriety; these notable and fascinating accounts are as relevant today to understanding the intricacies of corporate life as they were when the events happened.
Stories about Wall Street are infused with drama and adventure and reveal the machinations and volatile nature of the world of finance. John Brooks’s insightful reportage is so full of personality and critical detail that whether he is looking at the astounding market crash of 1962, the collapse of a well-known brokerage firm, or the bold attempt by American bankers to save the British pound, one gets the sense that history repeats itself.
Five additional stories on equally fascinating subjects round out this wonderful collection that will both entertain and inform readers . . . Business Adventures is truly financial journalism at its liveliest and best."
genre 1 = "Science fiction is a genre of fiction dealing with imaginative content such as futuristic settings, futuristic science and technology, space travel, time travel, faster than light travel, parallel universes, and extraterrestrial life. It often explores the potential consequences of scientific and other innovations, and has been called a "literature of ideas".[1] Authors commonly use science fiction as a framework to explore politics, identity, desire, morality, social structure, and other literary themes."
How can I get a similarity score for the description of each book against the description of the science fiction genre like pg_trgm using an R script?
How about something like this?
library(textcat)
?textcat_xdist
# Compute cross-distances between collections of n-gram profiles.
round(textcat_xdist(
  list(
    text1 = "hello there",
    text2 = "why hello there",
    text3 = "totally different"
  ),
  method = "cosine"),
  3)
# text1 text2 text3
#text1 0.000 0.078 0.731
#text2 0.078 0.000 0.739
#text3 0.731 0.739 0.000
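If you specifically want to reproduce the pg_trgm number rather than a generic n-gram distance, note that similarity(a, b) is essentially the Jaccard similarity of the two trigram sets: shared trigrams divided by total distinct trigrams. A rough Python sketch of that calculation is below; pg_trgm also lower-cases and pads each word with spaces, which is only approximated here, so scores may differ slightly from Postgres:
def trigrams(text):
    text = "  " + text.lower() + " "      # crude stand-in for pg_trgm's word padding
    return {text[i:i + 3] for i in range(len(text) - 2)}

def trigram_similarity(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)    # shared trigrams / all distinct trigrams

print(round(trigram_similarity("hello there", "why hello there"), 3))
This intersection-over-union step is also why the raw qgram count from stringdist has to be normalised before it can match the Postgres score.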
I have pairs "manager, worker" describing some hierarchy structure.
What will be the total number of "manager, worker" relations, including indirect ones like "manager - worker of my worker", "manager - worker of the worker of my worker", and so on?
For example:
Alex, Pete
Pete, Kane
Jones, Alex
Clark, Allen
The total number of connections is 7:
Jones -> Alex -> Pete -> Kane
Jones -> Pete
Jones -> Kane
Alex -> Kane
Clark -> Allen
I have to calculate the total number of connections for about ~20k relations.
Are there any special methods for doing so?
I think there can't be a mathematical solution without analyzing the structure,
and once you have analyzed it, it is straightforward to count them.
Here is a solution in Delphi, but you can easily translate it into any other language. You only have to fill the dictionary from your source of the 20k pairs.
program Project1;

{$APPTYPE CONSOLE}

uses
  Generics.Collections;

// Counts all direct and indirect manager->worker relations.
// Assumes each manager has at most one direct worker, as in the example.
function relationcount(dict: TDictionary<string, string>): Integer;

  // Chain length below _key: 1 for the direct worker plus whatever follows.
  function _relationcount(_key: string): Integer;
  begin
    if dict.ContainsKey(_key) then
      result := 1 + _relationcount(dict.Items[_key])
    else
      result := 0;
  end;

var
  k: string;
begin
  result := 0;
  for k in dict.Keys do
    result := result + _relationcount(k);
end;

var
  relations: TDictionary<string, string>;
begin
  relations := TDictionary<string, string>.Create;
  // replace this with your read-from-file algorithm
  relations.Add('Alex', 'Pete');
  relations.Add('Pete', 'Kane');
  relations.Add('Jones', 'Alex');
  relations.Add('Clark', 'Allen');
  // ------------------------------------------------
  writeln(relationcount(relations));
end.
Sure, there's a solution. Essentially you're asking for the sum of the number of supervisors each person has, so you could do something as simple as
s = 0
for each worker w
for each worker v
if v is above w in the supervisory chain then s <- s+1
return s
Depending on how your data is stored and what language you are using there might be a more efficient way of doing this.
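As a minimal sketch of that sum-of-supervisors idea (assuming, as in the example, that each worker has at most one direct manager, so the pairs form a forest; the names and helper names here are just illustrative):
from functools import lru_cache

# (manager, worker) pairs -- replace with the ~20k pairs read from your file
pairs = [("Alex", "Pete"), ("Pete", "Kane"), ("Jones", "Alex"), ("Clark", "Allen")]

manager_of = {worker: manager for manager, worker in pairs}

@lru_cache(maxsize=None)
def supervisors(person):
    """Number of direct and indirect managers above this person."""
    if person not in manager_of:
        return 0
    return 1 + supervisors(manager_of[person])

everyone = set(manager_of) | set(manager_of.values())
print(sum(supervisors(p) for p in everyone))   # 7 for the example data
The memoisation keeps this roughly linear in the number of people, so ~20k pairs is no problem; for an extremely deep chain you would want an iterative walk instead of recursion.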
Before you say that this has already been asked, know that I've already reviewed these:
Is there a standard for storing normalized phone numbers in a database? - This is from 2008, and says that, at the time, there was no such standard. I'm hoping that something changed in the last 13 years.
How to validate phone numbers using regex - I already have the parse; it's quite easy: If it's not a digit, skip it. This question is not about the parser, but the format in which I save/display it. I am not worried about how hard it is to parse, but whether it's in standard format.
Say I'm working on a program that has to deal with phone numbers, and I want to make sure that they're saved and displayed in a standard format, so other programs and humans can also understand them predictably & consistently.
For instance, I've seen the following all be valid representations for the same US phone number:
1234567
123-4567
123 4567
5551234567
(555) 1234567
555-1234567
555 123 4567
555-123-4567
(555)-123-4567
(555) 123-4567
(5) 123 4567
1-555-123-4567
(1) 555-123-4567
+1 555-123-4567
+1 555 123-4567
+1 (555) 123-4567
Ad nauseam…
And then different countries represent numbers in different ways:
55 1234 567 8901
55 12 3456 7890
55 123 456 7890
55 1234 567890
555 123 456
(55) 123 4567
5.555.123-45-67
Ad nauseam…
As you can see, the number of ways a user can see a valid phone number is nearly infinite (the Wikipedia page for Telephone numbers in the UK is 26 printer pages long). I want all the numbers in my database and on the screen to be in a universally recognizable format. As far as I can tell, ISO and ANSI have no defined format. Is there any standard notation for phone numbers?
There's no ISO standard, but there are ITU standards. You want E.123 and E.164.
In summary, any phone number is represented by +CC MMMMMM... where CC is the country code, and is one to three digits, and MMMMMM... is the area code (where applicable) and subscriber number. The total number of digits may not be more than 15. The + means "your local international dialling prefix".
I'll give a few examples using my own land line number.
So, for example, if you are in Germany, the number +44 2087712924 would be dialled as 00442087712924, and in the US you would dial it as 011442087712924. The 44 means that it is a UK number, and 2087712924 is the local part.
In practice, the long string of MMMMM... is normally broken up into smaller parts to make it easier to read. How you do that is country-specific. The example I give would normally be written +44 20 8771 2924.
As well as the unambiguous E.123 representation above, which you can use from anywhere in the world that allows international dialling, each country also has its own local method of representing numbers, and some have several. The example number will sometimes be written as 020 8771 2924 or (020) 8771 2924. The leading 0 is, strictly speaking, not part of the area code (that's 20) but a signal to the exchange meaning "here comes a number that could go outside the local area". Very occasionally the area code will be omitted and the number will be written 8771 2924.
All these local representations are ambiguous, as they could represent valid numbers in more than one country, or even valid numbers in more than one part of the same country. This means that you should always store a number with its country code, and ideally store it in E.123 notation.
In particular you should note that phone numbers ARE NOT NUMBERS. A number like 05 is the same as 5, but a phone number 05 is not the same as 5, and storage systems will strip leading zeroes from numbers. Store phone numbers as CHAR or VARCHAR in your database.
Finally, some oddities. The example number will be written by some stupid people as 0208 771 2924. This is diallable, but if you strip off the leading 0208 assuming that it is an area code, then the remainder is not valid as a local number. And some countries with broken phone systems [glares at North America] have utterly bonkers systems where in some places you must dial all 10 digits for a local call, some where you must not, some where you must dial 1NNN NNN NNNN, some where you must not include the leading one, and so on. In all such cases, storing the number as +CC MMMMM... is correct. It is up to someone actually making a call (or their dialling software) to figure out how to translate that into a dialable sequence of digits for their particular location.
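If you are normalising numbers in code, one hedged option (my suggestion, not something the standards require) is the third-party phonenumbers library, a Python port of Google's libphonenumber, which can produce the E.164 string for storage and an E.123-style international form for display:
import phonenumbers   # third-party: pip install phonenumbers

raw = "(020) 8771 2924"                       # the example number from above
parsed = phonenumbers.parse(raw, "GB")        # "GB" supplies the country when no +CC is given
if phonenumbers.is_valid_number(parsed):
    store = phonenumbers.format_number(parsed, phonenumbers.PhoneNumberFormat.E164)
    show  = phonenumbers.format_number(parsed, phonenumbers.PhoneNumberFormat.INTERNATIONAL)
    print(store)   # +442087712924    -> keep this in a CHAR/VARCHAR column
    print(show)    # +44 20 8771 2924 -> human-readable international form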
There are a lot of local country standards. On one of my projects I had the same problem and solved it like this:
In the DB everything is stored as bare digits: 123456789
Depending on the selected web page language, this number is pre-formatted when the page loads.
Examples:
France, Luxembourg format phone numbers like 12.34.56.78.90 or 12 34 56 78 90
Germany: 123 45 67 or 123-45-67
Great Britain: 020 1234 1234 or 1234 01234 12345
Italy, Netherlands: 12 1234567 or 020-1234567
CIS countries: 123 123-45-67
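A minimal sketch of that store-digits-and-format-at-render-time approach; the grouping patterns below are illustrative only, not the exact rules from the list above:
def format_digits(digits, locale):
    groups = {
        "fr": [2, 2, 2, 2, 2],   # e.g. 12 34 56 78 90
        "de": [3, 2, 2],         # e.g. 123 45 67
    }.get(locale)
    if not groups or len(digits) != sum(groups):
        return digits             # fall back to the raw stored digits
    parts, pos = [], 0
    for size in groups:
        parts.append(digits[pos:pos + size])
        pos += size
    return " ".join(parts)

print(format_digits("1234567890", "fr"))   # 12 34 56 78 90
print(format_digits("1234567", "de"))      # 123 45 67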