Diffstat (limited to 'sem7')
-rw-r--r--  sem7/db/lec5.org               |  42
-rw-r--r--  sem7/db/lec_xpath.md           |  41
-rw-r--r--  sem7/dist/lec_google/notes.md  |  55
-rw-r--r--  sem7/pp/larger.hs              |  29
-rw-r--r--  sem7/pp/lec6.hs                |   8
-rw-r--r--  sem7/pp/prolog/lec1.dl         |  33
-rw-r--r--  sem7/pp/prolog/lec3.pl         | 110
-rw-r--r--  sem7/pp/prolog/lec4.pl         |  50
-rw-r--r--  sem7/pp/prolog/notes_lec2.md   |  22
-rw-r--r--  sem7/pp/prolog/shell.nix       |  13
10 files changed, 395 insertions, 8 deletions
diff --git a/sem7/db/lec5.org b/sem7/db/lec5.org
new file mode 100644
index 0000000..6696a50
--- /dev/null
+++ b/sem7/db/lec5.org
@@ -0,0 +1,42 @@
+* Opgave 8.3 fra bogen
+
+ #+begin_quote
+ Consider the parallel hash join algorithm in Sect. 8.4.1.2.
+ Explain what the build phase and probe phase are.
+ Is the algorithm symmetric with respect to its input relations?
+ #+end_quote
+
+** Explain build and probe phases
+
+ The build phase hashes the =R= relation into $p$ partitions.
+ Then it sends the partitions to their respective nodes.
+ The nodes in $[1, p]$ each receive a partition and create a local hash table for their part $R_j$.
+
+ In the probe phase all nodes partition =S= the same way, and
+ the nodes in $[1, p]$ receive the partitions of =S= and probe them against the local hash tables of =R=.
+
+** Is the algorithm symmetric?
+
+ No it's not, because the inner relation (=R=) in the build phase must be completely received and hashed first.
+ The join cannot start probing while =R= is still arriving.
+
+ A symmetric algorithm allows changing the order in which the inputs are consumed.
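A local sketch of the two phases in Python (hypothetical relations and tuple layout, not the book's notation): build a hash table over =R= first, then stream =S= through it.

```python
from collections import defaultdict

def partitioned_hash_join(r, s, p=4):
    """Build phase over r, probe phase over s; tuples are (join_key, payload)."""
    # Build phase: hash r into p partitions and build a hash table per partition.
    tables = [defaultdict(list) for _ in range(p)]
    for key, payload in r:
        tables[hash(key) % p][key].append(payload)
    # Probe phase: route each s tuple to its partition and probe the table there.
    out = []
    for key, s_payload in s:
        for r_payload in tables[hash(key) % p].get(key, []):
            out.append((key, r_payload, s_payload))
    return out
```

Note the asymmetry: every tuple of `r` must be consumed before the first result is produced, while `s` can be streamed.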
+
+* Opgave 8.7 fra bogen
+
+ #+begin_quote
+ Consider a nine way join (ten relations are to be joined),
+ calculate the number of possible right-deep, left-deep, and bushy trees,
+ assuming that each relation can be joined with anyone else.
+ What do you conclude about parallel optimization?
+ #+end_quote
+
+ With right-deep we can join them in $10!$ different ways, as we just order them differently down the spine.
+ The same for left-deep.
+
+ For bushy trees we look at the leaves of the tree.
+ Here we can again order the 10 relations in $10!$ ways.
+ But we multiply this with the number of distinct binary tree shapes with 10 leaves,
+ which is the Catalan number $C_9 = 4862$.
+
+ Therefore there are $10!$ right-deep trees, $10!$ left-deep trees, and $4862 \cdot 10! \approx 1.76 \cdot 10^{10}$ bushy trees.
+ The search space grows far too fast to enumerate exhaustively,
+ so a parallel optimizer must restrict the space (e.g. to left-deep trees) or use heuristics.
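The counts can be checked with a few lines of Python (standard library only):

```python
from math import comb, factorial

def catalan(n):
    """Number of distinct binary tree shapes with n + 1 leaves."""
    return comb(2 * n, n) // (n + 1)

right_deep = factorial(10)          # one tree per ordering down the spine
left_deep = factorial(10)           # same count by symmetry
bushy = catalan(9) * factorial(10)  # tree shapes times leaf orderings
```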
diff --git a/sem7/db/lec_xpath.md b/sem7/db/lec_xpath.md
new file mode 100644
index 0000000..e28bd94
--- /dev/null
+++ b/sem7/db/lec_xpath.md
@@ -0,0 +1,41 @@
+
+XPath is used for querying information in XML documents.
+It can also be used with PostgreSQL databases.
+
+When using XPath, we query an abstract document structure,
+which is represented as a node tree.
+
+NOTE that `/` and `/something/else` in the slides are two different queries.
+
+# Nodetree
+
+**Document node** is the root of the tree, representing the whole document.
+It often contains document metadata, and points at a single root element.
+
+**Element nodes** represent the XML elements.
+
+# Location Step
+
+A location step is the part between two `/`'s.
+It looks like `${axis}::${node test}[${predicate_1}][${predicate_2}]`.
+
+The axis defaults to `child`.
+But there are many others, most of which also have abbreviations.
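For example, some common abbreviations and the full location steps they stand for (the element names are made up):

```
book    =  child::book
@id     =  attribute::id
//book  =  /descendant-or-self::node()/child::book
.       =  self::node()
..      =  parent::node()
```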
+
+# Functions and Expressions
+
+Functions such as concat can be called.
+Their arguments can themselves be queries.
+
+```
+concat("hej", /med/dig/text())
+```
+
+# Questions
+
+Will this return the root element multiple times?
+```
+//course/..
+```
+
+Nope, the result is a set, so the same node will not appear in it more than once.
diff --git a/sem7/dist/lec_google/notes.md b/sem7/dist/lec_google/notes.md
new file mode 100644
index 0000000..a246d60
--- /dev/null
+++ b/sem7/dist/lec_google/notes.md
@@ -0,0 +1,55 @@
+# GFS Design
+
+GFS is optimized towards Google's workload:
+
+ - Large sequential reads, with few random reads.
+ - Frequent concurrent appends, with few random writes.
+ - Sustained throughput is more important than latency.
+
+ We therefore have many writers (e.g. web crawlers) with a single reader (e.g. the indexer).
+
+## Appending
+
+When crawlers append to the file that the indexer reads, the exact offset of each append does not matter.
+So control over the offset is given up, to allow for many writers at the same time.
+
+## Chunking
+
+The large files are split into chunks, which are replicated across multiple chunkservers.
+
+A *chunkhandle* is the index of a single chunk.
+The *chunkmaster* holds a dictionary which maps a file name and offset to a chunkhandle and a list of servers.
+The list contains the servers on which the wanted chunk is replicated.
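That dictionary can be sketched in Python (hypothetical handles and server names; the real master also tracks leases and chunk versions):

```python
CHUNK_SIZE = 64 * 2**20  # GFS uses 64 MiB chunks

# Master metadata: (file name, chunk index) -> (chunkhandle, replica servers).
chunk_table = {
    ("/logs/crawl.0", 0): (0x1A2B, ["cs1", "cs7", "cs9"]),
    ("/logs/crawl.0", 1): (0x1A2C, ["cs2", "cs3", "cs7"]),
}

def lookup(filename, byte_offset):
    """Translate a byte offset into a chunkhandle plus replica list."""
    index = byte_offset // CHUNK_SIZE
    return chunk_table[(filename, index)]
```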
+
+## Replication
+
+**TODO: Read up on this.**
+
+Ways of writing data to replicas:
+ - *Passive replication*
+
+## Consistency
+
+File creation and changes (*namespace mutations*) are done by the master, and are thus atomic.
+
+A file region is
+ - *Consistent* if all clients see the same data.
+ - *Defined* if it is consistent and clients see each write in its entirety,
+   so clients will not see a partial write. (**TODO** Not entirely sure about this)
+
+Records are appended atomically *at least once somewhere*.
+
+## Master Fault Tolerance
+
+Master state is maintained in stable storage.
+
+Can have multiple (read-only) *shadow masters*, which provide read access.
+
+External program will detect master failure, and select a new master.
+
+# Chubby
+
+A lock service intended to be highly reliable.
+
+Locks are *coarse-grained*: they are held for a long time, hours or days.
diff --git a/sem7/pp/larger.hs b/sem7/pp/larger.hs
new file mode 100644
index 0000000..5618faf
--- /dev/null
+++ b/sem7/pp/larger.hs
@@ -0,0 +1,29 @@
+import Data.Maybe
+
+data LTree = LLeaf String | LNode LTree LTree
+data LTreeRoot = LTreeRoot LTree Float
+
+treemerge :: LTreeRoot -> LTreeRoot -> LTreeRoot
+treemerge (LTreeRoot ta la) (LTreeRoot tb lb) = LTreeRoot (LNode ta tb) (la + lb)
+
+data Bit = Zero | One
+
+instance Show Bit where
+ show Zero = "0"
+ show One = "1"
+
+-- Parse a string of '0'/'1' characters into bits; any other character gives Nothing.
+createBitList :: String -> Maybe [Bit]
+createBitList [] = Just []
+createBitList (c:rest) | c == '0' = (Zero :) <$> createBitList rest
+                       | c == '1' = (One :) <$> createBitList rest
+                       | otherwise = Nothing
+
+-- Walk the tree, accumulating the (reversed) path of bits down to each leaf.
+maketable :: LTree -> [([Bit], String)]
+maketable = recur []
+ where recur prefix (LLeaf val) = [(reverse prefix, val)]
+ recur prefix (LNode left right) =
+ recur (Zero : prefix) left ++
+ recur (One : prefix) right
+
+
+testTree = LNode (LLeaf "Hej") (LNode (LLeaf "Med") (LLeaf "Dig"))
diff --git a/sem7/pp/lec6.hs b/sem7/pp/lec6.hs
index 4598d7a..31f666c 100644
--- a/sem7/pp/lec6.hs
+++ b/sem7/pp/lec6.hs
@@ -94,11 +94,3 @@ filterWithFold func = foldr (\new pre -> if func new
else pre
) []
-
--- Problems part of larger thing
-data LTreeMid = LLeaf Char | LNode LTreeMid LTreeMid
-data LTree = LTree LTreeMid Float
-
-treemerge :: LTree -> LTree -> LTree
-treemerge (LTree ta la) (LTree tb lb) = LTree (LNode ta tb) (la + lb)
-
diff --git a/sem7/pp/prolog/lec1.dl b/sem7/pp/prolog/lec1.dl
new file mode 100644
index 0000000..84154c2
--- /dev/null
+++ b/sem7/pp/prolog/lec1.dl
@@ -0,0 +1,33 @@
+#lang datalog
+
+% It finds whether a parent is a father or a mother depending on the gender of the parent.
+
+
+
+% Opgave 2
+
+% In the first program, a person is only happy if he/she is both rich and famous.
+% In the second program, a person is happy if he/she is either rich or famous.
+
+
+% Opgave 3
+
+% There is one program containing 3 clauses.
+% The first two clauses are called facts each only containing a single atom.
+% The first two facts each contain an atom with the single term beyonce, which is a constant.
+
+% Last line is a clause with a head and a body containing two atoms.
+% The body uses the predicates created in the last two lines, while the clause introduces the new predicate happy.
+% The clause uses the variable Person, which is used once in all 3 atoms.
+
+
+% Opgave 4
+
+distinct(red, green).
+distinct(green, blue).
+distinct(red, blue).
+
+% It is symmetric
+distinct(X, Y) :- distinct(Y, X).
+
+colouring(X, Y, XC, YC) :- neighbour(X, Y), distinct(XC, YC).
diff --git a/sem7/pp/prolog/lec3.pl b/sem7/pp/prolog/lec3.pl
new file mode 100644
index 0000000..d434134
--- /dev/null
+++ b/sem7/pp/prolog/lec3.pl
@@ -0,0 +1,110 @@
+
+/* Opgave 1
+
+{
+ loves(rose, jack).
+ loves(jack, rose).
+ loves(caledon, rose).
+ happy(rose).
+ happy(jack).
+}
+{
+ loves(rose, jack).
+ loves(caledon, rose).
+}
+{
+ loves(jack, rose).
+ loves(caledon, rose).
+}
+{
+ loves(rose, jack).
+ loves(jack, rose).
+}
+{
+ loves(rose, jack).
+ loves(jack, rose).
+ loves(caledon, rose).
+ happy(jack).
+}
+{
+ loves(rose, jack).
+ loves(jack, rose).
+ loves(caledon, rose).
+ happy(rose).
+}
+{
+ loves(rose, jack).
+ loves(jack, rose).
+ loves(caledon, rose).
+}
+{
+ loves(rose, jack).
+ loves(jack, rose).
+ happy(jack).
+}
+{
+ loves(rose, jack).
+ loves(jack, rose).
+ happy(rose).
+}
+{
+ loves(rose, jack).
+ loves(jack, rose).
+ happy(rose).
+ happy(jack)
+}
+
+Well okay i feel stupid
+
+We say that the universe U_p = {rose, jack, caledon}.
+We will then say that the base is:
+
+U_b = { loves(x, y) | x, y \in U_p } \cup { happy(x) | x \in U_p }
+
+Then all the interpretations are.
+
+I = { S | S \subseteq U_b }
+
+*/
+
+
+/* Opgave 2
+
+loves(rose, jack).
+happy(rose)
+
+happy(rose) <= loves(rose, jack), loves(jack, rose)
+The interpretation already contains happy(rose), so the head of the clause is true,
+and the clause is satisfied regardless of whether the body holds.
+
+*/
+
+/* Opgave 3
+
+I_4 and I_5 are models of P, where I_4 just has an extra happy(caledon).
+
+Here I_5 is minimal, because no smaller model of P exists.
+
+*/
+
+/* Opgave 4
+
+M_1 = T_P(Ø) = {god(odin),son(odin,thor),son(odin,baldr),son(thor,mothi),son(thor,magni)}
+M_2 = T_P(M_1) = M_1 \cup {demigod(thor),demigod(baldr)}
+M_3 = T_P(M_2) = M_2 \cup {mortal(mothi),mortal(magni)}
+
+*/
+
+/* Opgave 5
+
+B -+-> A
+D -+-> C
+B -+-> A
+C ---> A
+C -+-> B
+D ---> B
+
+D, C -> A, B
+D -> C -> B -> A
+
+It is stratifiable.
diff --git a/sem7/pp/prolog/lec4.pl b/sem7/pp/prolog/lec4.pl
new file mode 100644
index 0000000..36d3880
--- /dev/null
+++ b/sem7/pp/prolog/lec4.pl
@@ -0,0 +1,50 @@
+
+last([X], X).
+last([_|XS], X) :- last(XS, X).
+
+/*
+last([1,2,3], X) # Uses rule 2
+last([2,3], Y) # Uses rule 2
+last([3], Z) # Uses rule 1, so Z = 3
+# So Y = Z = 3
+# So X = Y = Z = 3
+
+
+Okay, it is also fine to just write
+last([1,2,3],X)
+ |
+last([2,3],X)
+ |
+last([3], X)
+*/
+
+attach([], E, [E]).
+attach([X|XS], E, [X|R]) :- attach(XS, E, R).
+
+
+/*
+Proof search of attach
+
+attach([1,2], a, L)
+ | X = 1, XS = [2], E = a, L = [1|L1]
+attach([2], a, L1)
+ | X1 = 2, XS1 = [], E1 = a, L1 = [2|L2]
+attach([], a, L2)
+ | E2 = a, L2 = [a]
+*/
+
+nat(zero).
+nat(succ(X)) :- nat(X).
+
+after(succ(X), X).
+
+leq(zero, Y) :- nat(Y).
+leq(succ(X), succ(Y)) :- leq(X, Y), nat(X), nat(Y).
+
+add(X, zero, X) :- nat(X).
+add(X, succ(Y), succ(R)) :- add(X, Y, R), nat(X), nat(Y), nat(R).
+
+sub(X, Y, R) :- add(R, Y, X).
+
+min(X, Y, X) :- leq(X, Y).
+min(succ(X), Y, Y) :- leq(Y, X).
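/* Example queries for the Peano arithmetic above (hand-evaluated sketch,
   first answers only):

?- add(succ(zero), succ(succ(zero)), R).
R = succ(succ(succ(zero))).        % 1 + 2 = 3

?- sub(succ(succ(succ(zero))), succ(zero), R).
R = succ(succ(zero)).              % 3 - 1 = 2

?- min(succ(zero), succ(succ(zero)), M).
M = succ(zero).                    % min(1, 2) = 1
*/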
diff --git a/sem7/pp/prolog/notes_lec2.md b/sem7/pp/prolog/notes_lec2.md
new file mode 100644
index 0000000..49511e8
--- /dev/null
+++ b/sem7/pp/prolog/notes_lec2.md
@@ -0,0 +1,22 @@
+# Logic Programming
+
+## Herbrand
+
+The *herbrand universe* denotes all the constants in the program.
+Thus all the things we can talk about.
+
+While the *Herbrand base* is the set of all possible facts:
+everything one can build by applying the predicates to elements of the universe.
+So any Herbrand interpretation, and hence any model, is a subset of the *Herbrand base*.
+Just because something is in the *Herbrand base* does not mean that it holds.
+
+The *Herbrand universe* is what we can talk about, while the *Herbrand base* is what we can say about it.
+
+## Minimal Model
+
+A minimal model for some program P is a Herbrand model of P of which no proper subset is also a model of P.
+
+A finite model can be computed as the *least fixed point* of the immediate-consequence operator T_P,
+which repeatedly adds every fact whose clause body is already satisfied.
+For definite programs this least fixed point is exactly the minimal model, so the two notions coincide.
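The least-fixed-point computation can be sketched in Python (a toy ground program, loosely reusing the odin/thor facts from lecture 3; the encoding is illustrative):

```python
def tp(rules, facts):
    """One application of the immediate-consequence operator T_P."""
    return facts | {head for head, body in rules if body <= facts}

def least_fixed_point(rules):
    """Iterate T_P from the empty set until nothing new can be derived."""
    facts = set()
    while True:
        new = tp(rules, facts)
        if new == facts:
            return facts
        facts = new

# Ground clauses as (head, body); facts have empty bodies.
rules = [
    (("god", "odin"), frozenset()),
    (("son", "odin", "thor"), frozenset()),
    (("demigod", "thor"), frozenset({("god", "odin"), ("son", "odin", "thor")})),
]
model = least_fixed_point(rules)  # the minimal Herbrand model of this program
```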
+
+
diff --git a/sem7/pp/prolog/shell.nix b/sem7/pp/prolog/shell.nix
new file mode 100644
index 0000000..45b9e32
--- /dev/null
+++ b/sem7/pp/prolog/shell.nix
@@ -0,0 +1,13 @@
+{ pkgs ? import <nixpkgs> {} }:
+let
+ pkgs_old = import (builtins.fetchTarball {
+ name = "nixpkgs-racket";
+ url = "https://github.com/nixos/nixpkgs/archive/1bf0327ef6d7e0959f3250758b4d33b4066a732b.tar.gz";
+ sha256 = "1pz4xaimpb1y8xmqz9c8a2g1nsr77jc7nxi6m8v4ph8q1r3c7pz9";
+ }) {};
+in
+pkgs.mkShell {
+ buildInputs = with pkgs; [
+ pkgs_old.racket swiProlog
+ ];
+}