Saturday 29 November 2014

Welcome to my final slog for CSC165. It’s crazy how we are already at the end of the term. In this slog, I will quickly go over weeks 11 and 12 and also briefly cover Big Theta. In weeks 11 and 12, halting, computability, and countability were the main topics we covered.

Going off last week’s slog, where I talked about Big Oh and Big Omega, which are the upper bound and lower bound respectively, Big Theta is basically the combination of the two. Big Theta represents a tight bound, where a function is both lower bounded and upper bounded by constant multiples of another function. In symbolic notation, the statement would be:

f ∈ Θ(g)
This means:

Θ(g) = {f : ℕ → ℝ⁺ | ∃ c₁ ∈ ℝ⁺, ∃ c₂ ∈ ℝ⁺, ∃ B ∈ ℕ, ∀ n ∈ ℕ, n ≥ B ⇒ c₁g(n) ≤ f(n) ≤ c₂g(n)}

Notice the f : ℕ → ℝ⁺ at the beginning of the statement. That means we are passing in a natural number as the input and getting a positive real number as the output of the function f.
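As a quick sanity check, a claimed Θ bound can be tested numerically for small n. This is a toy example of my own, not one from lecture, with hand-picked constants:

# toy claim (mine): f(n) = 3n² + n is in Θ(n²),
# with constants c1 = 3, c2 = 4 and breakpoint B = 1
f = lambda n: 3 * n**2 + n
g = lambda n: n**2
c1, c2, B = 3, 4, 1
for n in range(B, 1000):
    assert c1 * g(n) <= f(n) <= c2 * g(n)

Of course a loop over finitely many n proves nothing; it just catches bad constants before we try to prove the implication for all n ≥ B.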

Now onto halting, computability, and countability. Computers solve problems with algorithms in a systematic way, but there are problems that cannot be solved by any algorithm. The halting function is one example: it is non-computable.

In CSC165, we learn to decide whether a function is computable or not by using reductions.

Suppose we have a function f. If we can take some well-defined function g and extend it to build f, then:

g computable ⇒ f computable

We say f reduces to g, and using the contrapositive:

f non-computable ⇒ g non-computable

To prove that a function is computable, show that this function reduces to a function g that is computable.

To prove that a function is non-computable, show that a non-computable function f reduces to this function.

Let’s take the function initialized(f, v), where f is a function and v is a variable: it returns True if and only if v is initialized before its first use in f.

Then for a function f1(x) that returns x+1 and prints a variable y after the return, initialized(f1, y) will return True, because y will never actually be used in f1 (the print is unreachable).

For a function f2(x) that returns x+y+1, initialized(f2, y) will return False, because y has not been initialized before being used in the function.

Now for a problem: let’s prove that initialized(f, v) is non-computable. To do that we need to show that some non-computable function reduces to initialized(f, v). This non-computable function can be halt(f, n), where f is a function and n is an input. So we need to show that halt(f, n) reduces to initialized(f, v); in other words, implement halt(f, n) using initialized(f, v).

First we write the function halt(f, n), calling initialized(f, v) inside it, along with another function that we can hand to initialized as its f. For this function, let’s say g, it will take an argument x, call the function f with the argument n that was passed in to halt(f, n), and then call print(v). At the end of halt(f, n), we return the negation of whatever initialized(g, v) returns.

If f(n) halts, then g gets to print(v), but v has not been initialized anywhere in the code, so initialized(g, v) returns False and “return not initialized(g, v)” returns True.

If f(n) doesn’t halt, then g never gets to print(v), so initialized(g, v) returns True and “return not initialized(g, v)” returns False.
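Putting the whole construction into Python makes the shape of the reduction clearer. This is only a sketch: initialized is the hypothetical checker from above, so nothing here can actually run as a working halt.

def halt(f, n):
    def g(x):
        f(n)        # loops forever exactly when f(n) doesn't halt
        print(v)    # v is used here without ever being initialized

    # f(n) halts       => g reaches print(v)     => initialized(g, v) is False
    # f(n) never halts => print(v) is unreachable => initialized(g, v) is True
    return not initialized(g, "v")   # the hypothetical checker from above

So if initialized were computable, halt would be too, and since halt is non-computable, initialized must be non-computable as well.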

But why is halt non-computable in the first place? No program can correctly match every (function, input) pair with the right True/False value. This comes down to counting infinite sets. So now the question is, how do we count infinite sets? Just as when counting finite sets, we match up elements using a function that is one-to-one and onto. The definition of one-to-one is:

∀ x, y ∈ A, x ≠ y ⇒ f(x) ≠ f(y)

And the definition of onto is:

∀ y ∈ B, ∃ x ∈ A, f(x) = y

A set is countable if and only if its elements can be written out as a list. For example:
n → f(n)
0 → 0
1 → 2
2 → 4
3 → 6
4 → 8

Each item in the f(n) column corresponds to exactly one element of ℕ, so the even numbers are countable.
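In Python terms (my own illustration), the list above is just the function f(n) = 2n, a one-to-one and onto map from ℕ to the even naturals:

f = lambda n: 2 * n               # the pairing from the list above
print([f(n) for n in range(5)])   # [0, 2, 4, 6, 8]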

We also talked a little about simple induction, which is actually something we would do in CSC236. Let’s say we have a predicate P; we know that P(0) is true, and we want to show that the truth transfers over to P(1), P(2), … P(n). In notation it would be,

∀ n ∈ ℕ, P(n) ⇒ P(n+1)

Then we believe that P is true for all natural numbers. 

∀ n ∈ ℕ, P(n)

But not all predicates will start with a base case where P(0) is true. Some will start later: for example, P(0), P(1), and P(2) might be false, but then P(3), P(4), P(5), and onward are all true. So there are two conditions we need to consider.

Suppose P(n) is a predicate over ℕ. Then P is true from its starting point onward if P
     1) starts out true at P(0), or at P(k) for some k ∈ ℕ, and
     2) the truth of P transfers from each number to the next.
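Here is a toy predicate of my own that shows why the starting point matters: induction on it has to start at k = 5, because the truth fails at n = 2, 3, 4 even though P(0) happens to hold.

# P(n): 2ⁿ > n², true at 0 and 1, false for 2, 3, 4, true from 5 onward
P = lambda n: 2**n > n**2
print([n for n in range(10) if P(n)])   # [0, 1, 5, 6, 7, 8, 9]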

And that is what we covered in the last few weeks of CSC165. I’m still practicing Big Oh, Big Omega, Big Theta, and halting problems, and I hope that I will be able to do well on Assignment 3. CSC165 has been very interesting, and any computer science students taking it in future terms should definitely pay attention in this course. Thanks for reading!

Wednesday 19 November 2014

Hello and welcome to this week’s slog. This slog will cover weeks 9 and 10. Also, due to CDF being down this weekend, this week’s slog was a little delayed. In this slog I will talk about what happened during these two weeks, which was mainly about Big Oh and Big Omega. In my previous slog, I mentioned that I didn’t really understand Big Oh, but after doing some examples in lectures, I sort of get how to do the proofs.

Basically, from what I understand, when we want Big Oh we are looking for an upper bound on the worst-case running time of an algorithm, for all inputs of any size. When we want to show that a function is in Big Oh of another function, in symbolic notation it would be:

f ∈ O(g)

What we are really saying is:
∃ c ∈ ℝ⁺, ∃ B ∈ ℕ, ∀ n ∈ ℕ, n ≥ B ⇒ f(n) ≤ c·g(n)

What this means is that there exists a positive real number c and a natural number B such that, for every natural number n, if n is greater than or equal to B, then f(n) is less than or equal to g(n) multiplied by c. Since we are looking for the upper bound, we want to prove that the function f will be at most a constant multiple of the function g, or in other words, f is no worse than g in the worst case.
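For a concrete toy example of my own (not from lecture): 2n + 3 ∈ O(n) works with c = 5 and B = 1, since for n ≥ 1 we have 2n + 3 ≤ 2n + 3n = 5n. A quick numeric spot check:

c, B = 5, 1
for n in range(B, 1000):
    assert 2*n + 3 <= c * n   # f(n) ≤ c·g(n) for every tested n ≥ B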

For Big Omega, it is slightly different where we are looking for the lower bound. When we want to find if a function is in Big Omega of another function, in symbolic notation it would be:
f ∈ Ω(g)

What we are really saying is:
∃ c ∈ ℝ⁺, ∃ B ∈ ℕ, ∀ n ∈ ℕ, n ≥ B ⇒ f(n) ≥ c·g(n)

Now the only difference between Big Oh and Big Omega is the direction of the inequality: here the function f is at least as big as a constant multiple of the function g, or in other words, f is at least as bad as g in the worst case.

Okay, so let’s actually do an example.

Prove that 5n⁴ – 3n² + 1 ∈ O(6n⁵ – 4n³ + 2n)

So this statement in symbolic notation would be:

∃ c ∈ ℝ⁺, ∃ B ∈ ℕ, ∀ n ∈ ℕ, n ≥ B ⇒ 5n⁴ – 3n² + 1 ≤ c(6n⁵ – 4n³ + 2n)

So starting off, we need to pick a c and a B, but we don’t know what to pick yet, so we will have to do some work to figure it out. Also, how do we go about showing that the consequent is true? Let’s do some scratch work.

Let’s start by working on the left side

5n⁴ – 3n² + 1 ≤ 5n⁴ + 1   # dropping the –3n² can only make the value bigger
              ≤ 5n⁴ + n⁴  # for n ≥ 1, rewrite 1 as at most n⁴
              = 6n⁴       # add the two terms together
              ≤ n·n⁴      # if we require n ≥ 6, so we can pick B to be 6
              = n⁵        # add the exponents

Now what? Well, I think that’s the most we can do for the left side, so how about doing some work on the right side?

6n⁵ – 4n³ + 2n ≥ 6n⁵ – 4n³  # just like for the left side, dropping the +2n can only make the value smaller
               ≥ 6n⁵ – 4n⁵  # since –n⁵ ≤ –n³
               = 2n⁵        # subtract the two terms
               ≥ n⁵         # drop the 2; 2n⁵ is at least n⁵ by itself

Okay, now we can piece these two together and start to write our proof.

Pick c = 1, then c ∈ ℝ⁺
Pick B = 6, then B ∈ ℕ
Assume n ∈ ℕ and n ≥ B # universal quantifier assumption and antecedent
            Then 5n⁴ – 3n² + 1 ≤ 5n⁴ + 1
                               ≤ 5n⁴ + n⁴       # n ≥ 1
                               = 6n⁴
                               ≤ n·n⁴           # n ≥ 6, since we picked B = 6
                               = n⁵
                               ≤ 2n⁵
                               = 6n⁵ – 4n⁵
                               ≤ 6n⁵ – 4n³      # –n⁵ ≤ –n³
                               ≤ 6n⁵ – 4n³ + 2n # and since c = 1, this is c(6n⁵ – 4n³ + 2n)
Then ∀ n ∈ ℕ, n ≥ B ⇒ 5n⁴ – 3n² + 1 ≤ c(6n⁵ – 4n³ + 2n) # introduce ⇒ and ∀
Then ∃ c ∈ ℝ⁺, ∃ B ∈ ℕ, ∀ n ∈ ℕ, n ≥ B ⇒ 5n⁴ – 3n² + 1 ≤ c(6n⁵ – 4n³ + 2n) # introduce ∃
Then 5n⁴ – 3n² + 1 ∈ O(6n⁵ – 4n³ + 2n) # conclude proof
The reason I chose this proof, which is a question from Tutorial #8, is because I was stuck at the part where we can’t do anything more to the left side of the consequent. I did not realize that we had to work backwards on the right side to get the two functions to match up. I decided to use this as an example to help explain Big Oh and also how to prove that something is in Big Oh.
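Chains of inequalities like this are easy to get wrong, so I like to sanity-check the final claim numerically. This is just a spot check over small n, not part of the proof:

# check the concluded bound with the picked constants c = 1, B = 6
c, B = 1, 6
for n in range(B, 2000):
    assert 5*n**4 - 3*n**2 + 1 <= c * (6*n**5 - 4*n**3 + 2*n)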

To show a proof where a function is not in Big Oh of another function, the symbolic notation would be:
f ∉ O(g)
Which says:
∀ c ∈ ℝ⁺, ∀ B ∈ ℕ, ∃ n ∈ ℕ, n ≥ B ∧ f(n) > c·g(n)

This time, the statement says that for every positive real number c and every natural number B, there exists a natural number n such that n is greater than or equal to B and f(n) is greater than c·g(n). This means that the function f is not bounded above by any constant multiple of g, or in other words, f eventually does worse than g.
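One way I think of it is as a game (my own example, not from lecture): whatever c and B we are handed, we must produce a witness n. To show n² ∉ O(n), picking n larger than both B and c always works, since then n² = n·n > c·n:

def breaking_n(c, B):
    n = max(B, int(c) + 1)            # guarantees n >= B and n > c
    assert n >= B and n**2 > c * n    # the witness breaks the bound
    return n

print(breaking_n(c=100.0, B=50))      # 101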

For a function not in Big Omega of another function, the symbolic notation would be:

f ∉ Ω(g)
Which says:
∀ c ∈ ℝ⁺, ∀ B ∈ ℕ, ∃ n ∈ ℕ, n ≥ B ∧ f(n) < c·g(n)

So the function f is not bounded below by any constant multiple of g, or in other words, f eventually does better than g.


This is so far my understanding of Big Oh and Big Omega. As we get closer to the exam date, I hope to understand more of how to do the proofs and the logic behind them as well. I hope that by practicing through the questions on Assignment 3, I will be able to figure out how to do them. Thanks for reading.

Sunday 2 November 2014

Hello and welcome! This slog will cover weeks 7-8. In week 7 we mainly focused on proofs by cases, inference rules, and sorting algorithms. In week 8, we mainly focused on worst case, counting steps, and big-Oh.

So in week 7, the material covered was very familiar, as it was just going over some of the things we have already been doing. Proofs by cases were nothing too new and were very easy to follow. For these proofs, the idea is that if you’re given a statement to prove, but there are two or perhaps more parts to the statement, then you would organize the proof into sub-proofs, or cases. For example, take the statement, “For every natural number n, the sum of n² and n is even.” Natural numbers can be even or odd, so we would split the proof into two cases, where the first case assumes n is an even number and the second case assumes n is an odd number; a quick check of both cases is sketched below.
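The two cases can also be spot-checked in Python (a quick check of my own): if n = 2k then n² + n = 2(2k² + k), and if n = 2k + 1 then n² + n = 2(2k² + 3k + 1), so the sum is even either way.

for n in range(1000):
    assert (n**2 + n) % 2 == 0   # even in both the even case and the odd case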

Next we talked about inference rules in our proofs, like implication, universal, and existential introduction, as well as conjunction and disjunction introduction. These are things that we have already used in our proofs, where we either assume what we’re given or pick something that belongs to the specified domain. These “rules” have already been applied in the proofs we’ve been doing, so I won’t go into much detail about them.

Lastly in week 7, we started talking about sorting algorithms. We talked about insertion sort, selection sort, and other sorts such as Timsort and merge sort. Basically, the idea is to ask which sorting method is the best for sorting some cards in your hand. Well, it actually depends on what cards you have in your hand (for example, numbers, colours, and suits) and how you want to sort them. I myself am not too familiar with each sorting method, but the basic move is comparing one card in your hand with the card on its right or left, seeing which one is greater or smaller, and then moving that card into the order that you want.
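As a sketch of that card idea (a minimal version of my own, not the exact code from lecture), here is insertion sort in Python:

def insertion_sort(hand):
    # sort the list in place, like inserting cards into a sorted hand
    for i in range(1, len(hand)):
        card = hand[i]
        j = i - 1
        while j >= 0 and hand[j] > card:
            hand[j + 1] = hand[j]   # shift bigger cards one spot right
            j -= 1
        hand[j + 1] = card          # drop the card into its place

hand = [7, 2, 9, 4]
insertion_sort(hand)
print(hand)   # [2, 4, 7, 9]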

Then in week 8, we talked about counting steps with those sorting methods. Counting the number of steps it takes to sort something is really what we want to know. We talked about a Python program with a while loop, and we basically had to count the number of steps the program takes. For example,
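The function looked something like this linear search (reconstructed from memory, so the details are assumed):

def linear_search(A, x):
    i = 0                 # runs once
    while i < len(A):     # runs at most len(A) + 1 times
        if A[i] == x:
            return i      # runs at most once
        i = i + 1         # runs once per loop iteration
    return -1             # runs once, only if the loop finishes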


When this function is called, the first line runs only once; the while loop will run until its condition is false, and then checks once more to jump out of the loop. The lines of code inside the while loop run as many times as the loop does. The return statement inside the loop only executes when the if condition is true, which means it runs at most once. After the whole while loop executes, the final return statement runs once as well. If the first return statement in the while loop executes, then the bottom return statement is skipped.

In 165, we mainly focus on the worst case. I’m not too sure what worst case is, and I’m still trying to wrap my head around big-Oh. I hope by next week I’ll figure it out through tutorials. We also talked about the upper bound and lower bound, where the worst case will be no worse than the upper bound and at least as bad as the lower bound. I really wish I could explain big-Oh further, but again, I still don’t really understand the concept yet, so by next slog I will try to talk more about big-Oh and the proofs for upper and lower bound.


Thanks for reading!