DAA- Slip test questions- answers-CSE



1. What is Space complexity?
When we design an algorithm to solve a problem, it needs some computer memory to complete its execution. For any algorithm, memory is required for the following purposes...
Memory required to store program instructions
Memory required to store constant values
Memory required to store variable values
And for a few other things

Space complexity of an algorithm can be defined as follows...

The total amount of computer memory required by an algorithm to complete its execution is called the space complexity of that algorithm.
Generally, when a program is under execution, it uses computer memory for THREE purposes. They are as follows...
Instruction Space: It is the amount of memory used to store the compiled version of the program instructions.
Environmental Stack: It is the amount of memory used to store information about partially executed functions at the time of a function call.
Data Space: It is the amount of memory used to store all the variables and constants.
Note:- When we want to perform the analysis of an algorithm based on its Space complexity, we consider only Data Space and ignore Instruction Space as well as Environmental Stack. That means we calculate only the memory required to store variables, constants, structures, etc.

To calculate the space complexity, we must know the memory required to store values of the different datatypes (this depends on the compiler). For example, a 16-bit C compiler requires the following...
2 bytes to store an Integer value
4 bytes to store a Floating Point value
1 byte to store a Character value
8 bytes to store a Double value
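These sizes vary from compiler to compiler (on most modern compilers an int takes 4 bytes and a double takes 8 bytes). A minimal C sketch, not part of the original notes, that prints the sizes used by whatever compiler you run it with, using the standard sizeof operator:

#include <stdio.h>

int main()
{
    /* sizeof reports how many bytes each datatype occupies for this compiler */
    printf("int    : %zu bytes\n", sizeof(int));
    printf("float  : %zu bytes\n", sizeof(float));
    printf("char   : %zu bytes\n", sizeof(char));
    printf("double : %zu bytes\n", sizeof(double));
    return 0;
}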
Example 1
Consider the following piece of code...

int square(int a)
{
return a*a;
}
In the above piece of code, 2 bytes of memory are required to store the variable 'a' and another 2 bytes are used for the return value. That means it requires a total of 4 bytes of memory to complete its execution, and these 4 bytes are fixed for any input value of 'a'. This space complexity is said to be Constant Space Complexity.
If any algorithm requires a fixed amount of space for all input values, then its space complexity is said to be Constant Space Complexity.
Example 2
Consider the following piece of code...
int sum(int A[], int n)
{
   int sum = 0, i;
   for(i = 0; i < n; i++)
      sum = sum + A[i];
   return sum;
}
In the above piece of code, it requires
'2n' bytes of memory to store the array parameter 'A[]'
2 bytes of memory for the integer parameter 'n'
4 bytes of memory for the local integer variables 'sum' and 'i' (2 bytes each)
2 bytes of memory for the return value
That means it requires a total of '2n + 8' bytes of memory to complete its execution. Here, the amount of memory depends on the input value of 'n'. This space complexity is said to be Linear Space Complexity.
If the amount of space required by an algorithm increases with the increase of the input value, then that space complexity is said to be Linear Space Complexity.


What is Time complexity?
Every algorithm requires some amount of computer time to execute its instructions and perform the task. This computer time required is called the time complexity. The time complexity of an algorithm can be defined as follows...
The time complexity of an algorithm is the total amount of time required by an algorithm to complete its execution.
Generally, the running time of an algorithm depends upon the following...
Whether it is running on a single processor machine or a multiprocessor machine
Whether it is a 32-bit machine or a 64-bit machine
The read and write speed of the machine
The time it takes to perform arithmetic operations, logical operations, return value and assignment operations, etc.
The input data
Note: When we calculate the time complexity of an algorithm, we consider only the input data and ignore the remaining factors, as they are machine dependent. We check only how our program behaves for different input values while performing all the operations like arithmetic, logical, return value and assignment, etc.

Calculating the Time Complexity of an algorithm based on the system configuration is a very difficult task, because the configuration changes from one system to another. To solve this problem, we must assume a model machine with a specific configuration, so that we can calculate a generalized time complexity according to that model machine.
To calculate time complexity of an algorithm, we need to define a model machine. Let us assume a machine with following configuration...
Single processor machine
32-bit Operating System machine
It performs sequential execution
It requires 1 unit of time for Arithmetic and Logical operations
It requires 1 unit of time for Assignment and Return value
It requires 1 unit of time for Read and Write operations
Now, we calculate the time complexity of the following example code by using the above defined model machine...
Example 1
Consider the following piece of code...

int sum(int a, int b)
{
   return a+b;
}
In the above sample code, it requires 1 unit of time to calculate a+b and 1 unit of time to return the value. That means it takes a total of 2 units of time to complete its execution, and this does not change based on the input values of a and b. That means for all input values it requires the same amount of time, i.e., 2 units.
If any program requires a fixed amount of time for all input values, then its time complexity is said to be Constant Time Complexity.

Example 2
Consider the following piece of code...

int sum(int A[], int n)
{
   int sum = 0, i;
   for(i = 0; i < n; i++)
      sum = sum + A[i];
   return sum;
}
For the above code, the time complexity can be calculated as follows...

Line of code                 Cost    Repetitions    Total
int sum = 0, i;               1          1            1
for(i = 0; i < n; i++)        2         n+1          2n+2
   sum = sum + A[i];          2          n            2n
return sum;                   1          1            1
                                        Total:       4n+4

In the above calculation, Cost is the amount of computer time required for a single execution of the operations in a line, Repetitions is the number of times that line is executed, and Total is the amount of computer time required by that line over the whole execution. So the above code requires '4n + 4' units of computer time to complete the task. Here the exact time is not fixed; it changes based on the value of n. If we increase the value of n, then the time required also increases linearly. Since it takes '4n + 4' units of time to complete its execution, this is Linear Time Complexity.
If the amount of time required by an algorithm increases with the increase of the input value, then that time complexity is said to be Linear Time Complexity.
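The '4n + 4' count can also be checked by running the code. Below is a minimal sketch (not from the original notes) that instruments sum() with a counter variable named units, charging 1 unit per assignment, comparison, arithmetic operation and return, exactly as the model machine above assumes:

#include <stdio.h>

static long units = 0;   /* counts time units of the model machine */

int sum(int A[], int n)
{
    int sum = 0, i;
    units += 1;                      /* 1 unit: the assignment sum = 0                 */
    units += 1;                      /* 1 unit: the assignment i = 0 in the for header */
    for(i = 0; i < n; i++)
    {
        units += 1;                  /* 1 unit: the comparison i < n (true n times)    */
        sum = sum + A[i];
        units += 2;                  /* 2 units: the addition and the assignment       */
        units += 1;                  /* 1 unit: the increment i++                      */
    }
    units += 1;                      /* 1 unit: the final failing comparison i < n     */
    units += 1;                      /* 1 unit: the return                             */
    return sum;
}

int main()
{
    int A[5] = {1, 2, 3, 4, 5};
    int n = 5;
    sum(A, n);
    printf("counted %ld units, 4n + 4 = %d\n", units, 4 * n + 4);   /* both print 24 */
    return 0;
}

For n = 5 the counter ends at 24, which matches 4n + 4.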

What is Asymptotic Notation?
Whenever we want to perform the analysis of an algorithm, we need to calculate its complexity. But the calculated complexity does not give the exact amount of resources required. So instead of taking the exact amount of resources, we represent the complexity in a general form (notation) which expresses the basic nature of that algorithm. We use that general form (notation) for the analysis process.
Asymptotic notation of an algorithm is a mathematical representation of its complexity
Note:- In asymptotic notation, when we want to represent the complexity of an algorithm, we use only the most significant terms in the complexity of that algorithm and ignore least significant terms in the complexity of that algorithm (Here complexity may be Space Complexity or Time Complexity).

For example, consider the following time complexities of two algorithms...
Algorithm 1 : 5n^2 + 2n + 1
Algorithm 2 : 10n^2 + 8n + 3
Generally, when we analyze an algorithm, we consider the time complexity for larger values of the input data (i.e., the 'n' value). In the above two time complexities, for larger values of 'n', the term '2n + 1' in Algorithm 1 has less significance than the term '5n^2', and the term '8n + 3' in Algorithm 2 has less significance than the term '10n^2'. For larger values of 'n', the value of the most significant terms (5n^2 and 10n^2) is much larger than the value of the least significant terms (2n + 1 and 8n + 3). So for larger values of 'n', we ignore the least significant terms when representing the overall time required by an algorithm. In asymptotic notation, we use only the most significant terms to represent the time complexity of an algorithm. Majorly, we use THREE types of Asymptotic Notations and they are as follows...
Big - Oh (O)
Big - Omega (Ω)
Big - Theta (Θ)
Big - Oh Notation (O)
Big - Oh notation is used to define the upper bound of an algorithm in terms of Time Complexity. That means Big - Oh notation always indicates the maximum time required by an algorithm for all input values; in other words, Big - Oh notation describes the worst case of an algorithm's time complexity. Big - Oh Notation can be defined as follows...
Consider function f(n) as the time complexity of an algorithm and g(n) as its most significant term. If f(n) <= C g(n) for all n >= n0, for some constants C > 0 and n0 >= 1, then we can represent f(n) as O(g(n)).

f(n) = O(g(n))
Consider the following graph drawn with the values of f(n) and C g(n), where the input value (n) is on the X-axis and the time required is on the Y-axis. In this graph, after a particular input value n0, C g(n) is always greater than f(n), which indicates the algorithm's upper bound.
Example
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as O(g(n)), then it must satisfy f(n) <= C g(n) for some constants C > 0 and n0 >= 1.
f(n) <= C g(n) ⇒ 3n + 2 <= C n
The above condition is TRUE for C = 4 and all n >= 2. So by using Big - Oh notation we can represent the time complexity as follows...
3n + 2 = O(n)
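As a quick numerical sanity check of the chosen constants (this check is an illustration, not part of the original notes), the following C sketch tests f(n) <= C g(n) with C = 4 and n0 = 2 over a range of n values:

#include <stdio.h>

int main()
{
    int n, C = 4, n0 = 2, ok = 1;
    for(n = n0; n <= 1000; n++)
    {
        int f = 3 * n + 2;               /* f(n) = 3n + 2        */
        int g = n;                       /* g(n) = n             */
        if(f > C * g)                    /* must have f <= C g   */
            ok = 0;
    }
    printf("3n + 2 <= 4n for all tested n >= 2 : %s\n", ok ? "TRUE" : "FALSE");
    return 0;
}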
Big - Omega Notation (Ω)
Big - Omega notation is used to define the lower bound of an algorithm in terms of Time Complexity. That means Big - Omega notation always indicates the minimum time required by an algorithm for all input values; in other words, Big - Omega notation describes the best case of an algorithm's time complexity. Big - Omega Notation can be defined as follows...
Consider function f(n) as the time complexity of an algorithm and g(n) as its most significant term. If f(n) >= C g(n) for all n >= n0, for some constants C > 0 and n0 >= 1, then we can represent f(n) as Ω(g(n)).
f(n) = Ω(g(n))
Consider the following graph drawn with the values of f(n) and C g(n), where the input value (n) is on the X-axis and the time required is on the Y-axis. In this graph, after a particular input value n0, C g(n) is always less than f(n), which indicates the algorithm's lower bound.
Example
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Ω(g(n)), then it must satisfy f(n) >= C g(n) for some constants C > 0 and n0 >= 1.
f(n) >= C g(n) ⇒ 3n + 2 >= C n
The above condition is TRUE for C = 1 and all n >= 1. So by using Big - Omega notation we can represent the time complexity as follows...
3n + 2 = Ω(n)
Big - Theta Notation (Θ)
Big - Theta notation is used to define the tight bound of an algorithm in terms of Time Complexity. That means Big - Theta notation bounds the time required by an algorithm from both below and above for all sufficiently large input values; it is often loosely described as the average case of an algorithm's time complexity. Big - Theta Notation can be defined as follows...
Consider function f(n) as the time complexity of an algorithm and g(n) as its most significant term. If C1 g(n) <= f(n) <= C2 g(n) for all n >= n0, for some constants C1, C2 > 0 and n0 >= 1, then we can represent f(n) as Θ(g(n)).
f(n) = Θ(g(n))
Consider the following graph drawn with the values of f(n), C1 g(n) and C2 g(n), where the input value (n) is on the X-axis and the time required is on the Y-axis. In this graph, after a particular input value n0, C1 g(n) is always less than f(n) and C2 g(n) is always greater than f(n), which indicates the algorithm's tight bound.
Example
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Θ(g(n)), then it must satisfy C1 g(n) <= f(n) <= C2 g(n) for some constants C1, C2 > 0 and n0 >= 1.
C1 g(n) <= f(n) <= C2 g(n) ⇒ C1 n <= 3n + 2 <= C2 n
The above condition is TRUE for C1 = 1, C2 = 4 and all n >= 2. So by using Big - Theta notation we can represent the time complexity as follows...
3n + 2 = Θ(n)
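The Θ condition can be checked the same way; since its left inequality is the Ω condition and its right inequality is the O condition, one loop covers both bounds. A minimal sketch (an illustration, not from the notes) with C1 = 1, C2 = 4 and n0 = 2:

#include <stdio.h>

int main()
{
    int n, C1 = 1, C2 = 4, n0 = 2, ok = 1;
    for(n = n0; n <= 1000; n++)
    {
        int f = 3 * n + 2;                      /* f(n) = 3n + 2                        */
        if(!(C1 * n <= f && f <= C2 * n))       /* C1 g(n) <= f(n) <= C2 g(n), g(n) = n */
            ok = 0;
    }
    printf("1*n <= 3n + 2 <= 4*n for all tested n >= 2 : %s\n", ok ? "TRUE" : "FALSE");
    return 0;
}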

DIVIDE-AND-CONQUER
GENERAL METHOD:
                Given a function to compute on n inputs, the divide-and-conquer strategy suggests:
Divide : split the inputs into k distinct subsets, 1 < k ≤ n, yielding k sub-problems.
Conquer : these sub-problems must be solved, and then
Combine : a method must be found to combine the sub-solutions into a solution of the whole.
Note : If the sub-problems are still relatively large, then the divide-and-conquer strategy can possibly be reapplied.

                              We can write a control abstraction that mirrors the way an algorithm based on divide-and-conquer will look.

DAndC (given as the algorithm below) is initially invoked as DAndC(P), where

P is the problem to be solved.
Small(P) is a Boolean-valued function that determines whether the input size is small enough that the answer can be computed without splitting.
If this is so, the function S is invoked. Otherwise the problem P is divided into smaller sub-problems.
These sub-problems P1, P2, ..., Pk are solved by recursive applications of DAndC.
Combine is a function that determines the solution to P using the solutions to the k sub-problems.
If the size of P is n and the sizes of the k sub-problems are n1, n2, ..., nk, respectively, then the computing time of DAndC is described by the recurrence relation given after the algorithm.

Algorithm: control abstraction for divide-and-conquer

Algorithm DAndC(P)
{
    if Small(P) then return S(P);
    else
    {
        divide P into smaller instances P1, P2, ..., Pk, k >= 1;
        apply DAndC to each of these sub-problems;
        return Combine(DAndC(P1), DAndC(P2), ..., DAndC(Pk));
    }
}
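As a concrete illustration of this control abstraction (the function name maxDAndC below is illustrative and not from the notes), here is a minimal C sketch that finds the maximum of an array by divide-and-conquer: Small(P) is a one-element segment, S(P) simply returns that element, the divide step splits the segment at its midpoint, and Combine takes the larger of the two sub-solutions:

#include <stdio.h>

/* divide-and-conquer maximum: the array segment A[low..high] is the problem P */
int maxDAndC(int A[], int low, int high)
{
    if(low == high)                          /* Small(P): one element, solve directly */
        return A[low];

    int mid = (low + high) / 2;              /* divide P into two smaller instances   */
    int left  = maxDAndC(A, low, mid);       /* solve sub-problem P1                  */
    int right = maxDAndC(A, mid + 1, high);  /* solve sub-problem P2                  */

    return (left > right) ? left : right;    /* Combine the sub-solutions             */
}

int main()
{
    int A[] = {12, 7, 25, 3, 19, 8};
    int n = sizeof(A) / sizeof(A[0]);
    printf("max = %d\n", maxDAndC(A, 0, n - 1));   /* prints max = 25 */
    return 0;
}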
                
The complexity of many divide-and-conquer algorithms is given by recurrences of the form (general method)

    T(n) = T(1)                  n = 1
    T(n) = a T(n/b) + f(n)       n > 1

where a and b are known constants. We assume that T(1) is known and n is a power of b
(i.e., n = b^k).

Example:    
                     Consider the case in which a = 2 and b = 2.  Let T(1) = 2  and f(n) = n. 
We have

                                T(n) =  2T(n/2) + n
                                        =  2[2T(n/4) + n/2] + n
                                        =  4T(n/4) + 2n
                                        =  4[2T(n/8) + n/4] + 2n
                                        =  8T(n/8) + 3n
which is of the form 2^3 T(n/2^3) + 3n. In general, after k substitutions,

        T(n) = 2^k T(n/2^k) + kn

We know that n is a power of b, that is, n = b^k with b = 2, so n = 2^k. Applying log on both sides,

        log2 n = k

Substituting k = log2 n into the k-th term,

        T(n) = 2^(log2 n) T(n / 2^(log2 n)) + n log2 n
             = n T(n/n) + n log2 n
             = n T(1) + n log2 n
             = 2n + n log2 n

By the rule of Big-Oh (if f(n) is a polynomial of degree k, then f(n) is O(n^k), i.e., drop lower-order terms and constant factors),

        T(n) = 2n + n log2 n = O(n log n).
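The closed form can be sanity-checked by evaluating the recurrence directly. A minimal C sketch (assuming the same a = 2, b = 2, T(1) = 2 and f(n) = n as above) that compares T(n) with 2n + n log2 n for powers of 2:

#include <stdio.h>
#include <math.h>

/* T(n) = 2 T(n/2) + n for n > 1, with T(1) = 2 */
long T(long n)
{
    if(n == 1)
        return 2;
    return 2 * T(n / 2) + n;
}

int main()
{
    long n;
    for(n = 2; n <= 1024; n *= 2)
    {
        double closed = 2.0 * n + n * log2((double)n);   /* 2n + n log2 n */
        printf("n = %4ld  T(n) = %6ld  2n + n log2 n = %8.0f\n", n, T(n), closed);
    }
    return 0;
}

Both columns agree exactly, e.g. T(8) = 40 = 2*8 + 8*3.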
Analysis of QuickSort
Best Case: Each (recursive) Partition call splits the array into two halves. We get the familiar recurrence for the running time:
T(n) = 2T(n/2) + O(n) = O(n log n).
Worst Case: Partition always causes an unbalanced split (all elements going to one of the subarrays). The recurrence we get in this case is
T(n) = T(n - 1) + O(n),
which solves to T(n) = O(n^2).
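For reference, a minimal quicksort sketch in C using a Lomuto-style Partition (the notes do not fix a particular partition scheme, so this choice is an assumption). How evenly partition() splits each segment is exactly what decides between the two recurrences above:

#include <stdio.h>

/* Lomuto partition: uses A[high] as the pivot and returns its final position */
int partition(int A[], int low, int high)
{
    int pivot = A[high];
    int i = low - 1, j, t;
    for(j = low; j < high; j++)
    {
        if(A[j] <= pivot)
        {
            i++;
            t = A[i]; A[i] = A[j]; A[j] = t;     /* swap A[i] and A[j] */
        }
    }
    t = A[i + 1]; A[i + 1] = A[high]; A[high] = t;
    return i + 1;
}

void quicksort(int A[], int low, int high)
{
    if(low < high)
    {
        int p = partition(A, low, high);   /* pivot lands at index p   */
        quicksort(A, low, p - 1);          /* sort the left sub-array  */
        quicksort(A, p + 1, high);         /* sort the right sub-array */
    }
}

int main()
{
    int A[] = {9, 4, 7, 1, 8, 3};
    int n = sizeof(A) / sizeof(A[0]), i;
    quicksort(A, 0, n - 1);
    for(i = 0; i < n; i++)
        printf("%d ", A[i]);               /* prints 1 3 4 7 8 9 */
    printf("\n");
    return 0;
}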
