We've reached the topic of sorting algorithms and their efficiency. Factors in our code, such as nested for/while loops, can increase the run time of a function. To represent the efficiency of a function, we've introduced Big O notation, which describes the performance/complexity of an algorithm. For example, an algorithm that is O(1) always finishes in roughly the same amount of time no matter the input size. O(n^2) means the algorithm's running time grows proportionally to the square of the size of the input. The same goes for O(n^3), O(n^4), etc.
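To make that concrete, here's a small sketch of my own (not from lecture) contrasting an O(n^2) nested loop with an O(1) operation:

```python
def contains_duplicate(lst):
    """O(n^2): the nested loops compare every pair of elements."""
    for i in range(len(lst)):
        for j in range(i + 1, len(lst)):
            if lst[i] == lst[j]:
                return True
    return False

def first_element(lst):
    """O(1): takes the same time no matter how long lst is."""
    return lst[0]
```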
Some sorting algorithms introduced this week include selection sort, merge sort, and quick sort. Quick sort's algorithm works in the following way:
1. Pick a random "pivot" element in the list.
2. Compare each element of the list to the pivot: if it is less than the pivot, it goes to the left side; if it is greater, it goes to the right side.
3. Quick sort is then called recursively on both the left and right sides.
4. Once both sides are sorted, combine them as left side + pivot + right side.
The run time of quick sort is O(n log n)/O(n log n)/O(n^2) for the best/average/worst cases respectively.
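Here's a rough Python sketch of those steps (my own simplified version, not the one from class):

```python
import random

def quick_sort(lst):
    """Return a new sorted list using the quick sort steps above."""
    if len(lst) <= 1:                         # base case: nothing to sort
        return lst
    pivot = random.choice(lst)                # 1. pick a random pivot
    left = [x for x in lst if x < pivot]      # 2. elements less than the pivot
    equal = [x for x in lst if x == pivot]
    right = [x for x in lst if x > pivot]     #    elements greater than the pivot
    # 3./4. sort each side recursively, then combine left + pivot + right
    return quick_sort(left) + equal + quick_sort(right)
```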
Merge sort divides the list in half, recursively sorts each half, and then merges the two sorted halves back together.
The run time of merge sort is O(n log n) for the best/average/worst cases.
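A rough sketch of merge sort in Python (again my own version, not the code from lecture):

```python
def merge_sort(lst):
    """Return a new sorted list by splitting, sorting, and merging."""
    if len(lst) <= 1:
        return lst
    mid = len(lst) // 2
    left = merge_sort(lst[:mid])      # sort the first half
    right = merge_sort(lst[mid:])     # sort the second half
    return merge(left, right)         # merge the two sorted halves

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    return result + left[i:] + right[j:]
```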
Selection sort starts at the beginning of the list, finds the smallest remaining value, and swaps it with List[i]. This continues until the end of the list. The run time of selection sort is O(n^2) for the best/average/worst cases.
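A short sketch of selection sort (my own version) that sorts the list in place:

```python
def selection_sort(lst):
    """Sort lst in place by repeatedly moving the smallest remaining value forward."""
    for i in range(len(lst)):
        # find the index of the smallest value from position i onward
        smallest = i
        for j in range(i + 1, len(lst)):
            if lst[j] < lst[smallest]:
                smallest = j
        # swap it into position i
        lst[i], lst[smallest] = lst[smallest], lst[i]
```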
The topics of Big O and algorithm analysis tie in nicely with what I've been learning in CSC 165h.