The idea behind this method is the insight that it is quicker to sort two small lists and then merge them together than to sort one big list in the first place.
It is quite an old algorithm, having been developed in 1945 by John von Neumann.
Merge sort can handle data being held on very slow sequential devices such as tape drives.
- If the list contains one item or none, then it is already sorted.
- If the list contains more than one item, divide it into two roughly equal parts. (An odd-length list cannot be divided exactly in half.)
- Keep dividing the sub-lists until each one contains only one item.
- Now merge the sub-lists back into lists twice their size, sorting the items into order as you go.
- Keep merging the sub-lists until the full list is complete once again.
So the idea is to keep dividing the list and then merge the items back again.
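The steps above can be sketched in Python. This is a minimal illustrative version (the function and variable names are our own choices, not part of any standard):

```python
def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        # "<=" takes from the left list on ties, which keeps the sort stable
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One list is now exhausted; append the remainder of the other
    result.extend(left[i:])
    result.extend(right[j:])
    return result

def merge_sort(items):
    """Sort a list by splitting it, sorting each half, then merging."""
    if len(items) <= 1:          # a list of 0 or 1 items is already sorted
        return items
    mid = len(items) // 2        # an odd-length list splits roughly in half
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    return merge(left, right)

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```

Notice that the code mirrors the bullet points exactly: the base case handles lists of one item or none, the recursive calls keep dividing, and `merge` builds the sorted result back up.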
A picture of merge sort in action is shown below.
Advantages of merge sort
It is excellent for sorting slow-access data, e.g. on a tape drive or hard disk.
It is excellent for sorting data that is normally accessed sequentially, e.g. linked lists, tape drives, hard disks, or online data arriving one item at a time.
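Merge sort suits sequential data because the merge step only ever asks for the *next* item from each input, never an item in the middle. A sketch of that idea, using Python iterators as stand-ins for two tape drives (the names here are our own, and the `None` sentinel assumes the data itself never contains `None`):

```python
def merge_streams(a, b):
    """Merge two sorted streams, reading each one strictly forwards."""
    a, b = iter(a), iter(b)
    item_a = next(a, None)   # None signals an exhausted stream
    item_b = next(b, None)
    while item_a is not None and item_b is not None:
        if item_a <= item_b:
            yield item_a
            item_a = next(a, None)
        else:
            yield item_b
            item_b = next(b, None)
    # Drain whichever stream still has items left
    while item_a is not None:
        yield item_a
        item_a = next(a, None)
    while item_b is not None:
        yield item_b
        item_b = next(b, None)

tape_1 = iter([1, 4, 7])   # stand-ins for two sequential devices
tape_2 = iter([2, 3, 9])
print(list(merge_streams(tape_1, tape_2)))  # [1, 2, 3, 4, 7, 9]
```

Python's standard library offers the same idea ready-made as `heapq.merge`, which merges any number of sorted iterables.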
Advantages over Quick sort
It is better at handling sequentially-accessed lists.
If two equal-valued items are in the list, their relative order is preserved (merge sort is a "stable" sort). For example, if item A = "cat" and item C = "cat", the sorted list will have A before C. Quick sort does not guarantee this order - the result could be AC or CA.
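Stability can be demonstrated with Python's built-in sort, which is guaranteed stable (it uses Timsort, a merge-based algorithm). The labelled items below follow the A/C example above:

```python
# Items labelled A, B, C; A and C have the same value "cat"
items = [("A", "cat"), ("B", "ant"), ("C", "cat")]

# Sort by the word only, so A and C compare as equal
by_word = sorted(items, key=lambda pair: pair[1])

# A stable sort keeps A before C, as they appeared in the input
print(by_word)  # [('B', 'ant'), ('A', 'cat'), ('C', 'cat')]
```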
Disadvantages of merge sort
In many implementations, if the list is N items long, the sort needs memory space for 2 x N items.
If recursion is used to code the algorithm, it uses about twice as much stack memory as quick sort. On the other hand, it is not difficult to code using iteration rather than recursion, which avoids this memory penalty.
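The iterative version mentioned above is usually written "bottom-up": instead of recursively splitting, it merges adjacent runs of size 1, then 2, then 4, and so on. A minimal sketch (again with our own names):

```python
def merge_sort_iterative(items):
    """Bottom-up merge sort: no recursion, so no call-stack growth."""
    items = list(items)            # work on a copy of the input
    n = len(items)
    width = 1                      # current run size: 1, 2, 4, 8, ...
    while width < n:
        for start in range(0, n, 2 * width):
            mid = min(start + width, n)
            end = min(start + 2 * width, n)
            # Merge the two adjacent runs items[start:mid] and items[mid:end]
            left, right = items[start:mid], items[mid:end]
            i = j = 0
            k = start
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    items[k] = left[i]
                    i += 1
                else:
                    items[k] = right[j]
                    j += 1
                k += 1
            items[k:end] = left[i:] + right[j:]   # copy any leftovers
        width *= 2                 # runs are now twice as long
    return items

print(merge_sort_iterative([5, 1, 4, 2, 8, 0, 2]))  # [0, 1, 2, 2, 4, 5, 8]
```

This version still uses temporary lists during each merge, so it addresses the stack-memory point rather than the 2 x N working-space point.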
Quick sort is considered the fastest method on most types of lists.
Challenge: see if you can find out one extra fact on this topic that we haven't already told you.