Python Algorithms

Greedy algorithms aim to make the optimal choice at each given moment. At every step the algorithm picks the best option available, without knowing the future, and it attempts to arrive at a globally optimal solution to the entire problem this way.

Why Are Greedy Algorithms Called Greedy?#

We call algorithms greedy when they utilise the greedy property. The greedy property is:


At that exact moment in time, what is the optimal choice to make?

Greedy algorithms are greedy. They do not look into the future to decide the global optimal solution. They are only concerned with the optimal solution locally. This means that the overall optimal solution may differ from the solution the algorithm chooses.

They never look backwards at what they’ve done to see if they could optimise globally. This is the main difference between Greedy and Dynamic Programming.


To be extra clear, one of the most Googled questions about greedy algorithms is:

“What problem-solving strategies don’t guarantee solutions but make efficient use of time?”

The answer is “Greedy algorithms”. They don’t guarantee solutions, but are very time efficient. However, in the next section we’ll learn that sometimes Greedy solutions give us the optimal solutions.

What Are Greedy Algorithms Used For?#

Greedy algorithms are quick. A lot faster than the two other alternatives (Divide & Conquer, and Dynamic Programming). They’re used because they’re fast.

For some problems, Greedy algorithms give the globally optimal solution every time. Examples covered later in this article include Dijkstra's shortest-path algorithm, Prim's minimum spanning tree algorithm, and the fractional knapsack problem.

These algorithms are Greedy, and their Greedy solution gives the optimal solution.

We’re going to explore greedy algorithms through examples and learn how it all works.

How Do I Create a Greedy Algorithm?#

Your algorithm needs to follow this property:

At that exact moment in time, what is the optimal choice to make?

And that’s it. There isn’t much to it. Greedy algorithms are easier to code than Divide & Conquer or Dynamic Programming.

Python Algorithms Book

Imagine you’re a vending machine. Someone gives you £1 and buys a drink for £0.70 (70p). There’s no 30p coin in pound sterling, so how do you calculate how much change to return?

For reference, the coin denominations used in this example are 1p, 2p, 5p, 10p, 20p, 50p and £1 (100p).

The greedy algorithm starts from the highest denomination and works backwards. Our algorithm starts at £1. £1 is more than 30p, so it can’t use it. The same goes for 50p. It reaches 20p. 20p < 30p, so it takes one 20p coin.

The algorithm still needs to return 10p of change. It tries 20p again, but 20p > 10p. It next goes to 10p. It chooses one 10p coin, and now the amount left to return is 0, so we stop the algorithm.

We return 1x20p and 1x10p.

This algorithm works well in real life. Let’s use another example; this time each denomination is listed next to how many of that coin are in the machine: (denomination, how many).

The algorithm is asked to return change of 30p again. 100p (£1) is too big, and the same goes for 50p. 20p works, so we pick 1x 20p. We now need to return 10p. The 20p coins have run out, so we move down one denomination.

10p has run out, so we move down 1.

We have 5p, so we choose 1x5p. We now need to return 5p. 5p has run out, so we move down one.

We choose 1 2p coin. We now need to return 3p. We choose another 2p coin. We now need to return 1p. We move down one.

We choose 1x 1p coin.

Our algorithm selected these coins to return as change: 1x 20p, 1x 5p, 2x 2p and 1x 1p, totalling 30p.

Let’s code something. First, we need to define the problem. We’ll start with the denominations.
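In Python, this could simply be a list in pence, smallest coin first (the ordering is an assumption, but it matches the indexing used in the walkthrough below):

```python
# UK coin denominations in pence, from smallest to largest
denominations = [1, 2, 5, 10, 20, 50, 100]
```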

Now onto the core function. Given the denominations and an amount of change to give, we want to return a list of how many of each coin we returned.

If our denominations list is as above, [6, 3, 0, 0, 0, 0, 0] represents taking 6 1p coins and 3 2p coins, but 0 of all other coins.

We create a list the same length as the denominations list and fill it with 0’s.

We want to loop backwards, from largest to smallest. reversed(x) reverses x and lets us loop backwards. enumerate means “for loop through this list, but keep the position in another variable”. In our example, when we start the loop, coin = 100 and pos = 6.

Our next step is choosing a coin for as long as we can use that coin. If we need to give change = 40, we want our algorithm to choose 20, then 20 again, until it can no longer use 20. We do this using a while loop.

While the coin can still fit into change, add that coin to our return list, toGiveBack and remove it from change.
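Putting these steps together, a minimal sketch of the whole function might look like this (give_change is an illustrative name; toGiveBack matches the list described above):

```python
def give_change(denominations, change):
    # One slot per denomination, all starting at 0
    toGiveBack = [0] * len(denominations)

    # Loop from the largest coin down to the smallest
    for pos, coin in reversed(list(enumerate(denominations))):
        # While this coin still fits into the remaining change,
        # take one and subtract it from what we still owe
        while coin <= change:
            toGiveBack[pos] += 1
            change -= coin

    return toGiveBack

print(give_change([1, 2, 5, 10, 20, 50, 100], 30))
# [0, 0, 0, 1, 1, 0, 0] -> 1x 10p and 1x 20p
```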

The runtime of this algorithm is dominated by the 2 loops, thus it is $O(n^2)$.

Is Greedy Optimal? Does Greedy Always Work?#

It is optimal locally, but sometimes it isn’t optimal globally. In the change giving algorithm, we can force a point at which it isn’t optimal globally.

The algorithm for doing this is:

  • Pick 3 denominations of coins: 1p, x, and a coin that is more than x but less than 2x.

We’ll pick 1, 15, 25.

  • Ask for change of 2 * second denomination (15)

We’ll ask for change of 30. Now, let’s see what our Greedy algorithm does.

It chooses 1x 25p and 5x 1p. The optimal solution is 2x 15p.

Our Greedy algorithm failed because it didn’t look at 15p. It looked at 25p and thought “yup, that fits. Let’s take it.”

It then looked at 15p and thought “that doesn’t fit, let’s move on”.
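Running the give_change sketch from earlier with these made-up denominations reproduces the failure:

```python
print(give_change([1, 15, 25], 30))  # [5, 0, 1] -> 5x 1p and 1x 25p: six coins
# The optimal answer would be [0, 2, 0] -> 2x 15p: only two coins
```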


This is an example of where Greedy Algorithms fail.

To get around this, you would either have to create a currency where this doesn’t work, brute-force the solution, or use Dynamic Programming.

Dijkstra’s Algorithm#

Dijkstra’s algorithm finds the shortest path from a node to every other node in the graph. In our example, we’ll be using a weighted directed graph: each edge has a direction, and each edge has a weight.

Dijkstra’s algorithm has many uses. It can be very useful within road networks where you need to find the fastest route to a place, among other routing applications.

The algorithm follows these rules:

  1. Every time we want to visit a new node, we will choose the node with the smallest known distance.
  2. Once we’ve moved to the node, we check each of its neighbouring nodes. We calculate the distance from the neighbouring nodes to the root node by summing the cost of the edges that lead to that new node.
  3. If the distance to a node is less than a known distance, we’ll update the shortest distance.

Our first step is to pick the starting node. Let’s choose A. All the distances start at infinity, as we don’t know their distance until we reach a node that knows the distance.

We mark off A on our unvisited nodes list. The distance from A to A is 0. The distance from A to B is 4. The distance from A to C is 2. We updated our distance listing on the right-hand side.

We pick the smallest edge where the vertex hasn’t been chosen. The smallest edge is A -> C, and we haven’t chosen C yet. We visit C.

Notice how we’re picking the smallest distance from our current node to a node we haven’t visited yet. We’re being greedy. Here, the greedy method is the global optimal solution.

We can get to B from C. We now need to pick a minimum: $\min(4, 2 + 1) = 3$. Since A -> C -> B is smaller than A -> B, we update B with this information. We then add in the distances from the other nodes we can now reach. Our next smallest vertex with a node we haven’t visited yet is B, with 3. We visit B. We do the same for B. Then we pick the smallest vertex we haven’t visited yet, D. We don’t update any of the distances this time. Our last node is then E. There are no updates again. To find the shortest path from A to the other nodes, we walk back through our graph.

We pick A first, C second, B third. If you need to create the shortest path from A to every other node as a graph, you can run this algorithm using a table on the right-hand side.

Dijkstra's Table

Node | Distance from A | Previous node
-----|-----------------|--------------
A    | 0               | N/A
B    | 3               | C
C    | 2               | A
D    | 5               | B
E    | 6               | B

Using this table it is easy to draw out the shortest distance from A to every other node in the graph:
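A common way to implement this in Python is with a priority queue from heapq (covered later on this page). The adjacency list below is an assumption reconstructed to be consistent with the table above, so treat the exact edges as illustrative:

```python
import heapq

def dijkstra(graph, start):
    # graph maps each node to a list of (neighbour, weight) pairs
    distances = {node: float('inf') for node in graph}
    previous = {node: None for node in graph}
    distances[start] = 0
    queue = [(0, start)]  # (known distance, node)

    while queue:
        dist, node = heapq.heappop(queue)
        if dist > distances[node]:
            continue  # stale queue entry; a shorter path was already found
        for neighbour, weight in graph[node]:
            new_dist = dist + weight
            if new_dist < distances[neighbour]:
                distances[neighbour] = new_dist
                previous[neighbour] = node
                heapq.heappush(queue, (new_dist, neighbour))

    return distances, previous

# Hypothetical edge weights, chosen to be consistent with the table above
graph = {
    'A': [('B', 4), ('C', 2)],
    'B': [('D', 2), ('E', 3)],
    'C': [('B', 1), ('D', 4), ('E', 5)],
    'D': [('E', 1)],
    'E': [],
}
distances, previous = dijkstra(graph, 'A')
print(distances)  # {'A': 0, 'B': 3, 'C': 2, 'D': 5, 'E': 6}
print(previous)   # {'A': None, 'B': 'C', 'C': 'A', 'D': 'B', 'E': 'B'}
```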

Prim’s Algorithm#

Prim’s algorithm is a greedy algorithm that finds a minimum spanning tree for a weighted undirected graph: a subset of the edges that connects every node while minimising the total edge weight.

With a small change to Dijkstra’s algorithm, we can build a new algorithm - Prim’s algorithm!

We informally describe the algorithm as:

  1. Create a new tree with a single vertex (chosen randomly)
  2. Of all the edges not yet in the new tree, find the minimum weighted edge and transfer it to the new tree
  3. Repeat step 2 until all vertices are in the tree

We have this graph.

Our next step is to pick an arbitrary node. We pick the node A. We then examine all the edges connecting A to other vertices. Prim’s algorithm is greedy. That means it picks the shortest edge that connects to an unvisited vertex.

In our example, it picks B. We now look at all nodes reachable from A and B. This is the distinction between Dijkstra’s and Prim’s. With Dijkstra’s, we’re looking for a path from one node to a certain other node (nodes that have not been visited). With Prim’s, we want the minimum spanning tree.

We have 3 edges with equal weights of 3. We pick one randomly. It is helpful to highlight our graph as we go along, because it makes it easier to create the minimum spanning tree. Now we look at all edges of A, B, and C. The shortest edge is C > E with a weight of 1. And we repeat: the edge B > E with a weight of 3 is now the smallest edge, but both of its vertices are already in our VISITED list, meaning we do not pick this edge. We instead choose C > F, as we have not visited F. The only node left is G, so let’s visit it. Note that if the edge weights are distinct, the minimum spanning tree is unique. We can add the edge weights to get the minimum spanning tree’s total edge weight:

$$2 + 3 + 3 + 1 + 6 + 9 = 24$$
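A sketch of Prim’s algorithm in the same heapq-based style; the graph below is a small hypothetical one, not an exact reconstruction of the example graph above:

```python
import heapq

def prim(graph, start):
    # graph maps node -> list of (weight, neighbour) pairs
    # (undirected: each edge is listed from both endpoints)
    visited = {start}
    edges = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(edges)
    mst = []  # chosen edges as (weight, u, v)

    while edges and len(visited) < len(graph):
        weight, u, v = heapq.heappop(edges)
        if v in visited:
            continue  # both endpoints already in the tree; skip this edge
        visited.add(v)
        mst.append((weight, u, v))
        for w, nxt in graph[v]:
            if nxt not in visited:
                heapq.heappush(edges, (w, v, nxt))

    return mst

# A small hypothetical undirected graph
graph = {
    'A': [(2, 'B'), (3, 'C')],
    'B': [(2, 'A'), (3, 'C'), (3, 'E')],
    'C': [(3, 'A'), (3, 'B'), (1, 'E'), (6, 'F')],
    'E': [(3, 'B'), (1, 'C'), (9, 'G')],
    'F': [(6, 'C')],
    'G': [(9, 'E')],
}
tree = prim(graph, 'A')
print(tree)                          # [(2, 'A', 'B'), (3, 'A', 'C'), (1, 'C', 'E'), (6, 'C', 'F'), (9, 'E', 'G')]
print(sum(w for w, _, _ in tree))    # 21 for this hypothetical graph
```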

Fractional Knapsack Problem Using Greedy Algorithm#

Imagine you are a thief. You break into the house of Judy Holliday - 1951 Oscar winner for Best Actress. Judy is a hoarder of gems. Judy’s house is lined to the brim with gems.

You brought with you a bag - a knapsack if you will. This bag can hold a total weight of 7. You happened to have a listing of Judy’s items, from some insurance paper. The items read as:

Judy's Items

Name     | Value | Weight
---------|-------|-------
Diamonds | 16    | 5
Francium | 3     | 1
Sapphire | 6     | 2
Emerald  | 2     | 1

The first step to solving the fractional knapsack problem is to calculate $\frac{value}{weight}$ for each item.

Judy's Items

Name     | Value | Weight | Value / weight
---------|-------|--------|---------------
Diamonds | 16    | 5      | 3.2
Francium | 3     | 1      | 3
Sapphire | 6     | 2      | 3
Emerald  | 2     | 1      | 2

And now we greedily select the largest ones. To do this, we can sort them according to $\frac{value}{weight}$ in descending order. Luckily for us, they are already sorted. The largest ratio is 3.2, so we take the Diamonds first.

Then we select Francium (I know it’s not a gem, but Judy is a bit strange 😉)

Now, we add Sapphire. But if we add Sapphire, our total weight will come to 8.

In the fractional knapsack problem, we can cut items up to take fractions of them. We have a weight of 1 left in the bag. Our sapphire is weight 2. We calculate the ratio of:

$$\frac{\text{weight of knapsack left}}{\text{weight of item}}$$

And then multiply this ratio by the value of the item to get how much value of that item we can take.

$$\frac{1}{2} \times 6 = 3$$

The greedy algorithm can optimally solve the fractional knapsack problem, but it cannot optimally solve the {0, 1} knapsack problem. In this problem instead of taking a fraction of an item, you either take it {1} or you don’t {0}. To solve this, you need to use Dynamic Programming.

The runtime for this algorithm is O(n log n). Calculating $\frac{value}{weight}$ is O(1). Our main step is sorting from largest $\frac{value}{weight}$, which takes O(n log n) time.
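As a rough sketch, the greedy fractional-knapsack approach applied to Judy’s items might look like this (fractional_knapsack and the tuple layout are illustrative choices):

```python
def fractional_knapsack(items, capacity):
    # items is a list of (name, value, weight) tuples
    # Sort by value/weight ratio, largest first
    items = sorted(items, key=lambda item: item[1] / item[2], reverse=True)
    total_value = 0.0
    for name, value, weight in items:
        if capacity <= 0:
            break
        if weight <= capacity:
            # Take the whole item
            total_value += value
            capacity -= weight
        else:
            # Take only the fraction that still fits
            total_value += value * (capacity / weight)
            capacity = 0
    return total_value

judys_items = [
    ('Diamonds', 16, 5),
    ('Francium', 3, 1),
    ('Sapphire', 6, 2),
    ('Emerald', 2, 1),
]
print(fractional_knapsack(judys_items, 7))  # 16 + 3 + 3 = 22.0
```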

Greedy vs Divide & Conquer vs Dynamic Programming#

Greedy vs Divide & Conquer vs Dynamic Programming

Greedy | Divide & Conquer | Dynamic Programming
-------|------------------|--------------------
Optimises by making the best choice at the moment | Optimises by breaking down a subproblem into simpler versions of itself and using multi-threading & recursion to solve | Same as Divide & Conquer, but optimises by caching the answers to each subproblem so as not to repeat the calculation twice
Doesn't always find the optimal solution, but is very fast | Always finds the optimal solution, but is slower than Greedy | Always finds the optimal solution, but could be pointless on small datasets
Requires almost no memory | Requires some memory to remember recursive calls | Requires a lot of memory for memoisation / tabulation


To learn more about Divide & Conquer and Dynamic Programming, check out these 2 posts I wrote:

Conclusion#

Greedy algorithms are very fast, but may not provide the optimal solution. They are also easier to code than their counterparts.

Source code: Lib/heapq.py

This module provides an implementation of the heap queue algorithm, also known as the priority queue algorithm.

Heaps are binary trees for which every parent node has a value less than or equal to any of its children. This implementation uses arrays for which heap[k] <= heap[2*k+1] and heap[k] <= heap[2*k+2] for all k, counting elements from zero. For the sake of comparison, non-existing elements are considered to be infinite. The interesting property of a heap is that its smallest element is always the root, heap[0].

The API below differs from textbook heap algorithms in two aspects: (a) We use zero-based indexing. This makes the relationship between the index for a node and the indexes for its children slightly less obvious, but is more suitable since Python uses zero-based indexing. (b) Our pop method returns the smallest item, not the largest (called a “min heap” in textbooks; a “max heap” is more common in texts because of its suitability for in-place sorting).

These two make it possible to view the heap as a regular Python list without surprises: heap[0] is the smallest item, and heap.sort() maintains the heap invariant!

To create a heap, use a list initialized to [], or you can transform a populated list into a heap via function heapify().

The following functions are provided:

heapq.heappush(heap, item)

Push the value item onto the heap, maintaining the heap invariant.

heapq.heappop(heap)

Pop and return the smallest item from the heap, maintaining the heap invariant. If the heap is empty, IndexError is raised. To access the smallest item without popping it, use heap[0].

heapq.heappushpop(heap, item)

Push item on the heap, then pop and return the smallest item from the heap. The combined action runs more efficiently than heappush() followed by a separate call to heappop().

heapq.heapify(x)

Transform list x into a heap, in-place, in linear time.

heapq.heapreplace(heap, item)

Pop and return the smallest item from the heap, and also push the new item. The heap size doesn’t change. If the heap is empty, IndexError is raised.


This one step operation is more efficient than a heappop() followed by heappush() and can be more appropriate when using a fixed-size heap. The pop/push combination always returns an element from the heap and replaces it with item.

The value returned may be larger than the item added. If that isn’t desired, consider using heappushpop() instead. Its push/pop combination returns the smaller of the two values, leaving the larger value on the heap.
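A tiny illustration of the difference between the two (the values are arbitrary):

```python
import heapq

h1 = [1, 3, 5]
h2 = [1, 3, 5]
heapq.heapify(h1)
heapq.heapify(h2)

# heapreplace pops first, then pushes: it can return an item larger than the one added
print(heapq.heapreplace(h1, 0), h1)   # 1 [0, 3, 5]
# heappushpop pushes first, then pops: it returns the smaller of the two values
print(heapq.heappushpop(h2, 0), h2)   # 0 [1, 3, 5]
```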

The module also offers three general purpose functions based on heaps.

heapq.merge(*iterables, key=None, reverse=False)

Merge multiple sorted inputs into a single sorted output (for example, merge timestamped entries from multiple log files). Returns an iterator over the sorted values.

Similar to sorted(itertools.chain(*iterables)) but returns an iterable, does not pull the data into memory all at once, and assumes that each of the input streams is already sorted (smallest to largest).

Has two optional arguments which must be specified as keyword arguments.

key specifies a key function of one argument that is used to extract a comparison key from each input element. The default value is None (compare the elements directly).

reverse is a boolean value. If set to True, then the input elements are merged as if each comparison were reversed. To achieve behavior similar to sorted(itertools.chain(*iterables), reverse=True), all iterables must be sorted from largest to smallest.

Changed in version 3.5: Added the optional key and reverse parameters.
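For example (the input lists are arbitrary):

```python
from heapq import merge

print(list(merge([1, 3, 5], [2, 4, 6])))                 # [1, 2, 3, 4, 5, 6]
print(list(merge([5, 3, 1], [6, 4, 2], reverse=True)))   # [6, 5, 4, 3, 2, 1]
```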

heapq.nlargest(n, iterable, key=None)

Return a list with the n largest elements from the dataset defined by iterable. key, if provided, specifies a function of one argument that is used to extract a comparison key from each element in iterable (for example, key=str.lower). Equivalent to: sorted(iterable, key=key, reverse=True)[:n].

heapq.nsmallest(n, iterable, key=None)

Return a list with the n smallest elements from the dataset defined by iterable. key, if provided, specifies a function of one argument that is used to extract a comparison key from each element in iterable (for example, key=str.lower). Equivalent to: sorted(iterable, key=key)[:n].
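For instance (the data is arbitrary):

```python
from heapq import nlargest, nsmallest

scores = [
    {'name': 'Ann', 'score': 91},
    {'name': 'Bob', 'score': 75},
    {'name': 'Cy', 'score': 84},
]
print(nlargest(2, scores, key=lambda s: s['score']))  # Ann (91) then Cy (84)
print(nsmallest(2, [7, 2, 9, 4]))                     # [2, 4]
```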

The latter two functions perform best for smaller values of n. For larger values, it is more efficient to use the sorted() function. Also, when n == 1, it is more efficient to use the built-in min() and max() functions. If repeated usage of these functions is required, consider turning the iterable into an actual heap.

Basic Examples¶

A heapsort can be implemented by pushing all values onto a heap and then popping off the smallest values one at a time:
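A sketch along the lines of the standard heapq example:

```python
from heapq import heappush, heappop

def heapsort(iterable):
    h = []
    for value in iterable:
        heappush(h, value)                       # build the heap one push at a time
    return [heappop(h) for _ in range(len(h))]   # repeatedly pop the smallest item

print(heapsort([1, 3, 5, 7, 9, 2, 4, 6, 8, 0]))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```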

This is similar to sorted(iterable), but unlike sorted(), this implementation is not stable.

Heap elements can be tuples. This is useful for assigning comparison values (such as task priorities) alongside the main record being tracked:
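For example, pairing a priority number with a task description (the tasks here are illustrative):

```python
from heapq import heappush, heappop

h = []
heappush(h, (5, 'write code'))
heappush(h, (7, 'release product'))
heappush(h, (1, 'write spec'))
heappush(h, (3, 'create tests'))
print(heappop(h))  # (1, 'write spec') -- the lowest priority number comes out first
```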


Priority Queue Implementation Notes¶

A priority queue is a common use for a heap, and it presents several implementation challenges:

  • Sort stability: how do you get two tasks with equal priorities to be returned in the order they were originally added?

  • Tuple comparison breaks for (priority, task) pairs if the priorities are equal and the tasks do not have a default comparison order.

  • If the priority of a task changes, how do you move it to a new position in the heap?

  • Or if a pending task needs to be deleted, how do you find it and remove it from the queue?

A solution to the first two challenges is to store entries as a 3-element list including the priority, an entry count, and the task. The entry count serves as a tie-breaker so that two tasks with the same priority are returned in the order they were added. And since no two entry counts are the same, the tuple comparison will never attempt to directly compare two tasks.

Another solution to the problem of non-comparable tasks is to create a wrapper class that ignores the task item and only compares the priority field:
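One way to write such a wrapper, along the lines of the dataclass-based recipe in the standard library documentation:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass(order=True)
class PrioritizedItem:
    priority: int
    item: Any = field(compare=False)  # excluded from comparisons, so tasks never get compared
```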

The remaining challenges revolve around finding a pending task and making changes to its priority or removing it entirely. Finding a task can be done with a dictionary pointing to an entry in the queue.

Removing the entry or changing its priority is more difficult because it would break the heap structure invariants. So, a possible solution is to mark the entry as removed and add a new entry with the revised priority:
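Putting the pieces together, a sketch in the spirit of the standard library's priority-queue recipe (add_task, remove_task and pop_task are the helper names used there):

```python
from heapq import heappush, heappop
import itertools

pq = []                         # list of entries arranged in a heap
entry_finder = {}               # mapping of tasks to entries
REMOVED = '<removed-task>'      # placeholder for a removed task
counter = itertools.count()     # unique sequence count, used as a tie-breaker

def add_task(task, priority=0):
    """Add a new task or update the priority of an existing task."""
    if task in entry_finder:
        remove_task(task)
    count = next(counter)
    entry = [priority, count, task]
    entry_finder[task] = entry
    heappush(pq, entry)

def remove_task(task):
    """Mark an existing task as REMOVED. Raise KeyError if not found."""
    entry = entry_finder.pop(task)
    entry[-1] = REMOVED

def pop_task():
    """Remove and return the lowest-priority task. Raise KeyError if empty."""
    while pq:
        priority, count, task = heappop(pq)
        if task is not REMOVED:
            del entry_finder[task]
            return task
    raise KeyError('pop from an empty priority queue')
```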

Theory¶

Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for all k, counting elements from 0. For the sake of comparison, non-existing elements are considered to be infinite. The interesting property of a heap is that a[0] is always its smallest element.

The strange invariant above is meant to be an efficient memory representation for a tournament. The numbers below are k, not a[k]:

In the tree above, each cell k is topping 2*k+1 and 2*k+2. In a usual binary tournament we see in sports, each cell is the winner over the two cells it tops, and we can trace the winner down the tree to see all opponents s/he had. However, in many computer applications of such tournaments, we do not need to trace the history of a winner. To be more memory efficient, when a winner is promoted, we try to replace it by something else at a lower level, and the rule becomes that a cell and the two cells it tops contain three different items, but the top cell “wins” over the two topped cells.

If this heap invariant is protected at all times, index 0 is clearly the overall winner. The simplest algorithmic way to remove it and find the “next” winner is to move some loser (let’s say cell 30 in the diagram above) into the 0 position, and then percolate this new 0 down the tree, exchanging values, until the invariant is re-established. This is clearly logarithmic on the total number of items in the tree. By iterating over all items, you get an O(n log n) sort.

A nice feature of this sort is that you can efficiently insert new items while the sort is going on, provided that the inserted items are not “better” than the last 0’th element you extracted. This is especially useful in simulation contexts, where the tree holds all incoming events, and the “win” condition means the smallest scheduled time. When an event schedules other events for execution, they are scheduled into the future, so they can easily go into the heap. So, a heap is a good structure for implementing schedulers (this is what I used for my MIDI sequencer :-).

Various structures for implementing schedulers have been extensively studied, and heaps are good for this, as they are reasonably speedy, the speed is almost constant, and the worst case is not much different than the average case. However, there are other representations which are more efficient overall, yet the worst cases might be terrible.

Heaps are also very useful in big disk sorts. You most probably all know that a big sort implies producing “runs” (which are pre-sorted sequences, whose size is usually related to the amount of CPU memory), followed by merging passes for these runs, which merging is often very cleverly organised [1]. It is very important that the initial sort produces the longest runs possible. Tournaments are a good way to achieve that. If, using all the memory available to hold a tournament, you replace and percolate items that happen to fit the current run, you’ll produce runs which are twice the size of the memory for random input, and much better for input fuzzily ordered.

Moreover, if you output the 0’th item on disk and get an input which may not fit in the current tournament (because the value “wins” over the last output value), it cannot fit in the heap, so the size of the heap decreases. The freed memory could be cleverly reused immediately for progressively building a second heap, which grows at exactly the same rate the first heap is melting. When the first heap completely vanishes, you switch heaps and start a new run. Clever and quite effective!

In a word, heaps are useful memory structures to know. I use them in a few applications, and I think it is good to keep a ‘heap’ module around. :-)


Footnotes

[1]


The disk balancing algorithms which are current, nowadays, are more annoying than clever, and this is a consequence of the seeking capabilities of the disks. On devices which cannot seek, like big tape drives, the story was quite different, and one had to be very clever to ensure (far in advance) that each tape movement will be the most effective possible (that is, will best participate at “progressing” the merge). Some tapes were even able to read backwards, and this was also used to avoid the rewinding time. Believe me, real good tape sorts were quite spectacular to watch! From all times, sorting has always been a Great Art! :-)