System Design & Architecture

Why Finding “Nearby” Is Harder Than You Think—And How Geohashes Solve It

Suppose you are building a location-based service like Yelp or DoorDash. A customer wants to find restaurants near them that match certain preferences. Their device provides a GPS location (latitude, longitude), and your app needs to quickly retrieve relevant restaurants in their vicinity.

But how do you efficiently query locations in a database? Traditional indexing methods, such as B-Trees, are optimized for sorting and searching one-dimensional data (e.g., numbers or strings). However, geographic locations exist in two dimensions, making proximity-based searches challenging.

A naive approach—scanning all locations and filtering those within a given radius—is inefficient at scale. To handle this, modern applications use geospatial indexes, which structure location data in a way that enables efficient retrieval of nearby points.

One such indexing method is geohashing, which encodes latitude and longitude into a compact, searchable format that allows for efficient range queries and neighbor lookups. In this article, we’ll explore how geohashing works, how to encode and decode locations, and how it is used in real-world applications like ride-sharing and food delivery services.

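To make this concrete, here is a minimal sketch of the standard geohash encoding in Python. The base32 alphabet is the conventional geohash one; the precision and the test coordinates are just illustrative, and production code would typically reach for an existing library rather than hand-rolling this:

```python
# A minimal geohash encoder: interleave longitude/latitude bits,
# then map every 5 bits to the standard geohash base32 alphabet.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def encode(lat: float, lon: float, precision: int = 6) -> str:
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    chars, bits, bit_count, even = [], 0, 0, True
    while len(chars) < precision:
        rng, value = (lon_range, lon) if even else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        if value >= mid:
            bits = (bits << 1) | 1
            rng[0] = mid            # keep the upper half
        else:
            bits = bits << 1
            rng[1] = mid            # keep the lower half
        even = not even
        bit_count += 1
        if bit_count == 5:          # 5 bits -> one base32 character
            chars.append(BASE32[bits])
            bits = bit_count = 0
    return "".join(chars)

# Nearby points share a common prefix, so an ordinary B-Tree index
# can answer "what's near me?" as a string prefix/range query.
print(encode(37.7749, -122.4194))   # San Francisco -> "9q8yyk"
```
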
Read more →

Data Structures & Algorithms

The Linked Hashmap Blueprint

An illustration of a LinkedHashMap showing a hash function, an array/vector holding a doubly linked list, and the map/dictionary state.
Hash maps (or hash tables) are foundational implementations of the dictionary data structure, natively supported by most high-level programming languages. They provide an efficient way to store and retrieve records by unique keys, and are widely used in scenarios where random access and fast lookups are required.

For instance, consider a pharmacy like CVS or Walgreens, which uses patients’ Social Security Numbers (SSNs)—a nine-digit unique identifier—to manage patient information. Not all patients visit the pharmacy regularly, so it would be inefficient to store data in a large array indexed by SSNs. Instead, dictionaries allow us to store, retrieve, or delete patient information efficiently, even when the SSNs are sparsely distributed.

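As a toy sketch of that example, a Python dict keyed by SSN handles sparse identifiers without allocating a billion-slot array; since Python 3.7, dicts also preserve insertion order, much like the linked structure the article builds. The SSNs and names below are made up:

```python
# Keying records by SSN in a dict avoids a huge, mostly-empty array
# indexed by nine-digit numbers.
patients: dict[str, str] = {}

patients["123-45-6789"] = "A. Smith"    # O(1) average insert
patients["987-65-4321"] = "B. Jones"

print(patients.get("123-45-6789"))      # O(1) average lookup -> "A. Smith"
del patients["987-65-4321"]             # O(1) average delete

# Insertion order is remembered, which is what the doubly linked
# list in a LinkedHashMap provides on top of a plain hash table.
print(list(patients))                   # ["123-45-6789"]
```
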
Read more →
Programming Patterns & Languages

Promises in Python

What Are Promises?

In JavaScript, Promises are objects that represent the eventual completion (or failure) of an asynchronous operation and its resulting value. They provide a powerful way to manage asynchronous code, enabling developers to write cleaner and more maintainable logic.

Promises allow us to associate handlers for both the success and failure of an asynchronous operation. By treating asynchronous code similarly to synchronous code, they reduce the complexity and improve the readability of workflows that would otherwise be riddled with convoluted callbacks—commonly referred to as “callback hell.”

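As a rough Python analogue of this handler pattern, asyncio tasks let you attach a completion callback that branches on success or failure, much like `then`/`catch`. The `fetch_user` coroutine below is a hypothetical stand-in for any asynchronous operation:

```python
import asyncio

async def fetch_user(user_id: int) -> str:
    await asyncio.sleep(0.1)           # stand-in for network I/O
    if user_id < 0:
        raise ValueError("no such user")
    return f"user-{user_id}"

async def main() -> None:
    task = asyncio.create_task(fetch_user(42))  # promise-like object

    def on_done(fut: asyncio.Future) -> None:
        if fut.exception():            # the "failure" handler branch
            print("rejected:", fut.exception())
        else:                          # the "success" handler branch
            print("resolved:", fut.result())

    task.add_done_callback(on_done)
    await task

asyncio.run(main())                    # prints: resolved: user-42
```
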
Read more →
Programming Patterns & Languages

From Jekyll to Hugo

I recently moved my blog from Jekyll, a Ruby-based static site generator, to Hugo, a popular alternative built in Golang. In this article, I’ll walk through the rationale behind this migration, share the steps I took, and include a few custom code snippets I created along the way. This is not intended as a tutorial on setting up a blog with Hugo — there are plenty of excellent videos and documentation on Hugo’s site for that. Instead, I’ll focus on my experience, insights, and the lessons learned that may be helpful for anyone considering a similar transition.

You can find an older version of my site at the archive.

Read more →
Data Structures & Algorithms

Understanding Skip Lists

Suppose you have a sorted collection of elements and need to perform add, delete, and search operations efficiently. Skip lists offer an efficient, probabilistic approach to these operations, achieving an average time complexity of \(O(\log{n})\) for search, insertion, and deletion. While other data structures, such as red-black trees and AVL trees, can provide the same \(O(\log{n})\) efficiency guarantees in both the average and worst cases, skip lists have the advantage of being simpler to implement and understand.

With basic data structures like sorted arrays, you can search for elements in \(O(\log{n})\) time using binary search. However, insertion and deletion require shifting elements, leading to \(O(n)\) time complexity for these operations. Conversely, linked lists allow efficient \(O(1)\) insertions and deletions once the target location is found, but finding that location requires \(O(n)\) time in the worst case.

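A quick sketch of that trade-off using Python's standard bisect module (the sample data is arbitrary):

```python
import bisect

data = [2, 5, 8, 13]
idx = bisect.bisect_left(data, 8)   # binary search: O(log n)
print(idx, data[idx] == 8)          # 2 True
bisect.insort(data, 6)              # stays sorted, but shifts: O(n)
print(data)                         # [2, 5, 6, 8, 13]
```
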
How Skip Lists Improve Efficiency

So, how do skip lists achieve \(O(\log{n})\) efficiency for all three operations? They do it by introducing multiple levels, each serving as an “express lane” that lets you skip over sections of the list. The highest levels contain few nodes, so you can make large jumps, while the lowest level contains every node, allowing precise positioning when needed. You traverse quickly along the express lanes and drop down to a lower level whenever a finer-grained search is required.

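The sketch below illustrates the idea in Python, with a fixed maximum level and coin-flip promotion; it is a simplified illustration rather than the article's full implementation (deletion is omitted):

```python
import random

class Node:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * (level + 1)   # one pointer per level

class SkipList:
    MAX_LEVEL = 4

    def __init__(self):
        self.head = Node(None, self.MAX_LEVEL)
        self.level = 0

    def _random_level(self):
        lvl = 0
        while random.random() < 0.5 and lvl < self.MAX_LEVEL:
            lvl += 1                          # coin flips pick a node's height
        return lvl

    def insert(self, key):
        update = [self.head] * (self.MAX_LEVEL + 1)
        node = self.head
        for i in range(self.level, -1, -1):   # descend from the top lane
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = Node(key, lvl)
        for i in range(lvl + 1):              # splice into each chosen level
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def search(self, key):
        node = self.head
        for i in range(self.level, -1, -1):   # ride the express lane as far
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]        # as it goes, then drop down
        node = node.forward[0]
        return node is not None and node.key == key

sl = SkipList()
for k in (3, 7, 9, 12):
    sl.insert(k)
print(sl.search(9), sl.search(8))             # True False
```
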
Read more →