Taming the Chaos

Finding, Using, Doing

Your package’s journey starts long before you click “Buy Now.”

Somewhere in a warehouse the size of a small town, your item sits among millions of others—and yet it can be located in seconds. Workers and robots don’t browse aisles; they’re guided by a system that keeps products just organized enough. The best place for a phone charger might be next to dog toys and paper towels, not because that makes intuitive sense, but because it makes the math work.

High-demand items live close to picker stations so the things people reach for most often are physically closer, shaving seconds that add up to hours saved. And across the building, a quiet choreography is already underway: conveyor belts timing their merges, robots selecting optimal routes, and trucks being loaded on schedules tight enough that missing a window could ripple across half the country.

Our “simple” order just touched the same trio of principles we’ll be considering today, from Algorithms to Live By: Sorting, Caching, and Scheduling. Amazon, like any massive online retailer, has scaled its systems up to millions of items and impossible deadlines.

What about at home? We sort our laundry, keep the TV remote on the coffee table, and decide whether to do the dishes, answer an email, or finish the report first. That’s sorting, caching, and scheduling on the home front. Nothing new here! They’ve been around forever.

Here’s the thing to remember: sorting, caching, and scheduling are not the work that needs to be done…they’re the meta-work. They are work, but not the work.

Sorting is not the goal. Finding is.
Caching is not the goal. Using is.
Scheduling is not the goal. Doing is.

Computer science spent half a century proving what warehouse workers and parents already knew: the optimal amount of meta-work is shockingly small. While setting up the right system is important, the best systems feel a little chaotic. The winning move is almost always to stop one level sooner than feels respectable.

Here is the heresy, stated plainly:

  • Sorting exists only so you can stop searching. 
  • Caching exists only so you can start using what you need. 
  • Scheduling exists only so you can finish the right task. 

Anything beyond that minimum is vanity, procrastination, or fear wearing a productivity mask. To be fair, these bold assertions aren’t stated in Algorithms to Live By; the book focuses more on the history and details of specific strategies and algorithms. But I thought cutting through to the “why bother in the first place” could provide the incentive to understand each area a bit better. If you want to improve finding, using, or doing the right things, you know what area to address!

Sorting: Paying Upfront So the Future Is Faster

We sort because we are willing to pay a steep one-time cost today to buy lightning-fast retrieval tomorrow. Every act of sorting is an investment in future speed, mental calm, and the ability to scale without chaos or collapse. Computers do it with data, but humans have always done it with everything else: possessions, relationships, opportunities, even people. We sort books on shelves, clothes in closets, emails into folders, friends into inner circles, employees into performance tiers, athletes into starting lineups, and entire nations into mental rankings of power or prosperity. The underlying motive is identical—turn the slow, expensive process of relative judgment (“Is this better than that?”) into the instant certainty of absolute position (“It belongs here”).

This principle scales from the trivial to the foundational. The modern world arguably began with an act of sorting: Herman Hollerith’s punch-card tabulating machine sorted the 1890 U.S. Census so efficiently that it won the contract, birthed IBM, and launched the information age. Google is less a search engine than history’s largest sorting engine; it shoulders the astronomical upfront cost of crawling, indexing, and ranking the entire web so that you pay almost nothing—0.2 seconds and one click—when you need an answer. Companies pre-sort résumés into “yes/maybe/no” piles, and societies pre-sort individuals by credentials, net worth, or social capital, all for the same reason: future decisions must be fast when stakes are high or volume is large.

We go so far as to invent measurement systems precisely to make sorting cheaper and more reliable:

  • countries are sorted by GDP or HDI;
  • colors by RGB or HSV values;
  • professional teams by wins/losses or point differentials;
  • athletes by goals, assists, RBI, ERA, touchdowns, or advanced metrics like WAR and PER;
  • startups by traction, revenue, or founder pedigree;
  • potential partners by trustworthiness, competence, and alignment.

Even the subtle, abstract hierarchies of status and respect are quietly sorted by signals—titles, follower counts, invitations to exclusive rooms—so that we can navigate social worlds without re-evaluating every person from scratch every time. Whenever the same comparison must be made repeatedly or when the cost of a wrong decision is high, we create a metric, a leaderboard, or a category system that turns fuzzy relative judgments into crisp, reusable order.

Not all sorting is equal, and we intuitively adjust effort to context. But let’s be explicit about how best to use sorting to our benefit.

Ask three questions before you touch a single item, file, email, relationship, or hiring decision:

  1. How often will I (or someone else) need to retrieve or compare things in this set? 
  2. How bad is the damage if I pick the wrong one or decide too slowly? 
  3. How big is the collection now, and how fast is it growing?

Armed with those answers we can choose our sorting strategy from the four levels below.

Level 1 – Don’t sort at all
Use when: retrieval is rare AND stakes are trivial.
Examples: the box of old cables in the garage, the “maybe someday” bookmarks folder, sentimental keepsakes, maybe even your bookshelf. The cost of sorting exceeds the future benefit forever. Chaos is cheaper.
Strategy: relax.

Level 2 – Loose buckets (2–5 coarse categories)
Use when: retrieval is occasional OR stakes are low-to-medium.
Examples: “Inbox / Action / Archive,” “Wear Soon / Off-season / Donate,” “Hot / Warm / Cold leads.” Almost zero maintenance, instantly understandable by anyone (including future you on a bad day).
Strategy: group “like” items into roughly equal-sized buckets.

Level 3 – Clean ordinal ranking (simple order, no numbers)
Use when: retrieval is frequent OR stakes are medium-high, but differences in degree don’t matter.
Examples: to-do list ordered by true priority, music playlists, ranked backlog in a startup, emergency contact list.
Strategy: Pairwise comparison (“Does this go before or after that?”) is shockingly cheap and almost always enough.

Level 4 – High-fidelity cardinal sort (actual scores, metrics, sub-categories)
Use only when BOTH of these are true:

  • Retrieval is extremely frequent or automated (hundreds/thousands of times per day)
  • The magnitude of difference actually changes decisions (e.g., investment models, medical triage scores, ad bidding, GDP comparisons, fantasy-sports drafting) 

Examples: GDP, Google’s PageRank, a professional sports draft board, or your company’s compensation bands. It is expensive for a reason; reserve it for the tiny handful of domains where ratios, thresholds, or precise gaps matter.
Strategy: this is where the researched algorithms come in (Bubble Sort, Insertion Sort, Merge Sort). Links will follow, but most software has very efficient sorting built in for you.
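If you’re curious what one of those algorithms actually looks like, here is a minimal merge sort sketch in Python. It’s a toy illustration, not production code; in practice you’d lean on your language’s built-in sort.

```python
def merge_sort(items):
    """Sort a list by splitting it, sorting each half, and merging."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    # Merge: repeatedly take the smaller front element of the two halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```

The trick is that merging two already-sorted halves is cheap, which is what keeps the total work near n log n.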

Over-Sorting

Without doubt, sorting feels like progress and control, so the default human tendency is to jump to Level 4 too soon. The cost of sorting grows super-linearly (n log n in theory, often worse in psychology and maintenance). When in doubt, stay one level coarser than your perfectionist brain wants. Sorting is all up-front costs for a future that you anticipate. Do the minimum that keeps tomorrow effortless, never the maximum that feels virtuous today.

If you’re looking for specific sorting techniques, a few good ones illustrate the complexities and challenges of small comparison sets versus large data sets. Bubble Sort, Insertion Sort, Merge Sort, and Bucket Sort were all discussed in the book. I’ll include links below.

Caching: The Art of Strategic Forgetting

At any moment you can only keep a handful of things in close proximity. Everything else has to live somewhere slower: a desk drawer, a phone app, the cloud, a basement box, a storage locker across town. In computers, the primary resource is memory: fast storage for the information we want to use right now.

The human brain, like a computer, has strictly limited “fast memory.” When you were 16, you knew everyone’s name, but “everyone” was 100 people. Now you have 300 friends on Facebook, you’ve lived in 3 cities, held 8 jobs, collected 500 contacts on LinkedIn, and follow 6 favorite sports teams with ever-rotating rosters. It now takes you a minute to associate a name with a face; instant access turns into “remembering.” As such, forgetting is not a moral failing or a glitch; it is the only mechanism that keeps the system from choking on its own volume. This is caching.

The entire game of personal organization is about deciding what belongs in arm’s reach right now and what can be deliberately put aside, or forgotten, until the day it’s needed again. This, too, is caching.

Look around your house and you’ll see the universal memory hierarchy in physical form. The phone on your nightstand is Level 1 (L1) cache: instant access, tiny capacity. The kitchen counter and the chair where you drop your clothes are L2. The bedroom closet is main RAM. The labeled plastic totes in the garage or basement are disk storage. The off-site storage unit or your parents’ attic is archival tape. Each layer is 10–100× slower to retrieve from but 10–100× larger, exactly like a modern CPU. You already run this hierarchy intuitively; the trick is to make it intentional.

Real life is full of surprisingly sophisticated caching algorithms once you start noticing them. The goal is to determine what should shift from the faster, more convenient cache to the slower, larger, cheaper one. The technical term for the rule that decides what goes is the “eviction policy.”

Here are two common strategies you likely already employ; each has its place:

  1. FIFO (First In, First Out) governs the milk and eggs in the fridge and the winter coats that go to the attic every April. The oldest item, regardless of how much it has been used, is the candidate for removal. 
  2. LRU (Least Recently Used) evicts whatever you haven’t touched in the longest time. It’s brutally simple and comes surprisingly close to optimal.
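Since we’re talking algorithms, here is what an LRU policy looks like as code: a minimal Python sketch of a fixed-capacity “closet” (the capacity and shirt names are invented for illustration).

```python
from collections import OrderedDict

class LRUCloset:
    """A fixed number of 'hangers'; the least recently worn item gets evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # least recently used on the left

    def wear(self, item):
        if item in self.items:
            self.items.move_to_end(item)  # most recent moves to the right
        else:
            if len(self.items) >= self.capacity:
                evicted, _ = self.items.popitem(last=False)  # leftmost goes
                print(f"Donate: {evicted}")
            self.items[item] = True

closet = LRUCloset(3)
for shirt in ["blue", "red", "green", "blue", "plaid"]:
    closet.wear(shirt)

# "red" was least recently worn, so adding "plaid" evicts it.
print(list(closet.items))  # ['green', 'blue', 'plaid']
```

Python ships this exact idea for function results as `functools.lru_cache`: it memoizes the most recent calls and silently evicts the stalest ones when the cache fills.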

To be honest, LRU almost feels magical. My favorite real-world LRU system is one I stumbled into years ago and only later learned has cousins in the design world (the Noguchi Filing System for papers, or the “closet waterfall” trick for clothes). I like to hang my clothes; it’s fast and keeps them wrinkle-free. However, my closet space is limited, and I didn’t want to keep buying more hangers and packing my clothes tighter and tighter! So I own exactly as many hangers as, say, shirts. On laundry day, I always hang each newly cleaned shirt on the extreme right.

Over weeks and months the shirts I actually wear stay toward the right side of the closet while the ones I don’t touch slowly migrate to the left. Now, when I acquire a new shirt, it’s eviction time! The eviction decision is practically made for me: the shirts on the far left are the first candidates for the donation pile. Zero agonizing, zero spreadsheets, zero time investment. As a bonus, the right half of my closet is automatically sorted by true preference: my current favorites are always within arm’s reach on busy mornings.

Finding the right cache size and swap frequency is tricky, too; there are always trade-offs. Overload the fast layer and it becomes slower than the slow layer ever was: the cluttered desk, the phone home screen with eight pages of apps, the kitchen counter that disappears under mail and gadgets. Cache too little and you thrash: endlessly running to the basement, scrolling through 4,000 photos to find one, or keeping 47 tabs open because you’re afraid to close anything. What about a bigger high-speed cache? That’s a kitchen reno project to get more counter space: doable, but expensive!

The cleanest solution is usually a hard capacity limit: one bulletin board, one “inbox” tray, 15 browser tabs, 25 hangers, three shelves of “current” books. A fixed boundary turns a vague intention (“I should clean this up”) into automatic policy.
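The software version of a hard capacity limit is a fixed-size buffer that does the evicting for you. A quick Python sketch (the tray size and item names are arbitrary):

```python
from collections import deque

# A hard quota in one line: a "tray" that holds at most 5 items.
# When a 6th arrives, the oldest is dropped automatically (FIFO eviction).
inbox_tray = deque(maxlen=5)
for doc in ["bill", "flyer", "letter", "memo", "card", "invoice"]:
    inbox_tray.append(doc)

print(list(inbox_tray))  # ['flyer', 'letter', 'memo', 'card', 'invoice']
```

Once the boundary is fixed, the eviction policy runs itself; there is no “should I clean this up?” decision left to make.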

How to Cache Like a Computer Scientist

Step 1: Define your cache layers clearly
Arm’s reach → Same room → Same house → Off-site/cloud. Label them if necessary. Knowing the tiers prevents you from treating the kitchen counter like archival storage when you have a perfectly good “junk drawer”.

Step 2: Pick an eviction policy and stick to it
LRU beats everything else 90% of the time. Make it physical when possible: right-side closet rule, one in-one out for coffee mugs, single “on deck” shelf for books.

Step 3: Enforce hard quotas on the fast layers
12 items on the desk. 20 tabs. 5 projects on the whiteboard. A limit is a feature, not a bug—it forces the system to forget the right things.

Step 4: Schedule preventative “garbage collection”
Even if you’re not hitting the hard limits, it might make sense to open space today for something new that is likely coming. Schedule a few minutes and pre-emptively “evict” whatever has naturally fallen to the bottom of the LRU pile. In my neighborhood, our municipality literally schedules this weekly – it eliminates the need for loading up a trailer and heading to the dump!

Remember, the goal is not an empty desk or a minimalist closet; the goal is to have exactly the right things in arm’s reach the moment you need them, and to trust that everything else has been strategically forgotten until the day it earns its way back in.

Scheduling: The Upfront Task of Picking Tasks

Scheduling is, perhaps, the most seductive meta-work of all because it feels like working while producing exactly zero output. Don’t get me wrong, it’s very important! I expect that’s exactly why it’s so seductive.

However, we need to remember that we schedule for one brutally practical reason: to get the things done that actually need to be done, in the exact timeframe they need to be done. Nothing more, nothing less.

A perfect day is not the busiest day or the emptiest day—it’s the day when every promise, obligation, and ambition lands on time without heroic last-minute scrambling. If we do it right, it helps prioritize effectively by sequencing all our tasks in the best order to accomplish our most important objectives. While sorting is the upfront tax you pay for quick searches later, scheduling is the upfront tax you pay so that the list of future accomplishments matches your intentions.

A Few Key Strategies

The simplest and most powerful rule is Shortest Processing Time first. When dozens of small tasks are clogging the pipeline, knock out whatever can get done the most quickly. One quick reply, one fast signature, one 90-second filing—the queue shrinks faster than any other method, and the rest of the day suddenly has breathing room. David Allen’s famous “2-minute rule” in Getting Things Done (GTD) is this algorithm wearing a friendly face: if it can be finished faster than you can comfortably defer it, finish it now.
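Shortest Processing Time is literally just sorting the queue by estimated duration. A quick Python sketch (the tasks and minute estimates are made up) shows why it clears backlogs so fast:

```python
# Hypothetical to-do queue: (task, estimated minutes).
tasks = [("write report", 90), ("quick reply", 2),
         ("sign form", 1), ("file receipt", 3)]

# Shortest Processing Time: sort by duration, quickest first.
spt_order = sorted(tasks, key=lambda t: t[1])

def total_wait(order):
    """Sum of completion times: how long tasks sit in the queue overall."""
    elapsed, total = 0, 0
    for _, minutes in order:
        elapsed += minutes
        total += elapsed
    return total

print([name for name, _ in spt_order])
# ['sign form', 'quick reply', 'file receipt', 'write report']
print(total_wait(spt_order), "vs", total_wait(tasks))  # 106 vs 371
```

Same four tasks, same total effort, but under SPT the queue as a whole waits less than a third as long.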

When hard deadlines exist, the only rational rule becomes Earliest Due Date first. You work on whatever will blow up soonest if ignored. Francesco Cirillo’s Pomodoro Technique works not because 25 minutes is magic, but because the ticking clock forces you to confront the nearest explosion before it happens. The fixed intervals are just guiderails; the real impact is the relentless focus on “What must be shipped or delivered next?”

For everything that matters but doesn’t scream, “Hurry!”, we upgrade to Weighted Shortest Job First (or simply “highest impact per hour”). Eisenhower’s urgent-important matrix, Stephen Covey’s Quadrant II emphasis, and the 100-year-old Ivy Lee Method (write tomorrow’s six highest-value tasks in strict order and execute top-down) are all the same algorithm: multiply importance by urgency, then do the highest product first.
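Each of these rules is just a different sort key on the same task list. A small Python sketch makes that concrete (the task names, durations, due dates, and importance scores are all invented):

```python
# Hypothetical task list: duration in hours, due in days, importance 1-10.
tasks = [
    {"name": "tax filing",   "hours": 4, "due": 2, "importance": 9},
    {"name": "blog post",    "hours": 3, "due": 9, "importance": 8},
    {"name": "expense form", "hours": 1, "due": 5, "importance": 2},
]

# Earliest Due Date: work on whatever blows up soonest.
edd = sorted(tasks, key=lambda t: t["due"])

# Weighted Shortest Job First: highest impact per hour wins.
wsjf = sorted(tasks, key=lambda t: t["importance"] / t["hours"], reverse=True)

print([t["name"] for t in edd])   # ['tax filing', 'expense form', 'blog post']
print([t["name"] for t in wsjf])  # ['blog post', 'tax filing', 'expense form']
```

Notice the two policies disagree: the blog post jumps the queue under WSJF because its impact per hour is highest, even though its deadline is furthest out. Which key you sort by is the whole strategy.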

Something to consider as you plan tasks is your schedule type. Paul Graham named the split perfectly in his 2009 essay “Maker’s Schedule, Manager’s Schedule.” Are you a “Maker” or a “Manager”?

Of course, you may be asking why we would need to differentiate between the schedule types. Context switching. Context switching is the time, energy, and effort required to move from your current task to the next one.

Makers are writers, coders, designers, researchers who need long, uninterrupted blocks to produce anything that isn’t mediocre.

Managers are parents, executives, customer-support heroes, and (of course) managers who live on a different operating system with their calendar carved into 15–30-minute slices.

In “Maker Mode”, every switch costs 10–40 minutes: putting down what you’ve been working on, marking where you left off, picking up the new task, finding where you left off, and getting your head back into the space to work on it.

In “Manager Mode”, there’s usually a much smaller cost that makes 15-30 minute meetings/tasks actually valuable.

Most people aren’t 100% maker or 100% manager – almost everyone has deep work that needs prolonged focus. The practical solution is batching: set aside significant maker blocks (like a half-day) to get “deep” work done, then cluster manager obligations into the remaining windows. A good schedule is mostly a tax-avoidance scheme against context-switching waste.

Choosing the Right Strategy

Run this 60-second check whenever your system starts leaking obligations (or you’re just feeling a bit overwhelmed):

What is currently late or about to be late?
→ Switch to Earliest Due Date until you’re clear.

Are you drowning in 1–5 minute tasks (or small tasks, relative to the rest)?
→ Activate the 2-minute rule (Shortest Processing Time) until the backlog disappears.

Are you on time with everything but still not optimizing time, revenue, interest, or some other criterion?
→ Move to Weighted Priority (Eisenhower / Ivy Lee / Covey Quadrant II) immediately.

Is your deep work constantly fragmented?
→ Batch: block out 3–4 hour “maker” chunks. You will have to defend them!

Remember, while it may feel like having a precise schedule is the way to go, scheduling is only meta-work – it’s not the work. Do the right amount and let scheduling become the quiet mechanism that simply ensures the right things get done exactly when they need to be done—no drama, no heroics, no guilt.

Bringing the Three Together:
Enough Order, Not Perfect Order

Sorting, caching, and scheduling are not separate productivity hacks; they are the three interdependent layers that keep modern life from collapsing under its own weight.

Sorting pays once so tomorrow’s search costs almost nothing. 
Caching decides what deserves to live in the priceless top layer of your attention and space, then ruthlessly evicts everything else.
Scheduling decides the exact moment each surviving item finally gets its turn in the spotlight.

Remember Amazon’s warehouse? It runs on all three at planetary scale: items are pre-sorted by velocity, high-demand goods are cached closest to the packers, and every conveyor belt and truck is scheduled to the second. At a human scale, your kitchen, closet, and calendar run on the exact same principles. The algorithms are identical; only the stakes and the volume change.

Do too little and chaos quietly compounds into missed deadlines, lost objects, and chronic overwhelm. Do too much and you spend your life tuning systems instead of living inside them.

The lasting insight from computer science is surprisingly gentle: the optimal solution is almost never perfect order.

Sorting, Caching, and Scheduling are meta-work for the real work of Finding, Using, and Doing. When it comes to meta-work, do the smallest amount that prevents real pain. Tweak the algorithm and feel the calm, then move on with your life, and keep Aiming Up!


Digging Deeper

Sorting

Caching

Scheduling (aka Prioritization)