Making Honesty the Best Policy
Amid the flood of daily interactions, wouldn’t it lighten your load if your world ran smoothly on trust and transparency? That would be a game changer. The networking and game theory sections of Algorithms to Live By, by Brian Christian and Tom Griffiths, inspire this process of making honesty the best policy. In technical terms, we start with network connections, we identify network issues, and we engineer optimal communication-management protocols. In human terms, we make connections with the people around us, we identify the challenges and overload issues, and we chart a course to make communication fluid and productive. Robust communications are vital. Emerging issues must be flagged. Good designs promote candidness while easing the mental toll of endless what-ifs and the need to guess what others are actually thinking.
A thorny problem in any information exchange without a constant, open line of communication is confirming that a message was received and understood. An example from the book is the “Two Generals Problem”, where two Generals need to attack an enemy stronghold at the same time, from opposite sides, to be successful.
Each General needs to confirm the other is ready to advance. Communications are down, so they need to send a soldier across dangerous terrain to communicate their level of preparedness and a time to advance. General A sends a soldier to indicate they’re ready to go at a specific time. How will General A know that General B received the message, that the soldier made it? General B could send the soldier back. But how would General B know that General A received that message and that they could advance together? They don’t want to advance separately. General B wouldn’t know unless General A sent an acknowledgement of that message, too. This could go on for a while. At what point can the loop end?
Clearly, that’s a familiar challenge we experience every day with emails, texts, or any other form of communication that isn’t face-to-face and relies on “packets” of information. Computer scientists face the same problem with communications over networks. How do computer networks handle it? The clever solution, proposed decades ago and still in use today, is what we commonly refer to, in computer jargon, as TCP, the Transmission Control Protocol.
Conceptually, this is how it works:
- Sender sends a packet (with a sequence/message number).
- Sender sets a retransmission timeout (RTO) timer for that packet.
- Receiver sends back an ACK (acknowledgement) confirming receipt, acknowledging the sequence number they received, and providing their own sequential message number.
- If the ACK arrives before the RTO timer expires → success; the timer is cancelled and the packet is not retransmitted.
- If the RTO timer expires without an ACK → assume the packet was lost, and the Sender retransmits it. This continues until the Sender receives the ACK for that specific sequence number.
- The loop is closed when the Sender sends a final message (FIN) and that FIN is acknowledged. The Receiver does the same: it sends its own FIN and retransmits it after each timeout until it receives the Sender’s ACK.
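As a toy sketch of the loop above (the `send_with_retransmit` function and its `drop_pattern` channel are invented for illustration; real TCP is far more elaborate), assuming an expired timer simply means “send it again”:

```python
def send_with_retransmit(messages, drop_pattern, max_tries=10):
    """Stop-and-wait sketch: send each numbered packet, wait for its ACK,
    and retransmit after each (simulated) RTO expiry until the ACK arrives.
    `drop_pattern` is a hypothetical lossy channel: the set of
    (sequence number, attempt number) pairs that get lost in transit."""
    log = []
    for seq, _payload in enumerate(messages, start=1):
        for attempt in range(1, max_tries + 1):
            if (seq, attempt) in drop_pattern:
                log.append(f"seq {seq} attempt {attempt}: RTO expired, retransmit")
                continue  # the timer fires; the loop retransmits
            log.append(f"seq {seq} attempt {attempt}: ACK received")
            break
        else:
            raise TimeoutError(f"packet {seq} was never acknowledged")
    return log

# The first delivery of packet 2 is lost; the sender retries after the RTO.
for line in send_with_retransmit(["ready?", "attack at dawn"], {(2, 1)}):
    print(line)
```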
Mapped onto the two Generals:
- General A, the Sender, sends a soldier carrying a numbered message (#1).
- General A knows it takes 2 hours each way for the soldier. So allowing a bit of margin, 5 hours is a reasonable RTO.
- General B sends back an ACK (acknowledgement) confirming the receipt of Message 1.
- If the soldier returns before the 5 hours are up, General A knows the message got through.
- If 5 hours have passed and the soldier isn’t back with the ACK of Message 1, General A sends another soldier with Message 1.
- This continues, with sequenced messages going back and forth, until they have all the information they need to advance. At that point, one General sends a FIN message for the other to acknowledge. When that FIN is ACK’d, the other General sends a FIN of his own and waits for an ACK; if it doesn’t arrive in a timely manner, he assumes it was lost and sends the FIN again.
The essentials are sequencing the messages, acknowledging each one, and signaling when each party has finished communicating, retransmitting anything whose ACK hasn’t yet arrived.
Of course, with all those packets and ACKs being passed around, networks can get noisy. There are tools and mechanisms that reduce the noise and ease congestion. The key point: communication is tricky, but we have mechanisms to ensure the messages get through!
With that in mind, think of any communication exchange as a flow of signals: words, intentions, and emotions moving through channels that can get noisy fast. Networking protocols keep data reliable over shaky lines, and we can borrow those ideas to reduce dropped messages in our own communications.
Robust communications matter because weak ones let small misunderstandings grow into bigger headaches. We spend time chasing clarifications or patching trust that never needed testing in the first place. The fix starts with simple checks borrowed from networking tools. Take acknowledgments, or ACKs: after sending a message, ask lightly, “Did that land okay?” If silence follows, retry with growing pauses—exponential backoff—so you don’t flood the other side. Wait a minute first, then two, then four, then eight…and so on. This turns quiet stretches from worry into just another glitch to handle calmly.
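The retry schedule can be sketched in a few lines (the `backoff_delays` helper and its 60-minute cap are assumptions for illustration, not a real API):

```python
def backoff_delays(base_minutes=1.0, retries=5, cap_minutes=60.0):
    """Exponential backoff: start at `base_minutes`, double after each
    silent retry, and cap the pause so it doesn't grow without bound."""
    return [min(base_minutes * 2**i, cap_minutes) for i in range(retries)]

print(backoff_delays())  # [1.0, 2.0, 4.0, 8.0, 16.0]
```

The cap matters in practice: without it, a long outage would push the next check-in out indefinitely.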
One more networking trick worth stealing is the sliding window approach: share a few related points at once, then pause for a quick check-in rather than waiting for full confirmation on every single item. It keeps momentum without risking a pile-up of unaddressed confusion.
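As a loose analogy only (a real TCP sliding window also tracks ACKs and bytes in flight; the `sliding_batches` helper here is invented), the idea of batching points between check-ins might look like:

```python
def sliding_batches(points, window=3):
    """Group talking points so you share up to `window` related items,
    then pause for one check-in, instead of confirming every single item."""
    return [points[i:i + window] for i in range(0, len(points), window)]

print(sliding_batches(["scope", "budget", "deadline", "risks", "owners"]))
# [['scope', 'budget', 'deadline'], ['risks', 'owners']]
```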
As connections grow, strain shows up. Spotting it early keeps minor hiccups from turning into breakdowns.
Latency, the time it takes to get the ACK, is an obvious flag: replies that used to come quickly now drag. Buffer bloat is subtler: all the messages are received, but the receiver can’t process them fast enough to clear the queue. For example, when email threads slow from hours to days, it could be a sign the other person’s mental queue is full. Inconsistent pacing makes planning feel shaky. Messages taken out of context, repeated “wait, what did you mean?” loops, and sudden defensiveness all point to misaligned intents piling stress on everyone.
These signs aren’t accusations; they’re signals the channel is congested. Ignoring them forces constant second-guessing: “Are they upset? Busy? Ignoring me?” That mental churn is exactly what we want to reduce.
Networking’s answer is AIMD—Additive Increase, Multiplicative Decrease. Ramp up slowly when things flow, but cut back sharply at the first hint of trouble. When communication seems to be going well, slowly adding to the information processing required is fine. But when communications become heated, halve your side of the conversation and listen more.
The other side of the coin is exponential backoff. When the discussion is heated and AIMD is being exercised by your partner and you can feel the latency, you can decrease the retry rate after a few misfires, giving space for the system to clear and to be processed by your partner.
If you were to graph this communication pattern, it looks like a sawtooth: a gradual increase in data followed by sudden, sharp drops until the volume can be adequately managed. The cause could be buffer bloat or new players entering the communication mix. Consider what happens when you’re having a conversation and another person enters the room, looking to talk with one of you. Your data volume needs to drop quickly to accommodate the new partner. Cutting it in half instantly reduces latency, and with additive increase it isn’t long until maximum throughput is achieved again. The pattern doesn’t produce perfect smoothness; it’s a stable fairness that assumes temporary overload.
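That sawtooth is easy to see in a rough simulation (the `aimd_trace` name and the `congested` step set are invented stand-ins for real congestion signals):

```python
def aimd_trace(steps, congested, window=1.0, add=1.0, factor=0.5):
    """Additive Increase, Multiplicative Decrease: grow the sending rate
    by a constant on each smooth step; cut it sharply on a congested one.
    `congested` is the set of step indices where trouble appears."""
    trace = []
    for t in range(steps):
        if t in congested:
            window = max(1.0, window * factor)  # back off hard
        else:
            window += add  # probe gently for more capacity
        trace.append(window)
    return trace

# Rises 2, 3, 4, 5 ... then halves at steps 4 and 8: a sawtooth.
print(aimd_trace(10, congested={4, 8}))
```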
Getting the technical side of communications down is one thing. Getting intent right is trickier! And then add into that mix: personal agendas, lies, deceit, or just a lack of complete messaging. Game theory talks about common knowledge—that chain of “I know you know I know.” Think “iocaine powder” in The Princess Bride. Left unchecked, it spirals into exhausting loops. The practical move is to say assumptions out loud: “I’m reading this as you wanting to wrap up by Friday—is that right?” In group settings, confirming that everyone knows that everyone else knows the key details—like a shared deadline—heads off last-minute chaos.
The real payoff comes when we redesign the game itself so honesty, clarity, and candor become the clearest winning move. Not fraud. Not secrets. Not deception. Most scenarios practically invite deception, or at least less than full disclosure. Classic haggling is a perfect example: buyers downplay interest to lower prices, sellers puff up value, and both sides waste energy bluffing because it’s a one-shot, zero-sum round. Workplaces can fall into the same trap with opaque bonus calculations or annual reviews where everyone inflates accomplishments, expecting others to do the same.
A concept to be familiar with is the “Nash equilibrium”: a state where no participant can improve their outcome by unilaterally changing their strategy. Two classic examples are the Prisoner’s Dilemma and the Tragedy of the Commons. In these scenarios, the individually rational move produces a collectively worse outcome, and vice versa. In our examples of haggling and workplace embellishment, the result is a Nash equilibrium of mild dishonesty: nobody gains by being the only truthful one, so trust erodes and managers spend time verifying instead of building.
Asymmetric information makes it worse. Think used-car sales where sellers know the flaws but buyers don’t: the “market for lemons” problem. Honest sellers get driven out, buyers assume the worst, and everyone over-thinks every detail. Typical eBay-style auctions suffer a similar shortcoming: I have to bid higher than what I think you think an item is worth, not just what I think it’s worth. If I place a bid at my perceived value, then as soon as I lose my position I have to wonder whether the current top bidder knows something I don’t that makes the item more valuable. And if I counter with a higher bid, will they think the same thing?
Good designs flip the incentives. I was recently introduced to the concept of the Vickrey auction. I love the concept.
A Vickrey auction is a sealed-bid auction where:
- All bidders submit their bids simultaneously and privately (sealed — no one sees others' bids).
- The highest bidder wins the item.
- The winner pays the price of the second-highest bid (not their own bid).
This is also known as a second-price sealed-bid auction.
In a simple example, there are 3 sealed bids:
- Bidder 1: $100
- Bidder 2: $80
- Bidder 3: $60
- The winner is Bidder 1.
- But they pay the second-highest bid: $80.
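The rule fits in a few lines of code (the `vickrey_winner` helper is hypothetical, just mirroring the example above):

```python
def vickrey_winner(bids):
    """Second-price sealed-bid auction: the highest bidder wins,
    but pays the second-highest bid, not their own."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]   # highest bidder takes the item
    price = ranked[1][1]    # ...at the runner-up's price
    return winner, price

print(vickrey_winner({"Bidder 1": 100, "Bidder 2": 80, "Bidder 3": 60}))
# ('Bidder 1', 80)
```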
I love it because bidding your true valuation is the dominant strategy!
- If you bid higher than your true value, you risk winning and paying more than it's worth to you.
- If you bid lower, you risk losing an item you could have won profitably, at a price below what you perceive as its value.
So, honesty is always optimal, regardless of what others do.
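A toy payoff check makes the dominance concrete (the `payoff` helper and the rival bids are illustrative assumptions): whatever the rival’s top bid turns out to be, bidding your true value of $100 never does worse than shading down to $80 or inflating to $130.

```python
def payoff(my_bid, my_value, rivals_max):
    """Vickrey payoff: you win iff you outbid the rivals' best bid,
    and then you pay their best bid, not your own."""
    return my_value - rivals_max if my_bid > rivals_max else 0

for rivals_max in (60, 90, 120):
    honest = payoff(100, 100, rivals_max)            # bid your true value
    assert payoff(80, 100, rivals_max) <= honest     # shading never helps
    assert payoff(130, 100, rivals_max) <= honest    # inflating never helps
print("truthful bidding is never beaten")
```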
Honesty is the best policy. Bidding your true value is dominant—you can’t do better by lying. No regret if you win (you didn’t overpay) and no regret if you lose (someone genuinely valued it more).
William Vickrey, after whom Vickrey auctions are named, was a Nobel Prize-winning economist who showed that truthful bidding can be a dominant strategy in certain auctions. Roger Myerson, another Nobel laureate, helped formalize the Revelation Principle, which proves that “any game that requires strategically masking the truth can be transformed into a game that requires nothing but simple honesty.”
I can’t help but agree with Myerson’s colleague, Paul Milgrom: “It’s one of those results that as you look at it from different sides, on the one side, it’s just absolutely shocking and amazing, and on the other side, it’s trivial. And that’s totally wonderful, it’s so awesome: that’s how you know you’re looking at one of the best things you can see.” (Christian, Brian; Griffiths, Tom. Algorithms to Live By: The Computer Science of Human Decisions (p. 253). Penguin Canada. Kindle Edition.)
In short: We can always redesign the rules so honest reporting is a stable, self-enforcing choice.
It sounds great, but is it realistic? What does this look like in “real life”?
- Repeated interactions and long-term relationships are important; when a relationship is short-term, a track record of truthfulness and honesty has to stand in for them.
- Consistent candor earns priority or flexibility next time. Platforms like ride-sharing or freelance sites bake this in with ratings, making reliability the path of least resistance.
- Commitment devices add another layer. Escrow in deals, or upfront non-refundable deposits in group plans, remove the temptation to back out later. Both sides can relax because the structure enforces follow-through.
- Even simple aggregated, anonymous feedback can shift dynamics, which is why “verified purchaser” labels are a thing on eCommerce sites.
None of these are magic bullets—they need agreement upfront—but they consistently outperform setups that punish vulnerability. The common thread is aligning self-interest with openness: shared wins for candor, clear costs for games.
When this happens, deceitful games drop away, real priorities surface, and mental effort shrinks.
Again, “Don’t hate the player, hate the game”. In fact, this is our opportunity to change the game!
Test any design by asking: if everyone acts truthfully, do we all come out ahead? If one person deviates alone, do they lose ground? When the answer is yes, the equilibrium tilts cooperative, and the mental overhead of constant strategizing fades. When people can speak freely and deceit risks group consequences, the game has changed for the better.
Mother Teresa implores us to be “honest and transparent” even when it makes us vulnerable. What if we could change the game so that honesty and transparency themselves become the winning strategy, and the best outcome for everyone? Talk about Aiming Up!
For Further Study & Investigation
- The Two Generals' Problem: An exploration of the classic coordination challenge in distributed systems and its extensions, including implications for blockchain consensus protocols.
- Evolution of TCP and Modern Alternatives: A historical overview of TCP/IP development and a comparison with newer protocols like QUIC that improve handling of latency and congestion.
- Exponential Backoff and AIMD in Practice: An investigation into real-world uses of these congestion control mechanisms in networks and their psychological parallels in human conflict resolution.
- Nash Equilibrium and Classic Dilemmas: An analysis of foundational game theory concepts like the Prisoner's Dilemma, including iterated versions that promote long-term cooperation.
- Vickrey Auctions and Auction Theory: A study of second-price sealed-bid auctions and their real-world applications in settings like spectrum sales and online advertising platforms.
- The Revelation Principle and Incentive-Compatible Mechanisms: An examination of how mechanism design can transform games to make truthful revelation the dominant strategy.
- Asymmetric Information and the Market for Lemons: A review of George Akerlof's model of adverse selection and modern solutions like signaling and screening in various markets.
- Redesigning Games for Honesty in Daily Life: An exploration of behavioral tools such as commitment devices and reputation systems that align self-interest with openness in everyday interactions.
- Common Knowledge and Coordination Failures: A deeper look into the role of mutual knowledge in decision-making, drawing from philosophy, economics, and practical negotiation strategies.

