Category: programming

  • wp_unslash()

    A Legacy That Won’t Die

    If you’ve ever saved form data in WordPress and watched in horror as “O’Reilly” becomes “O\’Reilly” in your database, you’ve encountered one of the framework’s most confusing behaviors. This isn’t a bug. It’s a deliberate choice that WordPress makes every single time it boots, and understanding why requires traveling back to PHP’s most controversial feature.

    PHP’s Magic Quotes

    In the early 2000s, PHP introduced a feature called magic quotes. The reasoning seemed sound at the time: automatically escape all incoming user data to protect novice developers from SQL injection attacks. When a user submitted a form with “O’Reilly” in a text field, PHP would silently transform it into “O\’Reilly” before your code ever saw it. The backslash escaped the quote, theoretically making it safe to insert into SQL queries.

    The problem was that magic quotes solved one problem while creating dozens of others. Developers who knew what they were doing had to constantly check whether magic quotes were enabled on a particular server, leading to this pattern scattered throughout PHP applications:

    if ( get_magic_quotes_gpc() ) {
        $value = stripslashes( $_POST['name'] );
    } else {
        $value = $_POST['name'];
    }

    Different servers had different configurations. The same code behaved differently depending on whether the hosting provider had enabled magic quotes in php.ini. Worse, the feature didn’t actually provide real security; it was a band-aid on a wound that needed proper treatment: prepared statements and parameterized queries.

    The PHP community eventually recognized this mistake. PHP deprecated magic quotes in version 5.3, removed them in 5.4, and eliminated even the detection functions by PHP 8. The feature was universally acknowledged as a failed experiment, and good riddance.

    WordPress’s Fateful Decision

    Here’s where WordPress diverged from the rest of the PHP world. Instead of celebrating the death of magic quotes, WordPress decided to preserve them. Not just preserve them: actively implement them, regardless of PHP version or configuration.

    This happens through a function called wp_magic_quotes() that runs during WordPress’s boot process. Open wp-settings.php and you’ll find it on line 587 (at the time of writing), executing after plugins load but before themes initialize. At this moment, WordPress deliberately adds slashes to every value in $_GET, $_POST, $_COOKIE, and $_SERVER.
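    The effect is easy to reproduce in plain PHP. This is only an approximation of WordPress’s behavior, not its actual implementation (the real add_magic_quotes() lives in wp-includes/functions.php):

```php
<?php
// Sketch of what WordPress's slashing does to request data -- not the real
// add_magic_quotes(), just a plain-PHP approximation of its effect.
function slash_deep($value) {
    if (is_array($value)) {
        return array_map('slash_deep', $value);
    }
    return is_string($value) ? addslashes($value) : $value;
}

$request = ['title' => "O'Reilly", 'tags' => ["PHP's quirks"]];
var_dump(slash_deep($request));
// 'title' becomes "O\'Reilly" before your code ever reads it
```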

    Why would WordPress do this? The answer lies in backward compatibility and the sheer scale of the ecosystem. By 2012, when magic quotes were removed from PHP, thousands of WordPress plugins and themes had been written assuming input data would arrive pre-escaped. Core WordPress functions expected slashed data. Removing this behavior would create security vulnerabilities throughout the ecosystem as code that expected escaped data suddenly received raw input.

    The WordPress core team faced an impossible choice: break backward compatibility and potentially create security holes in thousands of sites, or maintain the legacy behavior and confuse every new developer who encounters it. They chose compatibility. In their view, one confused developer is better than one compromised website.

    The Consequences We Live With

    This decision means that in 2025, long after PHP abandoned magic quotes, WordPress developers must still deal with automatically slashed data. Every time you access $_POST, $_GET, or any other superglobal, WordPress has already modified it.

    This is where wp_unslash() enters the story. It’s WordPress’s official solution to its own deliberate slashing. Introduced in WordPress 3.6.0, the function is remarkably simple: it just calls stripslashes_deep(), which recursively removes backslashes from strings, arrays, and objects. But using it correctly requires understanding a pattern that’s easy to get wrong.
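    A plain-PHP approximation of stripslashes_deep() makes the mechanics concrete. This sketch handles nested arrays only; the real function also walks objects via map_deep():

```php
<?php
// Plain-PHP approximation of stripslashes_deep(): recursively remove
// backslash escapes from strings nested inside arrays.
function unslash_deep($value) {
    if (is_array($value)) {
        return array_map('unslash_deep', $value);
    }
    return is_string($value) ? stripslashes($value) : $value;
}

var_dump(unslash_deep(['title' => "O\\'Reilly"]));
// 'title' is back to "O'Reilly"
```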

    Here’s what happens when you forget it:

    // Without wp_unslash - data gets corrupted
    $title = sanitize_text_field( $_POST['title'] );
    update_option( 'page_title', $title );
    // Database now contains: "O\'Reilly"

    The correct pattern requires unslashing before sanitization:

    // With wp_unslash - data stays clean
    $title = sanitize_text_field( wp_unslash( $_POST['title'] ) );
    update_option( 'page_title', $title );
    // Database contains: "O'Reilly"

    The trap is subtle. Your code runs without errors. No exceptions get thrown. The data just silently arrives with extra backslashes, and you don’t notice until a client asks why their carefully typed content looks wrong.

    The situation gets more confusing because some WordPress functions expect slashed data while others don’t. Functions like wp_insert_post() and update_post_meta() expect their arguments to be pre-slashed, leading to this seemingly paradoxical pattern:

    $title = sanitize_text_field( wp_unslash( $_POST['title'] ) );
    wp_insert_post( wp_slash( array( 'post_title' => $title ) ) );

    You unslash the input, sanitize it, then slash it again before passing it to WordPress’s internal functions. This dance—unslash, clean, re-slash—appears throughout WordPress development, a constant reminder of the framework’s legacy.

    Why the REST API Chose Differently

    When WordPress introduced its REST API in version 4.4, the core team had an opportunity to break from this legacy. The REST API represents a modern interface, and forcing JSON data to follow PHP’s abandoned escaping conventions would be absurd.

    So the REST API makes a different choice. In WP_REST_Server::serve_request(), you’ll find this line:

    $request->set_query_params( wp_unslash( $_GET ) );

    The REST API unslashes all parameters immediately after receiving them, before any endpoint callbacks run. This means when you write a REST endpoint, you work with clean data from the start. No backslashes, no escaping artifacts, just the data as the client sent it.

    This architectural decision makes the REST API easier to work with than traditional WordPress form handling. It’s WordPress acknowledging that while backward compatibility demands maintaining the slash system for existing code, new APIs can and should work differently.

    Living With Legacy

    WordPress’s magic quotes system isn’t going anywhere. The backward compatibility concerns that justified it in 2012 still exist today. Removing it would break plugins, corrupt data, and create security vulnerabilities across millions of websites. The cost is too high, the benefit too uncertain.

    So we adapt. We learn the unslash-sanitize-slash pattern. We remember that wp_unslash() must come before sanitization. We document which functions expect slashed data and plan accordingly. We make this odd behavior second nature.

    The irony is that PHP’s failed experiment lives on in WordPress long after PHP itself moved on. Magic quotes died in the broader PHP world, but WordPress preserved them, frozen in code like an extinct species kept alive in captivity. Every call to wp_unslash() is a reminder that sometimes the hardest part of building software isn’t writing new features, it’s maintaining compatibility with decisions made fifteen years ago.

    Understanding wp_unslash() means understanding WordPress’s philosophy: backward compatibility trumps developer convenience. The framework will carry its history forward, even when that history includes other people’s mistakes. For developers, this means learning to work with WordPress as it is, not as we wish it would be.

    References

    WordPress Functions

    • wp_unslash() – Wrapper that calls stripslashes_deep() (wp-includes/formatting.php)
    • stripslashes_deep() – Uses map_deep() to recursively remove slashes (wp-includes/formatting.php)
    • wp_magic_quotes() – Calls add_magic_quotes() on superglobals during boot (wp-includes/load.php)
    • add_magic_quotes() – Recursively applies addslashes() to arrays (wp-includes/functions.php)
  • Managing Parallel Work

    Git Worktrees Part 2

    Once you start using worktrees, the immediate problem becomes obvious: you have multiple branches evolving independently, and they all need to stay synchronized with your main branch. When a hotfix lands in production or someone merges a feature that touches the same code you’re working on, every active worktree potentially needs updating. Do this wrong and you’ll spend your afternoon resolving merge conflicts. Do it right and the synchronization becomes almost invisible.

    The key insight is that worktrees share a single git database, which means operations in one worktree affect all the others instantly. When you fetch new commits, they’re immediately available everywhere. When you create a branch, it shows up in every worktree. This shared state is what makes synchronization practical, but it also requires understanding which operations are local to a worktree and which are global to the repository.

    Keeping Your Base Branches Fresh

    The most common synchronization task is pulling updates from your remote repository into your local branches. The naive approach—running git pull in every worktree—works but creates unnecessary complexity. A better pattern treats one worktree as the source of truth for your base branches.

    Start by designating your main worktree as the place where you update shared branches. When you want to sync with the remote, navigate there and pull:

    cd main
    git pull origin main

    Because all worktrees share the same git database, this updates the main branch everywhere. Your other worktrees don’t automatically check out the new commits, but the branch itself is updated. When you’re ready to integrate those changes into a feature branch, switch to that worktree and rebase:

    cd ../feature
    git rebase main

    This rebases your current branch onto the newly updated main. If you’ve been working on the new-auth-system branch in this worktree, the rebase incorporates all the latest main branch commits beneath your feature work.

    The reason this works cleanly is that rebasing is a per-branch operation, not a per-worktree operation. You’re not moving the worktree itself—you’re updating the branch that happens to be checked out there. Other worktrees tracking different branches are unaffected until you explicitly rebase them too.

    Handling Conflicts Across Worktrees

    When you rebase a branch in one worktree and hit conflicts, git stops and waits for you to resolve them. This happens entirely within that worktree’s context. Your other worktrees continue to work normally because they’re tracking different branches, and git’s conflict state is worktree-specific.

    Here’s where the isolation becomes valuable. If you’re mid-rebase in your feature worktree and get pulled into an urgent code review, you can just cd ../review and handle the review in a completely clean environment. The conflicted rebase in the feature worktree sits there waiting for you, exactly as you left it. When you return to resolve the conflicts, the process is standard git conflict resolution—fix the conflicting files, stage them, and continue the rebase:

    cd ../feature
    # Fix conflicts in the files git identified
    git add .
    git rebase --continue

    Once the rebase completes, that branch is synchronized with main. Your other feature branches in other worktrees still need their own rebases when you’re ready to update them.

    Integration Testing Across Features

    A powerful pattern enabled by worktrees is testing how multiple in-progress features work together before merging any of them. Create a dedicated integration worktree that exists purely for combining and testing branches:

    git worktree add -b integration integration main

    This creates a branch called integration based on main. Now you can merge multiple feature branches into it:

    cd integration
    git merge feature-auth
    git merge feature-billing
    git merge feature-notifications
    npm test

    This tests how all three features interact without touching any of the individual feature branches. If tests pass, you know the features are compatible. If tests fail, you know there’s an integration problem to solve before merging to main.

    The integration branch is disposable. After testing, you can delete it and recreate it fresh the next time you need integration testing. The pattern works because your feature branches remain untouched—you’re only testing a temporary combination.
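    Here is the whole lifecycle as a self-contained sketch you can run in a scratch directory (repository and branch names are invented for the demo):

```shell
#!/bin/sh
# Scratch demonstration of a disposable integration worktree: create it,
# run whatever integration testing you need, then throw both the worktree
# directory and the branch away.
set -e
cd "$(mktemp -d)"

git init -q .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
git branch -M main

# Create the throwaway worktree and branch
git worktree add -q -b integration integration main

# ... merge feature branches and run the test suite here ...

# Tear it down: directory first, then the branch (-D: it never merges)
git worktree remove integration
git branch -D integration
```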

    When you’re satisfied that features work together, merge them to main individually:

    cd ../main
    git merge feature-auth
    git push origin main
    git merge feature-billing
    git push origin main

    Each merge is a deliberate, tested step. This approach catches integration issues early while keeping your main branch clean and your feature branches focused.

    Cleaning Up Finished Work

    As you complete features and merge branches, worktrees accumulate. Some track branches that no longer exist. Others were created for one-time tasks and are no longer needed. Regular cleanup prevents your workspace from becoming cluttered with obsolete directories.

    When you’re done with a worktree, remove it:

    git worktree remove feature

    This deletes the working directory and unregisters the worktree. The branch itself remains in your repository—you’ve only removed the working directory where it was checked out. If the branch is also finished and merged, delete it separately:

    git branch -d new-auth-system

    The -d flag is safe because git prevents deleting unmerged branches. If you’re certain you want to delete an unmerged branch, use -D instead.

    Sometimes you’ll manually delete a worktree directory without using git worktree remove. Maybe you cleaned up your filesystem and forgot to tell git. When this happens, git still thinks the worktree exists. Clean up the stale metadata:

    git worktree prune

    This removes references to worktrees whose directories no longer exist. Running this periodically keeps your worktree list accurate.
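    A related command worth knowing is git worktree list, which shows every registered worktree. This self-contained sketch (scratch repository in a temp directory, names invented) shows a stale entry lingering and then being pruned:

```shell
#!/bin/sh
# Scratch demonstration of pruning: delete a worktree directory behind
# git's back, then clean up the stale registration.
set -e
cd "$(mktemp -d)"

git init -q .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
git branch -M main

git worktree add -q -b scratch scratch main
rm -rf scratch              # filesystem cleanup that git never heard about

git worktree list           # the stale scratch entry still appears
git worktree prune
git worktree list           # now it is gone
```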

    The Daily Rhythm

    After using worktrees for a while, a natural workflow emerges. Start your day by updating main in the main worktree. Rebase your active feature branches to incorporate those changes. When new work comes in, create feature branches in dedicated worktrees. When you need to review code, use your review worktree. When you finish work, merge to main and clean up the worktree.

    This rhythm eliminates the constant branch switching that fragments your attention. Each workspace maintains its own context. You move between them by changing directories, and your mental context switches cleanly because the filesystem itself shows you where you are.

    The mechanics are simple, but the effect is profound. You stop thinking about git as a sequence of checkouts and stashes and start thinking about it as a set of parallel workspaces. Each workspace evolves independently until you decide to synchronize them. The synchronization itself becomes a deliberate action rather than an automatic side effect of switching branches.

    That deliberateness is the real benefit. You control when contexts merge, when branches update, and when work moves between worktrees. Git stops interrupting your flow and starts supporting it.

  • Escape Branch Switching

    Git Worktrees Part 1

    If you’ve ever been deep in debugging a feature branch when someone asks you to quickly review a pull request or fix a production bug, you know the pain. You can’t just switch branches—you’ve got uncommitted changes, half-finished work, and a mental context that will take ten minutes to rebuild when you come back. So you either commit incomplete work with a message like “WIP – will fix later” or you stash everything and hope you remember what you were doing.

    There’s a better way. Git worktrees let you check out multiple branches simultaneously, each in its own directory. No more branch switching. No more stashing. No more losing your place.

    What Worktrees Actually Are

    A worktree is just a working directory connected to your repository. When you clone a repository normally, you get one worktree—the directory where your files live and where git status shows what’s changed. Git worktrees let you create additional working directories, each with its own checked-out branch, all sharing the same underlying repository data.

    Think of it this way: your repository is a database of commits, branches, and history. A worktree is a view into that database, showing you one particular branch’s files. With multiple worktrees, you can have multiple views open at the same time.

    The key insight is that worktrees share everything except the working directory itself. Commits made in one worktree are immediately visible in all others. Branch updates propagate instantly. But each worktree has its own set of files, its own staging area, and its own checked-out branch.

    Why This Changes Everything

    The most immediate benefit is eliminating context switching. When you’re working on a feature in one worktree and need to review a PR, you don’t stop what you’re doing. You just cd ../review-worktree and check out the PR branch there. Your feature branch work sits untouched in its directory, exactly as you left it. When you’re done with the review, you cd back and continue where you left off.

    This is particularly powerful when you’re working with AI agents or automation. You can have one worktree where an agent is running tests, another where you’re actively developing, and a third where you’re reviewing someone else’s changes. Each workspace operates independently without the chaos of switching branches or managing multiple clones of the repository.

    The second major benefit is that worktrees prevent common git footguns. Ever accidentally committed to the wrong branch? Run a destructive rebase when you meant to be on a different branch? Worktrees make it physically obvious which branch you’re on because you’re literally in a different directory. The file path in your terminal shows you exactly where you are.

    Getting Started

    Here’s what the basic setup looks like. Instead of cloning normally, you start with a bare repository:

    git clone --bare git@github.com:user/repo.git repo.git
    cd repo.git

    A bare repository has no working directory—it’s just the git database. This positions all your worktrees as equals rather than having one “main” directory and several “linked” ones. Now create your first worktree:

    git worktree add main main

    This creates a directory called main and checks out the main branch there. The first argument is the directory name, the second is the branch to check out. Add a few more:

    git worktree add -b review review
    git worktree add -b hotfix hotfix

    Now you have three directories. Notice the -b flags: git refuses to check out the same branch in two worktrees at once, so review and hotfix get their own placeholder branches instead of all pointing at main. That restriction is where the pattern gets interesting. You don’t do real work on these placeholder branches. Instead, you check out whatever branch the task needs:

    cd review
    git checkout pr/123
    
    cd ../hotfix
    git checkout -b fix/urgent-bug hotfix

    The worktree directories are just namespaces. What matters is which branch is checked out inside them.

    The Core Workflow

    Once you have worktrees set up, your daily workflow changes in a subtle but significant way. Instead of using git checkout to switch contexts, you use cd to change directories. This feels weird at first because cd seems too simple for something as important as switching branches. But that simplicity is the point.

    When you want to work on a feature, you navigate to its worktree and create a branch. When you want to review something, you navigate to your review worktree and check out that branch. When you need to make a hotfix, you navigate to your hotfix worktree and branch from there. Each context stays isolated and preserved. The mental model shift is from “I have one workspace and I change what’s in it” to “I have multiple workspaces and I move between them.”

    What You Need to Know

    Each worktree requires its own project setup. If your project has dependencies that need installing, you’ll run npm install or the equivalent in each worktree. This sounds like overhead, but it’s actually a feature—each worktree can have different dependencies installed, matching the branch you’re working on. When you switch between worktrees, you’re not wondering if your node_modules match your current branch state.

    Disk space is not a major concern. Worktrees share the git object database, so you’re not duplicating your entire repository history multiple times. You’re just duplicating working directories and the specific files checked out on each branch.

    You also need to be aware that git commands run in a worktree context. When you run git status in a worktree, it shows the status of that worktree only. When you commit, you’re committing in that worktree’s context. But branch operations—creating branches, merging, rebasing—affect the shared repository and are visible across all worktrees.
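    This split is easy to demonstrate. The following self-contained sketch (scratch repository in a temp directory, names invented for the demo) creates a branch in one worktree and lists it from another:

```shell
#!/bin/sh
# Scratch demonstration that worktrees share one git database: a branch
# created in one worktree is instantly visible from another.
set -e
cd "$(mktemp -d)"

git init -q base
cd base
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
git branch -M main

# Add a second worktree on its own branch
git worktree add -q -b review ../review main

# Create a branch from inside the review worktree...
cd ../review
git branch experiment

# ...and it is already visible from the first worktree, no syncing needed
cd ../base
git branch --list experiment
```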

    When Worktrees Make Sense

    Worktrees aren’t for everyone or every project. They make the most sense when you frequently switch contexts—reviewing PRs, jumping to urgent bugs, managing multiple features simultaneously. If you typically work on one thing at a time until it’s done, traditional branch switching might be simpler.

    The sweet spot is when you need to maintain multiple parallel workstreams in a single repository. This could be because you’re collaborating with others and need to review their work frequently. It could be because you’re managing both development and production hotfixes. Or it could be because you’re orchestrating multiple AI agents, each working on different aspects of the codebase simultaneously.

    In part two, we’ll dive into the practical patterns for managing these parallel workstreams, keeping them synchronized with your main branch, and merging work back together cleanly.

  • Why Your Boss Is Right About PHP’s empty()

    The function shows up everywhere: in Stack Overflow snippets, in AI-generated code, in that “quick fix” someone pushed at 5pm Friday. It’s convenient, it’s short, and it silently breaks your code in ways that are hard to debug. If your boss, your tech lead, or that senior developer keeps telling you not to use it, they’re right. Here’s why, written down so you never have to be reminded again.

    What empty() Actually Does

    Most developers know empty() checks for falsy values. What they might not realize is what it considers falsy:

    empty(null);        // true
    empty(false);       // true
    empty(0);           // true
    empty("0");         // true
    empty("");          // true
    empty([]);          // true
    empty($undefined);  // true

    That last line is the biggest problem. When empty() encounters an undefined variable, it quietly returns true, and the usual “undefined variable” warning is suppressed. This silence is almost always unexpected behavior.
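    A quick runnable demonstration of the difference, with $nope deliberately never defined:

```php
<?php
var_dump(empty(""));    // bool(true)
var_dump(empty("0"));   // bool(true)
var_dump(empty($nope)); // bool(true) -- $nope was never defined, yet no
                        // warning is emitted
var_dump(!$nope);       // bool(true) -- but this line DOES emit an
                        // "Undefined variable" warning
```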

    The Real Danger

    Consider this WordPress code where a developer switches two letters:

    $payment_gateway = get_option('payment_gateway');
    
    if (empty($paymentGateway)) {  // Wrong variable name
        $payment_gateway = 'test_mode';
    }

    The code runs without error. The typo ships to production. Payments silently start running through the test-mode gateway. A truthiness check like !$paymentGateway would have emitted an “undefined variable” warning and caught the bug immediately.

    Beyond typos, empty() masks function failures by treating them like empty values. WordPress’s get_post_meta() returns false when a post doesn’t exist, but returns an empty string when the field is genuinely empty. With empty(), these two very different states look identical:

    // Bad: Can't distinguish "not found" from "empty value"
    $price = get_post_meta($post_id, 'price', true);
    if (empty($price)) {
        return 'Price not set';  // But what if $post_id was invalid?
    }
    
    // Good: Check for false (error) separately from empty string
    $price = get_post_meta($post_id, 'price', true);
    if ($price === false) {
        return 'Product not found';  // Invalid post ID
    }
    if ($price === '' || $price === '0') {
        return 'Price not set';  // Legitimate empty value
    }

    What To Do Instead

    Use explicit checks.

    Writing if ($price === null || $price === '') is barely longer than if (empty($price)), but it generates warnings when you make typos instead of silently hiding them.

    The truthiness check !$var is better than empty() because it generates warnings for undefined variables, but whether it works correctly depends on your API’s design. Laravel’s Model::find() returns null for missing records and throws exceptions for actual errors, so if (!$user) works as expected. WordPress functions often return different falsy values for different purposes. For example, get_post_meta() returns false for an invalid post but an empty string for a missing field. In that case, !$price can’t distinguish between “the post doesn’t exist” and “the price is empty,” so you need explicit checks: if ($price === false) for errors versus if ($price === '') for empty values.
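    A stub makes the distinction concrete. Here lookup_price() is a made-up stand-in for a get_post_meta()-style API that returns false for “not found” and an empty string for “empty”:

```php
<?php
// Hypothetical stand-in for a get_post_meta()-style API:
// false means "record not found", '' means "field exists but is empty".
function lookup_price(int $id) {
    $db = [1 => '19.99', 2 => ''];
    return $db[$id] ?? false;
}

function describe_price(int $id): string {
    $price = lookup_price($id);
    if ($price === false) {
        return 'Product not found';   // error case
    }
    if ($price === '') {
        return 'Price not set';       // legitimate empty value
    }
    return "Price: $price";
}

echo describe_price(1), "\n"; // Price: 19.99
echo describe_price(2), "\n"; // Price not set
echo describe_price(9), "\n"; // Product not found
```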

    The one exception is checking array keys that might not exist, like form submissions or configuration arrays. Though even here, null coalescing is usually clearer:

    // Works, but obscures what you're checking
    if (!empty($_POST['subscribe'])) {
        subscribe_user_to_newsletter();
    }
    
    // Clearer: explicitly provides a default for missing key
    if (($_POST['subscribe'] ?? false)) {
        subscribe_user_to_newsletter();
    }
    
    // WordPress: Feature flags in configuration
    $features = get_option('plugin_features', []);
    if (($features['beta_mode'] ?? false)) {
        enable_beta_features();
    }

    The ?? operator makes it obvious you’re providing a default for a potentially missing key.

    The Bottom Line

    empty() treats undefined variables as normal values, returning true when it should throw a warning. This silently swallows the errors that matter most: typos, refactoring mistakes, and function failures. These are the bugs you’ll debug at 2am wondering why there’s no error message, no stack trace, no indication anything went wrong. Use explicit checks instead and let your code fail loudly when something breaks.

  • How the Internet Finds Your Server

    As a web developer, I’ve built websites for years without really understanding DNS. Lately it’s been coming up more frequently at work, and I realized it was time to dig deeper.

    This series is my journey to truly understand DNS from the ground up. If you’re a developer who’s ever felt lost when DNS issues arise, let’s learn together. We’ll start with the fundamentals and build up to troubleshooting complex issues like pros.

    The Problem DNS Solves

    Let’s start with the basics. Computers communicate using IP addresses – numbers like 192.0.2.1 or 2001:db8::1. But imagine telling your users to visit 192.0.2.1 to see your website. Not exactly memorable, right?

    DNS (Domain Name System) is the translation service that converts human-friendly names like hackipatch.com into the IP addresses that computers actually use. Think of it like your phone’s contacts app – you don’t memorize phone numbers anymore, you just tap on “Mom” and your phone handles the number.

    But here’s where it gets interesting: unlike your contacts app, DNS isn’t stored in one place. It’s distributed across thousands of servers worldwide, and the path from domain name to IP address involves multiple steps that can each fail in their own special way.

    The DNS Resolution Journey

    When you type example.com into your browser, here’s what actually happens:

    Step 1: Local Cache Check

    Your computer first checks if it already knows the answer. Operating systems cache DNS responses to speed things up. If you visited the site recently, you might get your answer here.

    Step 2: Recursive Resolver

    If there’s no cached answer, your computer asks a recursive resolver (usually your ISP’s DNS server or a public one like 8.8.8.8). This server does the heavy lifting of finding the answer for you.

    Step 3: Root Servers

    The resolver starts at the top of the DNS hierarchy with root servers. There are 13 root server addresses (a.root-servers.net through m.root-servers.net), though each address represents hundreds of actual servers worldwide.

    The root server doesn’t know where example.com is, but it knows who handles .com domains.

    Step 4: TLD Servers

    Next stop: the Top-Level Domain (TLD) servers. For .com, these servers are run by Verisign. The TLD server doesn’t know the IP for example.com either, but it knows which nameservers are authoritative for that domain.

    Step 5: Authoritative Nameservers

    Finally, we reach the authoritative nameservers for example.com. These servers have the actual DNS records and can give us the IP address.

    Step 6: The Response Journey

    The IP address travels back through the same path, getting cached at each step. That’s why DNS changes aren’t instant – all those caches need to expire.

    Your First DNS Query

    Enough theory. Let’s see this in action with dig, the Swiss Army knife of DNS tools.

    First, make sure you have dig installed:

    # macOS/Linux - usually pre-installed
    dig -v
    
    # Ubuntu/Debian
    sudo apt-get install dnsutils
    
    # macOS with Homebrew
    brew install bind

    Now let’s do a basic query:

    dig example.com

    You’ll see output like this:

    ; <<>> DiG 9.10.6 <<>> example.com
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 54789
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
    
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 4096
    ;; QUESTION SECTION:
    ;example.com.                   IN      A
    
    ;; ANSWER SECTION:
    example.com.            78641   IN      A       93.184.216.34
    
    ;; Query time: 28 msec
    ;; SERVER: 8.8.8.8#53(8.8.8.8)
    ;; WHEN: Thu Nov 07 10:23:45 PST 2024
    ;; MSG SIZE  rcvd: 56

    Let’s break down the important parts:

    • QUESTION SECTION: Shows what we asked for (A record for example.com)
    • ANSWER SECTION: The actual answer (IP address 93.184.216.34)
    • 78641: That’s the TTL in seconds (about 22 hours). This answer will be cached for that long
    • SERVER: Which DNS server answered us (8.8.8.8 in this case)

    Following the DNS Trail

    Want to see the full journey? Use dig +trace:

    dig +trace example.com

    This shows each step of the resolution:

    ; <<>> DiG 9.10.6 <<>> +trace example.com
    ;; global options: +cmd
    .                       2952    IN      NS      m.root-servers.net.
    .                       2952    IN      NS      a.root-servers.net.
    .                       2952    IN      NS      b.root-servers.net.
    [... more root servers ...]
    
    ;; Received 239 bytes from 8.8.8.8#53(8.8.8.8) in 28 ms
    
    com.                    172800  IN      NS      a.gtld-servers.net.
    com.                    172800  IN      NS      b.gtld-servers.net.
    [... more .com servers ...]
    
    ;; Received 1170 bytes from 192.5.5.241#53(f.root-servers.net) in 72 ms
    
    example.com.            172800  IN      NS      a.iana-servers.net.
    example.com.            172800  IN      NS      b.iana-servers.net.
    
    ;; Received 195 bytes from 192.52.178.30#53(k.gtld-servers.net) in 124 ms
    
    example.com.            86400   IN      A       93.184.216.34
    
    ;; Received 56 bytes from 199.43.135.53#53(a.iana-servers.net) in 20 ms

    Each section shows a hop in our journey:

    1. Root servers pointing to .com servers
    2. .com servers pointing to example.com’s nameservers
    3. example.com’s nameservers giving us the final IP

    Key Concepts That Matter

    TTL (Time To Live)

    TTL is how long (in seconds) a DNS answer can be cached. Lower TTL means changes propagate faster but more DNS queries. Higher TTL means better performance but slower updates.

    Common TTL values:

    • 300 (5 minutes): When you’re about to make changes
    • 3600 (1 hour): Good default for most records
    • 86400 (24 hours): For stable records that rarely change

    Authoritative vs Recursive Servers

    • Authoritative servers have the actual DNS records for a domain
    • Recursive servers find answers by asking other servers

    Your ISP runs recursive servers. Your domain registrar or DNS host runs authoritative servers.

    Why DNS Changes Aren’t Instant

    When you update a DNS record, here’s what needs to happen:

    1. Your authoritative servers get the update (instant)
    2. Cached answers need to expire (up to the old TTL)
    3. Some ISPs ignore TTL and cache longer (annoying but real)

    Pro tip: Lower your TTL a day before making changes, then raise it back after.

    Quick Wins

    Here are practical things you can do right now:

    Check if DNS is Your Problem

    # Compare your local DNS to Google's
    dig @8.8.8.8 yourdomain.com
    dig yourdomain.com
    
    # Check what authoritative servers say
    dig @ns1.yourdnshost.com yourdomain.com

    If these show different IPs, you’ve got a caching issue.

    See All Records for a Domain

    dig yourdomain.com ANY

    Note: Some servers restrict ANY queries now, but it’s worth trying.

    Debug Slow DNS

    # Time your DNS queries
    dig yourdomain.com | grep "Query time"
    
    # Try different DNS servers
    dig @1.1.1.1 yourdomain.com    # Cloudflare
    dig @8.8.8.8 yourdomain.com    # Google
    dig @9.9.9.9 yourdomain.com    # Quad9

    What’s Next?

    Now that you understand the basics of DNS resolution, you’re ready to dive into specific record types. In the next post, we’ll explore A, AAAA, and CNAME records – the workhorses of DNS that you’ll use daily.

    For now, try running dig +trace on your own domain. See if you can identify each step in the resolution process. And next time DNS issues pop up at work, you’ll know exactly where to start looking.

    Remember: DNS isn’t magic, it’s just a distributed phone book. Once you know how to read it, debugging becomes much simpler.