Tag: learning

  • Managing Parallel Work

    Managing Parallel Work

    Git Worktrees Part 2

    Once you start using worktrees, the immediate problem becomes obvious: you have multiple branches evolving independently, and they all need to stay synchronized with your main branch. When a hotfix lands in production or someone merges a feature that touches the same code you’re working on, every active worktree potentially needs updating. Do this wrong and you’ll spend your afternoon resolving merge conflicts. Do it right and the synchronization becomes almost invisible.

    The key insight is that worktrees share a single git database, which means operations in one worktree affect all the others instantly. When you fetch new commits, they’re immediately available everywhere. When you create a branch, it shows up in every worktree. This shared state is what makes synchronization practical, but it also requires understanding which operations are local to a worktree and which are global to the repository.

    Keeping Your Base Branches Fresh

    The most common synchronization task is pulling updates from your remote repository into your local branches. The naive approach—running git pull in every worktree—works but creates unnecessary complexity. A better pattern treats one worktree as the source of truth for your base branches.

    Start by designating your main worktree as the place where you update shared branches. When you want to sync with the remote, navigate there and pull:
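
    cd main
    git pull origin main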

    Because all worktrees share the same git database, this updates the main branch everywhere. Your other worktrees don’t automatically check out the new commits, but the branch itself is updated. When you’re ready to integrate those changes into a feature branch, switch to that worktree and rebase:

    cd ../feature
    git rebase main

    This rebases your current branch onto the newly updated main. If you’ve been working on the new-auth-system branch in this worktree, the rebase incorporates all the latest main branch commits beneath your feature work.

    The reason this works cleanly is that rebasing is a per-branch operation, not a per-worktree operation. You’re not moving the worktree itself—you’re updating the branch that happens to be checked out there. Other worktrees tracking different branches are unaffected until you explicitly rebase them too.
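
    If you want to convince yourself, a quick sanity check from the feature worktree (directory names as in the layout above):

    git log --oneline main..HEAD   # your feature commits, now replayed on top of the updated main
    git -C ../main status          # the main worktree is untouched, still on main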

    Handling Conflicts Across Worktrees

    When you rebase a branch in one worktree and hit conflicts, git stops and waits for you to resolve them. This happens entirely within that worktree’s context. Your other worktrees continue to work normally because they’re tracking different branches, and git’s conflict state is worktree-specific.

    Here’s where the isolation becomes valuable. If you’re mid-rebase in your feature worktree and get pulled into an urgent code review, you can just cd ../review and handle the review in a completely clean environment. The conflicted rebase in the feature worktree sits there waiting for you, exactly as you left it. When you return to resolve the conflicts, the process is standard git conflict resolution—fix the conflicting files, stage them, and continue the rebase:

    cd ../feature
    # Fix conflicts in the files git identified
    git add .
    git rebase --continue

    Once the rebase completes, that branch is synchronized with main. Your other feature branches in other worktrees still need their own rebases when you’re ready to update them.

    Integration Testing Across Features

    A powerful pattern enabled by worktrees is testing how multiple in-progress features work together before merging any of them. Create a dedicated integration worktree that exists purely for combining and testing branches:

    git worktree add -b integration integration main

    This creates a new integration directory with a branch called integration checked out, based on main. Now you can merge multiple feature branches into it:

    cd integration
    git merge feature-auth
    git merge feature-billing
    git merge feature-notifications
    npm test

    This tests how all three features interact without touching any of the individual feature branches. If tests pass, you know the features are compatible. If tests fail, you know there’s an integration problem to solve before merging to main.

    The integration branch is disposable. After testing, you can delete it and recreate it fresh the next time you need integration testing. The pattern works because your feature branches remain untouched—you’re only testing a temporary combination.
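
    Teardown is quick. A sketch, assuming the layout above:

    cd ..
    git worktree remove integration
    git branch -D integration   # -D because the throwaway merges were never merged to main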

    When you’re satisfied that features work together, merge them to main individually:

    cd ../main
    git merge feature-auth
    git push origin main
    git merge feature-billing
    git push origin main

    Each merge is a deliberate, tested step. This approach catches integration issues early while keeping your main branch clean and your feature branches focused.

    Cleaning Up Finished Work

    As you complete features and merge branches, worktrees accumulate. Some track branches that no longer exist. Others were created for one-time tasks and are no longer needed. Regular cleanup prevents your workspace from becoming cluttered with obsolete directories.

    When you’re done with a worktree, remove it:

    git worktree remove feature

    This deletes the working directory and unregisters the worktree. The branch itself remains in your repository—you’ve only removed the working directory where it was checked out. If the branch is also finished and merged, delete it separately:

    git branch -d new-auth-system

    The -d flag is safe because git prevents deleting unmerged branches. If you’re certain you want to delete an unmerged branch, use -D instead.

    Sometimes you’ll manually delete a worktree directory without using git worktree remove. Maybe you cleaned up your filesystem and forgot to tell git. When this happens, git still thinks the worktree exists. Clean up the stale metadata:

    git worktree prune

    This removes references to worktrees whose directories no longer exist. Running this periodically keeps your worktree list accurate.

    The Daily Rhythm

    After using worktrees for a while, a natural workflow emerges. Start your day by updating main in the main worktree. Rebase your active feature branches to incorporate those changes. When new work comes in, create feature branches in dedicated worktrees. When you need to review code, use your review worktree. When you finish work, merge to main and clean up the worktree.
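
    In shell terms, the morning routine is only a couple of lines (directory names follow the setup from part one):

    cd main && git pull origin main     # refresh the shared base branch
    cd ../feature && git rebase main    # replay your feature work on top of it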

    This rhythm eliminates the constant branch switching that fragments your attention. Each workspace maintains its own context. You move between them by changing directories, and your mental context switches cleanly because the filesystem itself shows you where you are.

    The mechanics are simple, but the effect is profound. You stop thinking about git as a sequence of checkouts and stashes and start thinking about it as a set of parallel workspaces. Each workspace evolves independently until you decide to synchronize them. The synchronization itself becomes a deliberate action rather than an automatic side effect of switching branches.

    That deliberateness is the real benefit. You control when contexts merge, when branches update, and when work moves between worktrees. Git stops interrupting your flow and starts supporting it.

  • Escape Branch Switching

    Escape Branch Switching

    Git Worktrees Part 1

    If you’ve ever been deep in debugging a feature branch when someone asks you to quickly review a pull request or fix a production bug, you know the pain. You can’t just switch branches—you’ve got uncommitted changes, half-finished work, and a mental context that will take ten minutes to rebuild when you come back. So you either commit incomplete work with a message like “WIP – will fix later” or you stash everything and hope you remember what you were doing.

    There’s a better way. Git worktrees let you check out multiple branches simultaneously, each in its own directory. No more branch switching. No more stashing. No more losing your place.

    What Worktrees Actually Are

    A worktree is just a working directory connected to your repository. When you clone a repository normally, you get one worktree—the directory where your files live and where git status shows what’s changed. Git worktrees let you create additional working directories, each with its own checked-out branch, all sharing the same underlying repository data.

    Think of it this way: your repository is a database of commits, branches, and history. A worktree is a view into that database, showing you one particular branch’s files. With multiple worktrees, you can have multiple views open at the same time.

    The key insight is that worktrees share everything except the working directory itself. Commits made in one worktree are immediately visible in all others. Branch updates propagate instantly. But each worktree has its own set of files, its own staging area, and its own checked-out branch.
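
    You can see the sharing directly: commit in one worktree and the commit is instantly part of the history every other worktree sees. A sketch with illustrative directory and branch names:

    cd feature                         # has the branch my-feature checked out
    git commit -am "Add login form"
    cd ../review
    git log --oneline -1 my-feature    # the new commit is already visible here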

    Why This Changes Everything

    The most immediate benefit is eliminating context switching. When you’re working on a feature in one worktree and need to review a PR, you don’t stop what you’re doing. You just cd ../review-worktree and check out the PR branch there. Your feature branch work sits untouched in its directory, exactly as you left it. When you’re done with the review, you cd back and continue where you left off.

    This is particularly powerful when you’re working with AI agents or automation. You can have one worktree where an agent is running tests, another where you’re actively developing, and a third where you’re reviewing someone else’s changes. Each workspace operates independently without the chaos of switching branches or managing multiple clones of the repository.

    The second major benefit is that worktrees prevent common git footguns. Ever accidentally committed to the wrong branch? Or run a destructive rebase when you meant to be on a different one? Worktrees make it physically obvious which branch you’re on because you’re literally in a different directory. The file path in your terminal shows you exactly where you are.

    Getting Started

    Here’s what the basic setup looks like. Instead of cloning normally, you start with a bare repository:

    git clone --bare git@github.com:user/repo.git repo.git
    cd repo.git

    A bare repository has no working directory—it’s just the git database. This positions all your worktrees as equals rather than having one “main” directory and several “linked” ones. Now create your first worktree:

    git worktree add main main

    This creates a directory called main and checks out the main branch there. The first argument is the directory name, the second is the branch to check out. Add a few more:

    git worktree add -b review review
    git worktree add -b hotfix hotfix

    Now you have three directories. Notice that review and hotfix got their own placeholder branches rather than main, because git won’t let you check out the same branch in two worktrees at once. That restriction is where the pattern gets interesting: you don’t actually work on these placeholder branches, and you don’t work directly on main either. Instead, you check out or create the branches you need:

    cd review
    git checkout pr/123
    
    cd ../hotfix
    git checkout -b fix/urgent-bug hotfix

    The worktree directories are just namespaces. What matters is which branch is checked out inside them.
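
    git worktree list shows the mapping at a glance (paths and hashes here are illustrative):

    git worktree list
    # /home/you/repo.git           (bare)
    # /home/you/repo.git/main      1a2b3c4 [main]
    # /home/you/repo.git/review    1a2b3c4 [pr/123]
    # /home/you/repo.git/hotfix    5d6e7f8 [fix/urgent-bug]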

    The Core Workflow

    Once you have worktrees set up, your daily workflow changes in a subtle but significant way. Instead of using git checkout to switch contexts, you use cd to change directories. This feels weird at first because cd seems too simple for something as important as switching branches. But that simplicity is the point.

    When you want to work on a feature, you navigate to its worktree and create a branch. When you want to review something, you navigate to your review worktree and check out that branch. When you need to make a hotfix, you navigate to your hotfix worktree and branch from there. Each context stays isolated and preserved. The mental model shift is from “I have one workspace and I change what’s in it” to “I have multiple workspaces and I move between them.”

    What You Need to Know

    Each worktree requires its own project setup. If your project has dependencies that need installing, you’ll run npm install or the equivalent in each worktree. This sounds like overhead, but it’s actually a feature—each worktree can have different dependencies installed, matching the branch you’re working on. When you switch between worktrees, you’re not wondering if your node_modules match your current branch state.

    Disk space is not a major concern. Worktrees share the git object database, so you’re not duplicating your entire repository history multiple times. You’re just duplicating working directories and the specific files checked out on each branch.

    You also need to be aware that git commands run in a worktree context. When you run git status in a worktree, it shows the status of that worktree only. When you commit, you’re committing in that worktree’s context. But branch operations—creating branches, merging, rebasing—affect the shared repository and are visible across all worktrees.

    When Worktrees Make Sense

    Worktrees aren’t for everyone or every project. They make the most sense when you frequently switch contexts—reviewing PRs, jumping to urgent bugs, managing multiple features simultaneously. If you typically work on one thing at a time until it’s done, traditional branch switching might be simpler.

    The sweet spot is when you need to maintain multiple parallel workstreams in a single repository. This could be because you’re collaborating with others and need to review their work frequently. It could be because you’re managing both development and production hotfixes. Or it could be because you’re orchestrating multiple AI agents, each working on different aspects of the codebase simultaneously.

    In part two, we’ll dive into the practical patterns for managing these parallel workstreams, keeping them synchronized with your main branch, and merging work back together cleanly.

  • Why Your Boss Is Right About PHP’s empty()

    Why Your Boss Is Right About PHP’s empty()

    The function shows up everywhere: in Stack Overflow snippets, in AI-generated code, in that “quick fix” someone pushed at 5pm Friday. It’s convenient, it’s short, and it silently breaks your code in ways that are hard to debug. If your boss, your tech lead, or that senior developer keeps telling you not to use it, they’re right. Here’s why, written down so you never have to be reminded again.

    What empty() Actually Does

    Most developers know empty() checks for falsy values. What they might not realize is what it considers falsy:

    empty(null);        // true
    empty(false);       // true
    empty(0);           // true
    empty("0");         // true
    empty("");          // true
    empty([]);          // true
    empty($undefined);  // true

    That last line is the biggest problem. When empty() encounters an undefined variable, it quietly returns true. Because empty() is a language construct rather than a normal function, it suppresses the undefined-variable warning that an ordinary read of $undefined would raise. This is almost always unexpected behavior.
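
    The PHP manual defines empty() in terms of two other checks, which makes the trap explicit:

    // empty($var) is essentially shorthand for:
    !isset($var) || $var == false;
    // "not set at all" and "loosely false" collapse into a single true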

    The Real Danger

    Consider this WordPress code where a developer switches two letters:

    $payment_gateway = get_option('payment_gateway');
    
    if (empty($paymentGateway)) {  // Wrong variable name
        $payment_gateway = 'test_mode';
    }

    The code runs without error. The typo ships to production. Real payments start going through in test mode. Almost any other check would have raised an “undefined variable” warning and caught the bug immediately.

    Beyond typos, empty() masks function failures by treating them like empty values. WordPress’s get_post_meta() returns false when a post doesn’t exist, but returns an empty string when the field is genuinely empty. With empty(), these two very different states look identical:

    // Bad: Can't distinguish "not found" from "empty value"
    $price = get_post_meta($post_id, 'price', true);
    if (empty($price)) {
        return 'Price not set';  // But what if $post_id was invalid?
    }
    
    // Good: Check for false (error) separately from empty string
    $price = get_post_meta($post_id, 'price', true);
    if ($price === false) {
        return 'Product not found';  // Invalid post ID
    }
    if ($price === '' || $price === '0') {
        return 'Price not set';  // Legitimate empty value
    }

    What To Do Instead

    Use explicit checks.

    Writing if ($price === null || $price === '') is barely longer than if (empty($price)), but it generates warnings when you make typos instead of silently hiding them.
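
    Replay the earlier typo with an explicit check and PHP points straight at it (PHP 8 raises a warning; older versions a notice):

    $payment_gateway = get_option('payment_gateway');

    if ($paymentGateway === null) {   // Warning: Undefined variable $paymentGateway
        $payment_gateway = 'test_mode';
    }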

    The truthiness check !$var is better than empty() because it generates warnings for undefined variables, but whether it works correctly depends on your API’s design. Laravel’s Model::find() returns null for missing records and throws exceptions for actual errors, so if (!$user) works as expected. WordPress functions often return different falsy values for different purposes. For example, get_post_meta() returns false for an invalid post but an empty string for a missing field. In that case, !$price can’t distinguish between “the post doesn’t exist” and “the price is empty,” so you need explicit checks: if ($price === false) for errors versus if ($price === '') for empty values.
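
    For the Laravel case mentioned above, a minimal sketch (assuming a standard Eloquent model named User):

    $user = User::find($id);   // Eloquent returns null when no record matches

    if (!$user) {
        abort(404);            // safe: null is the only falsy value find() can return
    }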

    The one exception is checking array keys that might not exist, like form submissions or configuration arrays. Though even here, null coalescing is usually clearer:

    // Works, but obscures what you're checking
    if (!empty($_POST['subscribe'])) {
        subscribe_user_to_newsletter();
    }
    
    // Clearer: explicitly provides a default for missing key
    if (($_POST['subscribe'] ?? false)) {
        subscribe_user_to_newsletter();
    }
    
    // WordPress: Feature flags in configuration
    $features = get_option('plugin_features', []);
    if (($features['beta_mode'] ?? false)) {
        enable_beta_features();
    }

    The ?? operator makes it obvious you’re providing a default for a potentially missing key.

    The Bottom Line

    empty() treats undefined variables as normal values, quietly returning true where almost any other check would raise a warning. This silently swallows the errors that matter most: typos, refactoring mistakes, and function failures. These are the bugs you’ll debug at 2am wondering why there’s no error message, no stack trace, no indication anything went wrong. Use explicit checks instead and let your code fail loudly when something breaks.

  • How the Internet Finds Your Server

    How the Internet Finds Your Server

    As a web developer, I’ve been working on websites without really understanding DNS. DNS has been coming up more frequently at work and I realized it was time to dig deeper.

    This series is my journey to truly understand DNS from the ground up. If you’re a developer who’s ever felt lost when DNS issues arise, let’s learn together. We’ll start with the fundamentals and build up to troubleshooting complex issues like pros.

    The Problem DNS Solves

    Let’s start with the basics. Computers communicate using IP addresses – numbers like 192.0.2.1 or 2001:db8::1. But imagine telling your users to visit 192.0.2.1 to see your website. Not exactly memorable, right?

    DNS (Domain Name System) is the translation service that converts human-friendly names like hackipatch.com into the IP addresses that computers actually use. Think of it like your phone’s contacts app – you don’t memorize phone numbers anymore, you just tap on “Mom” and your phone handles the number.

    But here’s where it gets interesting: unlike your contacts app, DNS isn’t stored in one place. It’s distributed across thousands of servers worldwide, and the path from domain name to IP address involves multiple steps that can each fail in their own special way.

    The DNS Resolution Journey

    When you type example.com into your browser, here’s what actually happens:

    Step 1: Local Cache Check

    Your computer first checks if it already knows the answer. Operating systems cache DNS responses to speed things up. If you visited the site recently, you might get your answer here.

    Step 2: Recursive Resolver

    If there’s no cached answer, your computer asks a recursive resolver (usually your ISP’s DNS server or a public one like 8.8.8.8). This server does the heavy lifting of finding the answer for you.

    Step 3: Root Servers

    The resolver starts at the top of the DNS hierarchy with root servers. There are 13 root server addresses (a.root-servers.net through m.root-servers.net), though each address represents hundreds of actual servers worldwide.

    The root server doesn’t know where example.com is, but it knows who handles .com domains.

    Step 4: TLD Servers

    Next stop: the Top-Level Domain (TLD) servers. For .com, these servers are run by Verisign. The TLD server doesn’t know the IP for example.com either, but it knows which nameservers are authoritative for that domain.

    Step 5: Authoritative Nameservers

    Finally, we reach the authoritative nameservers for example.com. These servers have the actual DNS records and can give us the IP address.

    Step 6: The Response Journey

    The IP address travels back through the same path, getting cached at each step. That’s why DNS changes aren’t instant – all those caches need to expire.

    Your First DNS Query

    Enough theory. Let’s see this in action with dig, the Swiss Army knife of DNS tools.

    First, make sure you have dig installed:

    # macOS/Linux - usually pre-installed
    dig -v
    
    # Ubuntu/Debian
    sudo apt-get install dnsutils
    
    # macOS with Homebrew
    brew install bind

    Now let’s do a basic query:

    dig example.com

    You’ll see output like this:

    ; <<>> DiG 9.10.6 <<>> example.com
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 54789
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
    
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 4096
    ;; QUESTION SECTION:
    ;example.com.                   IN      A
    
    ;; ANSWER SECTION:
    example.com.            78641   IN      A       93.184.216.34
    
    ;; Query time: 28 msec
    ;; SERVER: 8.8.8.8#53(8.8.8.8)
    ;; WHEN: Thu Nov 07 10:23:45 PST 2024
    ;; MSG SIZE  rcvd: 56

    Let’s break down the important parts:

    • QUESTION SECTION: Shows what we asked for (A record for example.com)
    • ANSWER SECTION: The actual answer (IP address 93.184.216.34)
    • 78641: That’s the TTL in seconds (about 22 hours). This answer will be cached for that long (there’s a quick demo right after this list)
    • SERVER: Which DNS server answered us (8.8.8.8 in this case)
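
    That TTL is doing real work. Run the same query twice and watch it count down: proof the second answer came from a cache (numbers illustrative):

    dig example.com +noall +answer
    # example.com.   78641   IN   A   93.184.216.34
    dig example.com +noall +answer
    # example.com.   78630   IN   A   93.184.216.34   <- TTL shrinking: cached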

    Following the DNS Trail

    Want to see the full journey? Use dig +trace:

    dig +trace example.com

    This shows each step of the resolution:

    ; <<>> DiG 9.10.6 <<>> +trace example.com
    ;; global options: +cmd
    .                       2952    IN      NS      m.root-servers.net.
    .                       2952    IN      NS      a.root-servers.net.
    .                       2952    IN      NS      b.root-servers.net.
    [... more root servers ...]
    
    ;; Received 239 bytes from 8.8.8.8#53(8.8.8.8) in 28 ms
    
    com.                    172800  IN      NS      a.gtld-servers.net.
    com.                    172800  IN      NS      b.gtld-servers.net.
    [... more .com servers ...]
    
    ;; Received 1170 bytes from 192.5.5.241#53(f.root-servers.net) in 72 ms
    
    example.com.            172800  IN      NS      a.iana-servers.net.
    example.com.            172800  IN      NS      b.iana-servers.net.
    
    ;; Received 195 bytes from 192.52.178.30#53(k.gtld-servers.net) in 124 ms
    
    example.com.            86400   IN      A       93.184.216.34
    
    ;; Received 56 bytes from 199.43.135.53#53(a.iana-servers.net) in 20 ms

    Each section shows a hop in our journey:

    1. Root servers pointing to .com servers
    2. .com servers pointing to example.com’s nameservers
    3. example.com’s nameservers giving us the final IP

    Key Concepts That Matter

    TTL (Time To Live)

    TTL is how long (in seconds) a DNS answer may be cached. A lower TTL means changes propagate faster but generates more DNS queries; a higher TTL means better performance but slower updates.

    Common TTL values:

    • 300 (5 minutes): When you’re about to make changes
    • 3600 (1 hour): Good default for most records
    • 86400 (24 hours): For stable records that rarely change

    Authoritative vs Recursive Servers

    • Authoritative servers have the actual DNS records for a domain
    • Recursive servers find answers by asking other servers

    Your ISP runs recursive servers. Your domain registrar or DNS host runs authoritative servers.
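
    Not sure which servers are authoritative for a domain? Ask for its NS records:

    dig yourdomain.com NS +short
    # ns1.yourdnshost.com.
    # ns2.yourdnshost.com.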

    Why DNS Changes Aren’t Instant

    When you update a DNS record, here’s what needs to happen:

    1. Your authoritative servers get the update (instant)
    2. Cached answers need to expire (up to the old TTL)
    3. Some ISPs ignore TTL and cache longer (annoying but real)

    Pro tip: Lower your TTL a day before making changes, then raise it back after.

    Quick Wins

    Here are practical things you can do right now:

    Check if DNS is Your Problem

    # Compare your local DNS to Google's
    dig @8.8.8.8 yourdomain.com
    dig yourdomain.com
    
    # Check what authoritative servers say
    dig @ns1.yourdnshost.com yourdomain.com

    If these show different IPs, you’ve got a caching issue.

    See All Records for a Domain

    dig yourdomain.com ANY

    Note: Some servers restrict ANY queries now, but it’s worth trying.
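
    If the server refuses or returns a stub instead of real records, query the common types individually:

    dig yourdomain.com A
    dig yourdomain.com AAAA
    dig yourdomain.com MX
    dig yourdomain.com TXT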

    Debug Slow DNS

    # Time your DNS queries
    dig yourdomain.com | grep "Query time"
    
    # Try different DNS servers
    dig @1.1.1.1 yourdomain.com    # Cloudflare
    dig @8.8.8.8 yourdomain.com    # Google
    dig @9.9.9.9 yourdomain.com    # Quad9

    What’s Next?

    Now that you understand the basics of DNS resolution, you’re ready to dive into specific record types. In the next post, we’ll explore A, AAAA, and CNAME records – the workhorses of DNS that you’ll use daily.

    For now, try running dig +trace on your own domain. See if you can identify each step in the resolution process. And next time DNS issues pop up at work, you’ll know exactly where to start looking.

    Remember: DNS isn’t magic, it’s just a distributed phone book. Once you know how to read it, debugging becomes much simpler.