AI Muses in the Afternoon: Experimenting with the Future

The past few days have found me back in the digital weeds, deep-diving into the latest evolution of AI. It’s been a whirlwind of “Audio Overviews”—essentially custom podcasts generated on a whim—and pushing the boundaries of what I can do with my own photography. I’ve even been feeding decades of genealogical research into these tools to see if I can coax out something resembling a cinematic video presentation.

With every iteration, the “aha!” moments come faster. I’m discovering sharper methods for presenting data and more efficient ways to prime the pump to get the AI started. I’ve even taken to using YouTube tutorials as direct sources for these podcasts, effectively letting the AI teach me how to use the AI. It’s a bit meta, but it’s certainly flattening the learning curve.


Stop Getting Vanilla Results from AI

The secret sauce, I’ve found, is all in the prompting. Most people get “vanilla” results because they ask “vanilla” questions. My recent deep dives have amounted to a crash course in advanced techniques for getting more out of these tools.

I’ve been experimenting with the “prompt loophole”—the idea of repeating key instructions to force the model’s focus—and a four-step “prompt chain” designed to transform a single spark of an idea into a full spectrum of content angles. Whether it’s using Google Gemini for personalized assistance or leveraging NotebookLM for grounded, source-specific research, the goal is the same: automated media creation that actually feels substantive.
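To make the two techniques concrete, here’s a minimal sketch of how I think about them. The instruction text, the step wording, and the four-step breakdown below are all my own hypothetical examples, not anything the tools prescribe; in practice each step’s reply gets pasted back in as the next step’s input by hand.

```python
# A toy sketch of the "prompt loophole" and a four-step "prompt chain".
# All wording here is illustrative; real prompts would be tuned to the task.

KEY_INSTRUCTION = "Write for a family-history audience; cite only the uploaded sources."

def with_loophole(prompt: str) -> str:
    """'Prompt loophole': repeat the key instruction before and after
    the request to keep the model focused on it."""
    return f"{KEY_INSTRUCTION}\n\n{prompt}\n\nRemember: {KEY_INSTRUCTION}"

# Four-step chain: one spark of an idea fans out into content angles,
# then narrows back down into a finished script.
CHAIN = [
    "Step 1 - Expand this idea into five distinct content angles: {seed}",
    "Step 2 - For the strongest angle, outline a short episode: {seed}",
    "Step 3 - Draft the narration script from that outline: {seed}",
    "Step 4 - Rewrite the script in a warm, conversational podcast tone: {seed}",
]

def build_chain(seed: str) -> list[str]:
    """Produce the four prompts; each model reply would become
    the next step's seed."""
    return [with_loophole(step.format(seed=seed)) for step in CHAIN]

prompts = build_chain("Brinegar Cabin's place in our family's story")
print(len(prompts))                        # one prompt per chain step
print(prompts[0].count(KEY_INSTRUCTION))   # the key instruction appears twice
```

The point of the sketch is the shape, not the code: the loophole wraps every request in the same anchor instruction, and the chain forces the model through widen-then-narrow stages instead of asking for the finished product in one shot.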


Finding an Image Style for Mountain Dreams

In between the data crunching, I’ve also spent time playing with aesthetics. I wanted to see if I could find a specific visual signature for the Appalachian side of my work by using my own photographs as the foundation. It’s been a fascinating experiment in style transfer and atmospheric layering.

Brinegar Cabin, Blue Ridge Parkway, North Carolina

Mabry Mill, Blue Ridge Parkway, Virginia

Putting the Ghost in the Machine

Lately, I’ve felt the itch to revisit the genealogy book I compiled back in 1998 for my Dad’s family reunion. In those days, I relied on a standalone genealogy program that, while clunky by today’s standards, produced these wonderful, data-rich reports. I hadn’t realized how much I missed that format until I started looking back. The bulk of that original book was filled with Ahnentafel Reports—those structured, generation-by-generation mini-biographies of every nuclear family.

Trying to rework that project with nearly 30 years of additional research has been a bit of a struggle. Modern online databases are great for searching, but their printing and reporting capabilities are honestly a letdown compared to those old-school programs.

To bridge the gap, I started building a NotebookLM workspace. I uploaded PDFs of family pages printed directly from Ancestry; they aren’t pretty to look at, but they provided the “grounded” data the AI needs to understand the family tree. Since I still maintain the Facebook group for that same reunion circle, I’ve been testing the new Cinematic Video Overview feature to see how it handles our history. I’ve even experimented with the Audio Overviews to create narrated soundtracks for videos featuring our restored and colorized family photographs.

It feels a bit like putting the ghost back into the machine—taking cold data and giving it a voice and a picture.