There’s a broker offering a new product that I’d have been all over 30 years ago. I think they’re calling it a “generated asset,” where they create a personal stock index just for you (based on a prompt to an AI), and then create an imaginary index fund for that imaginary index, and then (I assume) invest in the underlying stocks on your behalf.
Of course, you could do that yourself, but it would be awkward, probably expensive, and definitely fiddly, with a need to track the index over time.
So, instead of that (I assume), the company just promises to pay you the value of your index, and (I assume) hedges that promise by holding the underlying shares. And since this synthetic investment is a service, they can lump all these promises together, invest in all the underlying shares, and be generally sure that their hedge will track closely enough that even very large stock moves won’t cost them more than investors are paying for access to the service.
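To make my assumption concrete: if each “generated asset” is really just a weighted basket of tickers, then the broker’s hedge is just the sum of everyone’s baskets, and what it owes each customer is that customer’s basket scaled by price moves. Here’s a toy sketch of that mechanic in Python (the names, tickers, and prices are all made up, and this is my guess at how it works, not anything the broker has published):

```python
# Toy sketch of the mechanics I'm assuming (not anything the broker has
# published): each customer's "generated asset" is a weighted basket of
# tickers, and the broker's hedge is the sum of all the baskets.
from collections import defaultdict

# Hypothetical personal indices: {customer: {ticker: dollars invested}}
personal_indices = {
    "alice": {"AAPL": 600.0, "XOM": 400.0},
    "bob":   {"XOM": 300.0, "KO": 700.0},
}

def aggregate_hedge(indices):
    """Lump every customer's basket together into one portfolio to hold."""
    hedge = defaultdict(float)
    for basket in indices.values():
        for ticker, dollars in basket.items():
            hedge[ticker] += dollars
    return dict(hedge)

def owed_to_customer(basket, start_prices, current_prices):
    """What the broker promised to pay: each position scaled by its price move."""
    return sum(dollars * current_prices[t] / start_prices[t]
               for t, dollars in basket.items())

if __name__ == "__main__":
    print(aggregate_hedge(personal_indices))
    # {'AAPL': 600.0, 'XOM': 700.0, 'KO': 700.0}

    start = {"AAPL": 200.0, "XOM": 100.0, "KO": 50.0}  # made-up prices
    later = {"AAPL": 220.0, "XOM": 90.0, "KO": 55.0}
    print(owed_to_customer(personal_indices["alice"], start, later))
    # 660.0 + 360.0 = 1020.0
```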
As I said, I’d have been all over that 30 years ago. Now, not so much. I’ve gotten lazier and more conservative with my investments as I’ve gotten older.
But it did make me think about what I’d want to invest in if I were to create a synthetic index, and it occurred to me that the companies putting the least money into AI crap will do much better over the next few years. So I asked confer.to if it could give me “a list of large corporations ordered by how much they talk about AI in their reports to investors.”
Sadly, the answer was no:
Based on the available data, I can’t provide a specific ranked list of large corporations ordered by how much they discuss AI in their earnings reports. While the information confirms that AI is a major topic in 2025 earnings calls across the S&P 500, the detailed company-by-company ranking you’re looking for isn’t publicly available in the sources I checked.
Here’s what we do know:
AI is a dominant topic: In Q3 2025, “AI” was cited on 306 S&P 500 earnings calls, the highest number in the past decade.
Significant increase: This is a large jump from the five-year average of 136 mentions and the 10-year average of 86.
Sector leaders: The Information Technology (95%) and Communication Services (95%) sectors have the highest percentage of companies mentioning AI.
Market impact: Companies that mentioned AI in their Q3 2025 calls saw a higher average stock price increase (13.9%) compared to those that didn’t (5.7%) since the end of 2024.
While sources like Yahoo Finance and MacroMicro track the overall trend and volume of AI mentions, they don’t publish a specific, ordered list of corporations ranked by the frequency or depth of their AI discussions in earnings reports.
This type of granular, ranked data is typically compiled by specialized financial data and analytics firms (like FactSet, which provided the data to Yahoo Finance) and is often part of a paid subscription service.
Perhaps the specialized AI tool that this broker uses to create its imaginary indices has access to the fine-grained data about AI mentions in earnings calls with investors. But I don’t care enough to go to the trouble of looking.
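(If I did care, a crude version wouldn’t be hard to hack together: save earnings-call transcripts or annual reports as plain text and count mentions. The sketch below assumes you’ve already downloaded transcripts into a local folder, one text file per ticker, which is the part that’s actual work; it just counts occurrences of “AI” and “artificial intelligence,” a blunt proxy for how much a company talks about it.)

```python
# Crude sketch: rank companies by how often "AI" comes up in their
# earnings-call transcripts. Assumes transcripts have already been saved
# locally as plain text, e.g. transcripts/AAPL.txt, transcripts/XOM.txt.
import re
from pathlib import Path

AI_PATTERN = re.compile(r"\b(AI|artificial intelligence)\b", re.IGNORECASE)

def ai_mention_counts(transcript_dir="transcripts"):
    """Count AI mentions per company, keyed by the file's ticker name."""
    counts = {}
    for path in Path(transcript_dir).glob("*.txt"):
        text = path.read_text(errors="ignore")
        counts[path.stem] = len(AI_PATTERN.findall(text))
    return counts

if __name__ == "__main__":
    # Least AI-obsessed companies first: the ones I'd (half-seriously)
    # want in my synthetic index.
    for ticker, n in sorted(ai_mention_counts().items(), key=lambda kv: kv[1]):
        print(f"{ticker}\t{n}")
```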
Poking around at the St. Louis Fed’s FRED graphing tool (to come up with a graphic to include for this post), though, led me to the graph at the top, which is of the “Nasdaq Global Artificial Intelligence and Big Data Index,” which “is designed to track the performance of companies engaged in the following themes: Deep Learning, NLP, Image Recognition, Speech Recognition & Chatbots, Cloud Computing, Cybersecurity and Big Data.”
So one option to get what I want would be to just go short on that index.
Turns out Cory Doctorow and I think a lot alike about the AI bubble, but he also has stuff to say about how to speed along the popping of the bubble, which would be a good thing. (Bubbles that pop sooner do less damage when they do.)
so I’m going to explain what I think about AI and how to be a good AI critic. By which I mean: “How to be a critic whose criticism inflicts maximum damage on the parts of AI that are doing the most harm.”
My father was great. This post isn’t really about all that, though. It’s about one (or two) specific things my dad did that have proven to be very beneficial to me.
One was that my dad was big on looking at things. I assume this mostly came from his being an ornithologist, which to a great extent involves looking at little tiny things some distance away.
He was always encouraging me to look for and look at things in the distance. On long car trips he’d often encourage me to watch for things like the water towers with the names of each town we were approaching. I’m sure part of that was just to keep me occupied with something other than complaining about being in the car, but part of it was getting me good at watching for things coming over the horizon, a skill that has proven itself of great value, even though I’m not a fighter pilot, or a lookout in a ship’s crow’s nest.
The other thing, closely related, was my father’s enthusiasm for praising specific things, and spotting things at a distance was one of them. Anytime I’d spot something early—especially if I spotted it before he did—he’d say, “Good eye!” He did that a lot when I was a boy, but he never really stopped. I remember, just a few years before he died, I spotted a Hooded Warbler outside the house where he was living in Kalamazoo and drew a “Good eye!”
Even though I don’t have kids, I try to do this with other folks around me. A little praise never hurt anyone, and being able to spot things in the distance is always useful.
See the horse in the picture at the top? Maybe this will help a little:
Back in May, I wrote an article about AI journaling. The idea (which I had stolen from some YouTuber) was that you write your journal entries as a brain dump—just lists of stuff—into an LLM, and then ask the LLM to do its thing.
. . . ask the LLM to organize those lists: Give me a list of things to do today. Give me a list of blind spots I haven’t been thinking of. Suggest a plan of action for addressing my issues. Tell me if there’s any easy way to solve multiple problems with a single action.
Now, I think it’s very unlikely that an LLM is going to come up with anything genuinely insightful in response to these prompts. But here’s the thing: Your journal isn’t going to either. The value of journaling is that you’re regularly thinking about this stuff, and you’re giving yourself a chance to deal with your stresses in a compartmented way that makes them less likely to spill over into areas of your life where they’re more likely to be harmful.
I still think that’s all true, and I still think an LLM might be a useful journaling tool. My main concern had to do with privacy. I didn’t want to provide some corporation’s LLM with all my hopes, dreams, fears, and best ideas, and hope that none of that data would be misused. I mean, bad enough if it was just subsumed into the LLM’s innards and used as a tiny bit of new training data. Much worse if it was used to profile me, so that the AI firm could use my ramblings about my cares as an entryway into selling me crap. (And you know that selling you crap is going to be phase two of LLM deployment. Phase three is going to be convincing you to advocate and vote for the AI firm’s preferred political positions.)
Anyway, I figured it wouldn’t be long before local LLMs (where I’d actually be in control of where the data went) would be good enough to do this stuff, and I was willing to wait.
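For the record, the local version I was imagining isn’t much code, assuming you’re running something like Ollama on your own machine. This is just a sketch under that assumption (the model name, file path, and prompts are placeholders, and the endpoint is Ollama’s default local API): it pastes a journal entry into a prompt along with the organizing questions and prints whatever comes back, and nothing leaves your machine.

```python
# Sketch of a fully local journaling loop: everything stays on your own
# machine. Assumes a local Ollama server on its default port and some
# locally pulled model (the model name here is just a placeholder).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3"  # placeholder; use whatever model you've pulled locally

PROMPTS = [
    "Give me a list of things to do today.",
    "Suggest a plan of action for addressing my issues.",
    "Tell me if there's any easy way to solve multiple problems with a single action.",
]

def ask_local_llm(journal_entry: str, question: str) -> str:
    """Send one journal entry plus one organizing question to the local model."""
    payload = json.dumps({
        "model": MODEL,
        "prompt": f"Here is today's journal brain dump:\n\n{journal_entry}\n\n{question}",
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    entry = open("journal/2026-01-18.md").read()  # hypothetical Obsidian note
    for q in PROMPTS:
        print(f"\n## {q}\n{ask_local_llm(entry, q)}")
```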
But I didn’t even have to wait that long! A couple of days ago, I saw an article in Ars Technica describing how Moxie Marlinspike of Signal fame had jumped out ahead with a really practical tool: confer.to. It’s a privacy-first AI tool built so that your conversation with the LLM is end-to-end encrypted in a way that keeps it genuinely private.
I’ve started using it for journaling exactly as I described. Because of the way privacy is built into Confer, I can’t actually keep my journal within Confer—all the content is lost when I end the session. So, I’m keeping the journal entries in Obsidian, and then copying each entry into Confer when I’m ready to get its take on what I’ve written.
[Updated 2026-01-20: This turns out not to be true. Conversations in Confer do last through browser restarts. Until I delete the key for that session, I can go back and see everything that was in that session.]
I wanted some sort of graphic for the post, and asked Confer to suggest something. It came up with 5 ideas, including this one, which (bonus) actually illustrates my process:
Anyway, I’ve already written three journal entries that I otherwise wouldn’t have, and gotten some mildly entertaining commentary on them—some of which may rise to the level of useful. We’ll see.
(Asked to comment on a previous draft of this post, Confer.to mentioned the “Give me a list of blind spots I haven’t been thinking of” prompt above, and said, “But LLMs can’t actually know your blind spots — they can only reflect patterns in what you’ve said.” Which I know. And so, of course, once I started using an actual AI tool instead of just an imagined one, that ended up not being something I asked for.)
If I keep doing this (and I think I will), I’ll follow up with more stories from the AI-enhanced journaling trenches.
Next weekend is going to be pretty cold in Minneapolis. Maybe cold enough to convince some ICE goons that they’d be better off on disability in Kentucky.
I mean, every ICE goon has probably slipped on the ice at least once. Probably every one of those falls could be turned into a disability claim.
I am (just barely) old enough to remember the Black Panthers in the 1960s, when a group of black people tried to carry legal firearms to protect themselves, before they were mostly murdered by the police, the FBI, and one another.
I also remember the 1980s, when the NRA was trying to convince all marginalized groups (blacks, women, lesbians, gays, socialists, etc.) that arming themselves was a great idea. The NRA was sincere, I think—they just wanted more people to have guns.
Most people, especially black people, were well aware that walking around armed would make it much more likely that they’d be killed by the police. (They remembered what happened to the Black Panthers, presumably better than I did.)
Over the last couple of years, and especially over the last few days, I think perspectives are changing. First, a lot of white people are walking around armed, and even killing people, with minimal consequences. Second, the increasingly fascist police have been killing unarmed people at increasing rates, and looking like they’ll not only get away with it but are glorying in it.
This article, which had a really annoying headline, turns out to have some really great thinking.
In particular, the political perspective it is describing has more than a little overlap with the stuff I was writing about in my articles at Wise Bread.
An economic vision that … encompasses antimonopoly policies, right to repair and regulatory changes to smooth the path for people to start businesses, buy and work land, even build their own houses and invent things.
Steven suggested that I should revisit my Wise Bread posts. There’s a lot of useful stuff there, though it had seemed a bit less relevant over the last few years. (I started writing in June of 2007, right at the start of the Great Financial Crisis, and carried on for 10 years.) But with the government having gone all-in on fascism, racism, and gangsterism this year, a lot of those themes are feeling much more on point than they had for a while.
So I think I’ll do that. A lot of my Wise Bread posts still feel just right. On a few, my perspective has changed a bit. I’ll write some new posts to talk about what’s changed.