By Jinlu Wang
There’s a moment every builder knows. You’ve shipped the thing. It’s live. Real people are using it. And then it breaks in a way you never designed for, at the worst possible time, in front of everyone.
Mine was a duplicate alert loop during a live trading session. Forty minutes of my bot firing the same wrong signal while my members watched, and the market moved without us. I was in the logs, my phone was blowing up, and I had the specific sick feeling of someone whose confidence has just been stress-tested in public.
I fixed it. Rebuilt the deduplication logic, added safeguards I should have put in from the start. But those forty minutes are the reason I don’t trust any AI pitch that doesn’t account for failure.
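For readers who build their own alerting, the shape of that fix is worth sketching. The essay doesn't show the actual code, so this is a minimal, hypothetical version of a deduplication guard: the key structure (symbol plus signal type) and the cooldown window are illustrative assumptions, not the bot's real logic.

```python
import time

class AlertDeduplicator:
    """Suppress repeat alerts fired within a cooldown window.

    A minimal sketch: the (symbol, signal) key and the cooldown length
    are illustrative choices, not the actual logic from the bot above.
    """

    def __init__(self, cooldown_seconds=300):
        self.cooldown = cooldown_seconds
        self._last_sent = {}  # (symbol, signal) -> time the alert last fired

    def should_send(self, symbol, signal, now=None):
        now = time.time() if now is None else now
        key = (symbol, signal)
        last = self._last_sent.get(key)
        if last is not None and now - last < self.cooldown:
            return False  # same signal inside the window: suppress it
        self._last_sent[key] = now
        return True
```

The point of the guard is that the dedup state lives outside the signal-generation path, so a logic bug upstream produces one alert, not forty minutes of them.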
I’ve spent the past year building automated trading systems and web applications: a dual-timeframe swing bot, a pivot-level tracker, an options flow monitor, and dashboards that pull live brokerage data and track dividend recovery across multiple accounts. None of it is academic. These systems run continuously on cloud infrastructure, and the members of my paid trading community use them to make real decisions in real markets. When something breaks, I hear about it immediately. That feedback loop has taught me more about AI in finance than anything else I’ve encountered.
I’m sharing this because I’m about to say some things about AI and investing that will sound skeptical, and I want to be clear: the skepticism comes from the inside, not the outside.
—
The fintech industry has a problem nobody wants to say plainly: most of what’s currently being sold as “AI-powered” is a model wrapper bolted onto a product that existed before, marketed to investors who are understandably eager to find the right horse in this race.
I understand why it happens. The pressure to speak the language of the moment is real, and the language of the moment is AI. But when you’ve spent months building systems that fail unpredictably, fixing them at 11 pm, and rebuilding them better, your tolerance for vague capability claims drops to zero.
The question I ask about any fintech company claiming an AI edge is no longer “what can your AI do?” It’s “what happens when it’s wrong?” Because it will be wrong. In my experience, how a company answers that second question tells you almost everything about whether you’re looking at a real business or an expensive experiment dressed up for a fundraiser.
Most can’t answer it cleanly. That gap is where a significant amount of the current mispricing lives.
—
The jump from a bot that alerts you to a system that reasons, plans, and acts is not a software upgrade. It’s a different problem entirely.
I’m currently building toward that: an agentic system designed to work across multiple data sources and execute without me in the loop. The process has been humbling. You’re asking the system to handle ambiguity at every step, to make judgment calls in sequences where one wrong assumption compounds through everything downstream, and to fail in ways that are recoverable rather than catastrophic. In a trading context, that last requirement is the whole game. Markets don’t pause because your agent made a wrong assumption at step two.
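The "fail recoverably, not catastrophically" requirement can be made concrete with a small sketch. This is not a real agent framework, and the step names below are invented for illustration; the idea is just that every step's output is validated before the next step runs, so a wrong assumption is caught where it happened instead of compounding downstream.

```python
def run_pipeline(steps, state):
    """Run agent steps so a failure is recoverable, not compounding.

    Each step is a (name, fn, check) triple: fn transforms the state,
    and check validates the result before the next step is allowed to
    run. Names and structure are illustrative, not a real framework.
    """
    history = [state]  # every validated state, so we can always roll back
    for name, fn, check in steps:
        try:
            new_state = fn(state)
        except Exception as exc:
            return {"ok": False, "failed_at": name, "reason": str(exc),
                    "last_good_state": history[-1]}
        if not check(new_state):
            # A wrong assumption is stopped here instead of flowing
            # through every downstream step.
            return {"ok": False, "failed_at": name,
                    "reason": "validation failed",
                    "last_good_state": history[-1]}
        history.append(new_state)
        state = new_state
    return {"ok": True, "state": state}
```

A pipeline that computes a negative position size at step two halts at step two with the last known-good state intact, rather than sending a broken order at step five.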
What I keep learning is that the companies that will actually win in agentic AI are solving a reliability problem, not a capability problem. Reliability doesn’t demo well. It doesn’t make headlines. But a system that behaves predictably under conditions nobody anticipated is worth more than one that performs brilliantly in controlled environments, and the gap between those two things is where most AI projects currently live.
This shapes how I evaluate companies in this space. An impressive demo is not the signal. The boring, unglamorous work of engineering for failure: that’s the signal. And it’s genuinely hard to see from the outside.
—
Here’s my investment view, stated plainly.
The application layer of AI is exciting and nearly impossible to underwrite with confidence at current valuations. The space moves too fast, competitive advantages compress too quickly, and the half-life of any specific product edge is short enough to make long-term positioning feel more like speculation than investing.
The infrastructure underneath is a different conversation.
I’ve been researching optical infrastructure extensively: the companies building transceivers and coherent technology that physically connect AI data centers at the speeds these workloads require. These aren’t household names. They don’t have consumer products. But hyperscalers cannot build without them, and this buildout cycle has years of runway remaining.
The same logic extends to energy. The data centers being planned and funded right now need reliable baseload power at a scale that has quietly made nuclear a serious investment conversation again -not for ideological reasons, but purely practical ones. I’ve tracked that theme developing for over a year. It isn’t a consensus yet. That’s the point.
The investors who find real returns in this cycle won’t be the ones who moved fastest on the most visible names. They’ll be the ones who asked what those names couldn’t exist without and positioned themselves there instead.
—
What do I expect from covering HumanX 2026 in San Francisco?
The most important conversations in AI don’t happen on stage; they happen between people who are actually building these systems, talking to each other without the performance layer that comes with a keynote slot. The reliability problem in agentic AI, the real economics of AI infrastructure, the tension between long-term inevitability and short-term valuation chaos: these are exactly the questions I’m working through in my own builds, and they’re the questions my audience is asking me.
There is no shortage of AI coverage that describes what’s happening. There’s a real shortage of coverage written by someone who has also been in the logs at midnight, fixing a broken system before markets open.
That’s the perspective I’d bring to this event. And it’s the perspective I think is missing from most of what gets published about AI and investing right now.
—
*Jinlu Wang is an AI Editorial Strategist with Ubiq Broadcasting Corp and builds automated trading systems and web applications for financial markets. She runs Harp’s Trading, a paid investment research and trading community, and publishes institutional-style research covering AI infrastructure, energy, and commodity-linked technology themes.*