13 Comments
ToxSec

“Of course, this list might change and evolve. You might eventually need an LLM provider or a specific compute provider.”

The nice thing about staying subscribed is that we get the updates!! Great post, thanks.

Paul Iusztin

Yes! That's why AI is so hard: the game changes so fast. Or does it? Ultimately, if you stick to a boring tech stack, you realize you don't need many of the tools out there.

ToxSec

Yeah, there are only a few tool releases that really change the game. But riding the hype wave is fun, I admit.

Paul Iusztin

Haha, yes, sometimes it is, I admit! 😂

Meenakshi NavamaniAvadaiappan

Thanks for the good post 😊

Priya

It's really nice to see an article that presents a balanced, non-frenzied view of what really works in production. Question though: what about the actual Agent itself? You've recommended using LangChain as a utility framework, but are you also using it for the Agent, or spinning it up in a different way? FastMCP etc. is a really good way to get the MCP up. I noticed, though, that things like Codex CLI struggled with accessing the MCP server over HTTP (not that it's a regular use case for enterprise systems). Very interesting to learn about DBOS and Prefect - I've played with Temporal in the past, but the others are new, and I'll be sure to check them out.

Paul Iusztin

DBOS is new. Prefect is not that new; they have been in the data engineering field for 5+ years.

Well, building a ReAct agent or a Plan-and-Execute agent from scratch is not that complicated, so you can easily implement it on your own, which lets you adapt it to your own custom needs.
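
The core loop is roughly the sketch below (a rough sketch, not our actual code; `call_llm` and the tool registry are placeholders you would swap for your own LLM client and tools):

```python
# Rough sketch of a ReAct-style agent loop with placeholder names.
import json

def call_llm(messages: list[dict]) -> str:
    """Placeholder for whatever LLM client you use (OpenAI, Anthropic, local, ...)."""
    raise NotImplementedError

TOOLS = {
    "search_docs": lambda query: f"<top chunks for: {query}>",  # e.g. your retriever
}

SYSTEM = (
    "Answer the user. When you need a tool, reply with JSON: "
    '{"tool": "<name>", "input": "<arg>"}. Otherwise reply with the final answer.'
)

def react_agent(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        try:
            action = json.loads(reply)  # the model asked for a tool
        except json.JSONDecodeError:
            return reply  # the model gave its final answer
        if not isinstance(action, dict) or "tool" not in action:
            return reply
        observation = TOOLS[action["tool"]](action["input"])
        messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "Stopped: too many steps."
```

A Plan-and-Execute variant is the same idea, except you first ask the model for a list of steps and then run this loop per step.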

Re FastMCP: it's an amazing framework if you need to expose your app as an MCP server, but in our use case we were only MCP consumers. We didn't plan to expose our app as an MCP server.

Btw, FastMCP is made by the same team behind Prefect.
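
(For anyone curious, exposing a function as an MCP server with FastMCP looks roughly like the sketch below; the tool itself is a toy placeholder, not something from our codebase.)

```python
# Minimal sketch of an MCP server built with FastMCP (toy tool for illustration).
from fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```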

Priya

Hi Paul, I'm trying to create a sample RAG project that adheres as much as possible to the principles in this article. Do you have an opinion on the use of LCEL vs "manually" stringing together the retrieval with the prompt plus LLM? I'm wondering which way this article would lean. I know Opik provides traceability for LangChain etc., so I'm not sure if that would be a criterion?
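
To be concrete, by "manually" I mean roughly this kind of wiring (just a sketch; `retrieve` and `call_llm` are placeholders, not my actual project code):

```python
# Rough sketch of the "manual" retrieval -> prompt -> LLM wiring.

def retrieve(question: str) -> list[str]:
    """Placeholder: query the vector store and return the top-k chunks."""
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    """Placeholder: a single completion call to whichever provider is used."""
    raise NotImplementedError

PROMPT = """Answer the question using only the context below.

Context:
{context}

Question: {question}
"""

def answer(question: str) -> str:
    chunks = retrieve(question)
    prompt = PROMPT.format(context="\n\n".join(chunks), question=question)
    return call_llm(prompt)
```

versus expressing the same pipeline as an LCEL runnable.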

Priya

Thank you, this is very helpful!

Ilona Brinkmeier

That is so true, but unfortunately, if you have management that follows the hype and vibe-coding, they don't understand this and think you are old-fashioned and not innovative.

Paul Iusztin

Agree! I am lucky enough to work on a team full of tech people who understand this (after we had our bumps as well, haha).

Neural Foundry

Solid advice! The part about AI frameworks being better as utility libraries than architectural dictators really hit home. We had the exact same experience at my company: we spent weeks debugging LangChain abstractions instead of shipping features. The "boring tech" approach with the pgvector extension on Postgres is underrated; I don't know why people rush to specialized vector DBs when Postgres handles 90% of use cases perfectly fine. One thing I'd add: monitoring becomes even more critical when you strip away framework guardrails, so investing early in that LLMOps layer pays off big time.
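
For reference, the kind of setup I mean is roughly this (a rough sketch assuming psycopg 3 and the pgvector extension; the connection string, table, and tiny embedding size are made-up placeholders):

```python
# Rough sketch: similarity search straight from Postgres with pgvector.
# In practice the vector dimension matches your embedding model (e.g. 1536);
# 3 is used here only to keep the example readable.
import psycopg

conn = psycopg.connect("postgresql://user:pass@localhost:5432/app", autocommit=True)

conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS docs (
        id bigserial PRIMARY KEY,
        content text,
        embedding vector(3)
    )
    """
)

# Cosine-distance search with pgvector's <=> operator; the query embedding is
# passed as a text literal and cast to the vector type.
query_embedding = "[0.1, 0.2, 0.3]"  # placeholder; comes from your embedding model
rows = conn.execute(
    "SELECT content FROM docs ORDER BY embedding <=> %s::vector LIMIT 5",
    (query_embedding,),
).fetchall()
```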

Paul Iusztin

100% on the LLMOps thing. Still, it's hard to push these features from the perspective of a product manager or owner 😂