Forget Forward Deployed Engineers

23 December 2025

Finsights

Who should adapt, the Finance professional or their AI agent?

In October, Finster AI’s CEO Sid Jayakumar wrote:

“… the whole promise of AI is that the AI should learn and adapt! You can't say ‘agent’ and then say ‘FDE’ in the same sentence. If your AI native tool needs someone to write code to make it fit for purpose, then it's not fit for purpose or AI native.”

We continue to see this in finance: many commercially available and internally built AI solutions still depend on human effort to function as intended. That effort may come in the form of forward‑deployed engineers, “AI strategists,” prompt engineers, or integration teams who spend significant time wiring systems together, building custom workflows, and maintaining bespoke logic for each desk or business unit.

There’s an argument for a more scalable approach. AI systems should fit into existing workflows, respect institutional controls, and reduce the need for ongoing customization so that value is quickly evident and continues to compound over time.

The practical issue with “FDE‑dependent” AI in finance

It’s not that forward‑deployed engineers are “bad,” or that bespoke work is never justified. In complex environments, some degree of configuration is expected.

The problem is what happens when customization becomes the operating model:

  • Each desk or region requires a new round of bespoke work.

  • Exceptions and edge cases accumulate faster than they can be engineered away.

  • The AI works best only where a dedicated human is embedded to keep it on track.

  • Teams experience integration fatigue before they see productivity gains.

In financial institutions, where workflows, control frameworks, and entitlements vary across business lines, this dynamic can be especially pronounced. The result is often a system that can deliver value but struggles to scale consistently across the entire organization.

A more realistic standard for AI in Finance: the agent should adapt to the workflow

The closer an AI system gets to “drop‑in contribution,” the more likely it is to scale.

In practice, that means the system should:

  • Fit into existing processes without requiring any organizational redesign.

  • Learn the local context (data, terminology, precedent) without extensive manual re‑engineering.

  • Improve with use without requiring a constant stream of custom code and prompt rewrites.

  • Operate safely inside the institution’s security and compliance boundaries.

The goal isn’t zero configuration. The goal is to avoid a world where every incremental use case requires another embedded engineer.

The blueprint for AI that minimizes forward‑deployed engineering

Below is a set of requirements that, taken together, tend to reduce upfront effort and increase scalability.

1) Guardrails and verifiability

In finance, outputs need to be defensible, not just plausible.

A strong agent should:

  • Understand what data it is allowed to use (and what it must avoid).

  • Provide citations or traceability for key figures and claims.

  • Make assumptions explicit, especially when data is incomplete.

  • Fail safely (e.g., ask clarifying questions, narrow scope, or defer) rather than guessing.

This builds trust, and trust is what drives adoption.
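
To make the fail‑safe behavior concrete, here is a minimal sketch in Python. The names (`Citation`, `AgentAnswer`, `respond`) are hypothetical illustrations of the pattern, not Finster's API:

```python
from dataclasses import dataclass, field

# Hypothetical shapes for illustration only; real systems will differ.

@dataclass
class Citation:
    source_id: str   # e.g. an internal document ID
    excerpt: str     # the passage that supports the claim

@dataclass
class AgentAnswer:
    text: str
    citations: list[Citation] = field(default_factory=list)   # traceability for key figures
    assumptions: list[str] = field(default_factory=list)      # made explicit, not buried

def respond(question: str, evidence: list[Citation]) -> AgentAnswer:
    """Fail safely: with no supporting evidence, ask rather than guess."""
    if not evidence:
        return AgentAnswer(
            text="I don't have sourced data for this. Could you narrow the period or metric?",
            assumptions=["No entitled data matched the query."],
        )
    return AgentAnswer(text="Draft answer grounded in the cited materials.", citations=evidence)
```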

2) Seamless integration into internal systems

Integration shouldn't be a months‑long prerequisite for value creation.

A better AI agent connects to the systems where work happens (documents, research repositories, internal data platforms, and approved third‑party data) through standardized connectors and well‑defined interfaces. The goal is to reduce the “one‑off” integration burden that typically lands on engineering teams.

Put differently: for common workflows, integration effort should be measured in days, not quarters.
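
One way to picture “standardized connectors” is a single interface that every source system implements once. The sketch below is illustrative; the `Connector` protocol and its methods are assumptions, not a real integration API:

```python
from typing import Protocol

class Connector(Protocol):
    """A hypothetical standard interface a source system implements once,
    so each new repository is a plug-in rather than a bespoke integration."""

    def search(self, query: str, user_id: str) -> list[dict]:
        """Return candidate records the given user is entitled to see."""
        ...

    def fetch(self, record_id: str, user_id: str) -> bytes:
        """Retrieve the full record, re-checking entitlements at access time."""
        ...

# Adding a new source means implementing Connector once,
# not rebuilding the agent or writing desk-specific glue code.
```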

3) Immediate use of your institutional context

External market knowledge is table stakes. Real usefulness comes from context:

  • Your prior work and precedent materials

  • Internal models, assumptions, and templates

  • Notes, memos, and institutional viewpoints

  • Firm‑specific taxonomies and terminology

An agent that can’t incorporate proprietary context tends to remain generic, and generic systems often struggle to earn high usage in finance.
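
As a rough illustration of what “incorporating proprietary context” can look like in practice, here is a hedged sketch. `FIRM_TAXONOMY` and `precedent_store` are invented placeholders for a firm's terminology map and prior‑work repository:

```python
# Illustrative only: layer institutional context on top of external
# market knowledge before the model ever sees the question.

FIRM_TAXONOMY = {"NIM": "net interest margin", "GM": "gross margin"}  # firm shorthand

def expand_terminology(question: str) -> str:
    """Rewrite firm-specific shorthand into unambiguous terms before retrieval."""
    for short, full in FIRM_TAXONOMY.items():
        question = question.replace(short, full)
    return question

def build_context(question: str, precedent_store) -> list[str]:
    """Pull prior work (models, memos, templates) relevant to the question."""
    normalized = expand_terminology(question)
    return precedent_store.search(normalized, top_k=5)  # hypothetical store API
```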

4) Permission‑aware from day one

Entitlements aren’t optional in financial institutions.

A deployable agent should be:

  • Identity‑aware (who the user is)

  • Role‑aware (what they do and what they’re allowed to access)

  • Context‑aware (what’s appropriate for the task and role)

  • Audit‑ready (what was accessed, when, and by whom)
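
A minimal sketch of what these four properties can mean in code. The shapes are assumptions for illustration (the `connector` below is the hypothetical interface from section 2):

```python
from dataclasses import dataclass
import datetime

@dataclass
class User:
    id: str                  # identity-aware: who the user is
    role: str                # role-aware: what they do
    entitlements: set[str]   # what this identity may access

def entitled_search(query: str, user: User, connector, audit_log: list) -> list[dict]:
    """Identity-, role-, and audit-aware retrieval (illustrative shapes only)."""
    results = connector.search(query, user_id=user.id)
    allowed = [r for r in results if r.get("entitlement") in user.entitlements]
    audit_log.append({   # audit-ready: what was accessed, when, and by whom
        "user": user.id,
        "role": user.role,
        "query": query,
        "records": [r["id"] for r in allowed],
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return allowed
```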

5) Security and compliance as a foundation

Security can’t be a bolt‑on. It needs to be built in from the beginning:

  • Clear data boundaries and controlled data flows

  • Deployment options that match institutional requirements

  • Strong audit trails and administrative controls

  • A posture the vendor can explain plainly and verify operationally

When this is done well, security becomes an enabler of scale rather than a recurring blocker.
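
One hedged illustration of “built in from the beginning”: the posture itself can be an explicit, reviewable object rather than tribal knowledge. Every field below is an assumption for the sake of example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityPosture:
    """Hypothetical declarative posture a vendor could explain and verify."""
    deployment: str            # e.g. "vpc", "on_prem", "saas"
    data_egress_allowed: bool  # clear data boundaries
    retention_days: int        # controlled data flows
    audit_export: bool         # strong audit trails

# Example: a restrictive institutional baseline.
BASELINE = SecurityPosture(
    deployment="vpc",
    data_egress_allowed=False,
    retention_days=30,
    audit_export=True,
)
```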

More than “Does it work in the demo?”

A more useful evaluation question is: Does the system keep working and expanding without requiring a growing number of humans to maintain it?

If meaningful value depends on an ongoing cycle of:

  • bespoke prompt engineering,

  • custom code per team,

  • manual exception handling,

  • and continuous integration work,

…then the organization may be buying a solution that behaves more like a services engagement than a scalable product.

Conclusion

Forward‑deployed engineers can accelerate early deployments, especially in complex environments. But when an “AI agent” requires sustained human involvement to remain useful across teams and workflows, it’s worth reassessing whether the system is truly designed to scale in finance.

The most durable AI systems will be the ones that:

  • adapt to existing workflows while respecting controls and permissions

  • integrate cleanly into institutional systems and content

  • and compound in value with use, without constant bespoke effort.

If you’re aiming to be genuinely AI‑native, with the agent doing more of the heavy lifting as adoption expands, Finster is built for that direction. Reach out for a demo if you’re ready to be AI native.

Are you ready to be AI native?

See how Finster can support your team with vastly accelerated investment research.


Finsights

© 2025. All rights reserved.