Articles Tagged with: Gen AI

AI Isn’t Coming to Structured Finance. It’s Already Here.

At SFVegas 2026, RiskSpan had a front-row seat to one of the most consequential conversations happening in our industry right now. I moderated a session on Agentic AI and the Securitization Lifecycle, while our CEO, Bernadette Kogler, participated in a panel on AI applications in structured finance. Across both rooms, the message was the same: we are done experimenting. We are redesigning workflows. 

The distinction that framed everything was simple but important. Copilots assist. Agents orchestrate. We are moving from AI that enhances what humans do to AI that executes complex, multi-step workflows — with humans supervising the process rather than driving every step. That shift changes everything about how we think about structured finance operations. 

And structured finance, it turns out, is almost perfectly suited for this moment. It is document-heavy, multi-party, process-driven, and data-intensive. From origination through surveillance, the lifecycle is fundamentally workflow-based. Panelists across both sessions shared real examples already in production: legal document review compressed from days to hours, AI-powered loan tape scrubbing before cash flow calculations, prompt-driven scenario generation replacing manual model configuration, and surveillance scaled across hundreds of deals per month. One striking observation: the future interface for structured finance may not be a UI at all — it may be entirely driven by prompts. 

But the panels were equally clear-eyed about what has to come first. Sixty-five percent of financial services firms are actively using AI, yet only 13% have deployed it across production processes. The gap between those two numbers is largely a data problem. The most sophisticated AI cannot overcome poor inputs, inconsistent loan tapes, or legacy system constraints. Firms that want to lead need to fix the foundation before layering on orchestration. 

Explainability matters just as much. With the EU AI Act and US fair lending enforcement raising the stakes, auditable, transparent models are not optional. And governance is shifting from “human in the loop” to “human over the loop” — a subtle but meaningful difference that requires defined accountability, model drift monitoring, and clear operational guardrails. 

The one-year outlook from both panels was notably concrete. Expect AI agents managing defined surveillance workflows, deeper cross-platform integration, and a sharper divide between early movers and everyone else. The competitive advantage will go to firms that clean their data, build explainability into their models from day one, and embed AI into operations — not just into pilot programs. 

Structured finance has always rewarded process discipline and deep domain expertise. That doesn’t change in an agentic world — if anything, it becomes more critical. The quality of an AI agent’s output is only as good as the prompts and parameters guiding it, and designing those well requires people who understand cash flow waterfalls, covenant structures, and credit risk at a fundamental level. Agentic AI doesn’t replace that expertise. It amplifies it. The firms that understand that will define the next chapter. 

At RiskSpan, we have spent years building at exactly this intersection — combining deep structured finance domain knowledge with purpose-built analytics infrastructure. That foundation is what makes it possible to deploy AI that actually works in production, not just in demos. The opportunity in front of our clients right now is significant, and we are focused on helping them capture it. 


Why AI Won’t Kill Asset-Backed Finance Software — and Why the Last Mile is the Moat

Every wave of financial technology innovation brings the same prediction: software will be commoditized. Today, that prediction is being applied to AI. If AI models can reason, summarize, and generate code, the thinking goes, B2B vertical SaaS becomes unnecessary. 

That conclusion is wrong. ABF platforms are not feature layers; they are governed systems.

The last mile of AI deployment isn’t friction—it’s the moat. 

ABF Is Not a “Promptable” Problem 

ABF platforms sit directly in the flow of capital allocation, risk management, and regulation. For asset managers deploying institutional capital, this creates a very high bar for reliable data, validated models, and domain-specific workflows.

The real question isn’t whether a system can produce answers. It’s whether it can produce results that are: 

  • Consistent over reporting periods and market cycles 
  • Explainable under stress and investor scrutiny 
  • Defensible and robust enough for LPs, investment committees, and regulators 

That high bar changes everything. It explains why technology adoption in financial markets moves cautiously and why legacy systems persist. These systems embed decision rights, controls, and institutional logic that can’t simply be recreated with better prompts. Any platform that ignores this reality will struggle to scale beyond pilots. 

Which leads to the obvious question: if AI is so powerful, where does it actually help? 

AI Accelerates Workflow — Not Accountability 

Applied correctly, AI can materially improve ABF workflows. It can ingest complex credit agreements faster, reconcile data across counterparties, flag covenant breaches, and reduce manual reporting work. In other words, AI increases operational leverage. 

But AI does not remove the need for explicit deployment configuration and governance. Institutions still must define who owns key assumptions, which decisions can be automated, and where accountability sits when outcomes affect capital. These embedded design choices (not prompts) ultimately determine whether a platform is trusted. 

AI compresses timelines, but responsibility remains fixed. Once this distinction is recognized, the broader implication becomes clear: AI does not eliminate the need for software. It raises the bar for it. 

Software Remains the System of Record 

The idea that AI replaces SaaS also misunderstands where enterprise value lives. In ABF, it doesn’t live in isolated insights. It lives in controlled systems of record and durable platforms that provide:

  • Governed data and the system of record 
  • Embedded domain expertise 
  • Repeatable processes that survive personnel turnover 
  • A shared source of truth across counterparties, investment, risk, accounting, and investor relations 

AI without software discipline creates speed without stability. With it, AI becomes force-multiplying. The question, then, is what separates platforms that successfully integrate AI from those that don’t. 

The Real Differentiator: Deployment Intelligence at Scale 

What separates enduring platforms from feature-rich tools is not model sophistication. It’s deployment intelligence: the ability to integrate AI into live production environments without weakening controls. That requires:

  • Controlled data pipelines designed for real-world imperfections 
  • Configuration layers that adapt to fund-specific structures without breaking controls 
  • AI outputs that are transparent and auditable 
  • Implementation treated as a repeatable product, not bespoke services 

This is where defensibility emerges. Deployment intelligence compounds with each client rollout. Each successful implementation strengthens the next, deepening institutional trust and operational resilience. AI amplifies this flywheel but cannot replace it. 

The Mispriced Risk of “AI-Only” Narratives 

In private credit, trust is earned slowly and lost quickly. It is built through consistent valuations, defensible reporting, and reliability during market dislocations. 

A system that produces faster answers but weaker confidence does not displace incumbents. It increases operational and reputational risk. Investors should be wary of platforms that promise instant replacement without acknowledging the institutional reality of fiduciary-grade infrastructure.

The Investment Takeaway 

AI is not commoditizing ABF software solutions. It is widening the moat for platforms that integrate AI responsibly into governed systems. 

The next phase of growth for category leaders such as RiskSpan will be driven by combining deep domain knowledge with AI-native architecture. Leaders will treat the last mile — data integration, workflow configuration, and control design — as a core product capability, not an implementation afterthought.

In markets where trillions in capital allocation depend on data integrity and institutional trust, the last mile isn’t an implementation detail. 

It’s the moat. 


The Data Model That Powers Private ABF: Why Purpose-Built Architecture Changes Everything 

Private asset-backed securities don’t follow the same rules as public securitizations. The structures are more bespoke. The triggers are more nuanced. The collateral is more diverse. And yet, most market participants still try to manage these instruments with tools designed for a different world entirely—or worse, with disconnected spreadsheets that multiply risk with every manual handoff.

RiskSpan’s Private ABF platform was built specifically to solve this problem. Not as an adaptation of existing tools, but as a purpose-built data architecture designed from the ground up for the unique demands of private structured finance. The difference isn’t incremental—it’s foundational. 

Built for the Full Lifecycle 

RiskSpan’s Private ABF platform isn’t just a database—it’s a relational architecture where every entity maintains its identity and relationships across the entire deal lifecycle. The platform connects deals to their tranches, tranches to their collateral, collateral to individual loans, and all of these to the triggers, waterfalls, fees, and reserves that govern cash flow distribution. 

This matters because private ABS transactions aren’t static. Collateral performs. Triggers trip. Waterfalls redirect. Reserves build and release. A platform that can’t maintain these relationships in real time isn’t managing deals—it’s creating snapshots that are stale before they’re finished. 

Loan-Level Depth That Powers Real Analysis 

At the heart of any ABS transaction is the underlying collateral. RiskSpan’s Private ABF platform maintains comprehensive loan-level data with over 200 attributes per asset, tracking everything from origination characteristics through current performance status. This includes credit metrics like FICO scores and debt-to-income ratios, property and collateral details, payment history and delinquency tracking, modification and loss mitigation status, and ARM reset schedules and rate mechanics. 

The platform currently manages nearly half a billion loan records across active transactions. Each loan maintains its full history—not just current state, but the complete trajectory that informs forward-looking projections. When you run a cash flow model, you’re not working from aggregated pool statistics. You’re working from the actual loans, with their actual characteristics, generating projections that reflect real collateral behavior. 

Depth for Esoteric Structures 

RiskSpan’s Private ABF platform currently supports a wide range of collateral types and structures. Each asset class has its own performance characteristics, prepayment behaviors, and loss dynamics. The platform’s architecture accommodates this diversity without forcing artificial standardization.

But diversity in collateral is only part of the challenge. Private ABS triggers represent some of the most complex conditional logic in structured finance. Unlike standardized agency delinquency tests, private deal triggers can involve multi-step calculations with lookback periods, cure provisions that allow temporary breaches, step-rate adjustments that phase in over time, and early amortization events with distinct severity levels. 

The platform models triggers as executable logic, not static documentation. When a trigger breaches, the system knows what happens next—which waterfall priorities shift, which reserve requirements change, which reporting obligations activate. This is the deal’s immune system, and it needs to function in real time. 
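As a toy illustration of triggers-as-code (the class, field names, and numbers below are hypothetical, not the platform's actual logic), here is a delinquency test with a lookback average and a cure provision:

```python
from dataclasses import dataclass

@dataclass
class DelinquencyTrigger:
    threshold: float       # breach when the lookback average exceeds this
    lookback_periods: int  # number of periods to average
    cure_periods: int      # consecutive passing periods needed to cure

    def status(self, dq_history: list) -> bool:
        """Walk the delinquency-rate history period by period and
        return True if the trigger is still breached at the end."""
        breached, clean_run = False, 0
        for t in range(self.lookback_periods, len(dq_history) + 1):
            window = dq_history[t - self.lookback_periods:t]
            failing = sum(window) / len(window) > self.threshold
            if failing:
                breached, clean_run = True, 0
            elif breached:
                clean_run += 1
                if clean_run >= self.cure_periods:
                    breached, clean_run = False, 0
        return breached

trigger = DelinquencyTrigger(threshold=0.05, lookback_periods=2, cure_periods=2)
print(trigger.status([0.04, 0.07, 0.08]))                     # breached
print(trigger.status([0.04, 0.07, 0.08, 0.03, 0.02, 0.02]))  # breached, then cured
```

When a breach flips to True, a production system would also reroute the waterfall and update reporting obligations, the "what happens next" described above.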

Time-Series Without the Chaos 

Every entity in RiskSpan’s Private ABF platform exists in time. The platform maintains separate performance histories for tranches, collateral pools, fees, and reserves—each keyed by reporting date to enable point-in-time reconstruction of any deal state. 

This architecture solves one of the most persistent problems in private ABF operations: answering the question “what did we know, and when did we know it?” Whether for regulatory compliance, investor reporting, or internal risk management, the ability to reconstruct historical deal states isn’t a luxury—it’s a requirement that spreadsheet-based approaches simply cannot meet reliably. 
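The core mechanic behind point-in-time reconstruction can be sketched in a few lines, assuming records keyed by ISO reporting dates (the entity and balances below are invented for illustration):

```python
from bisect import bisect_right

def as_of(history, query_date):
    """history: (reporting_date, state) tuples sorted by date.
    Returns the most recent state on or before query_date, or None
    if no record existed yet. ISO date strings sort correctly as
    plain strings, so bisect works directly."""
    dates = [d for d, _ in history]
    idx = bisect_right(dates, query_date)
    return history[idx - 1][1] if idx else None

reserve_history = [
    ("2025-06-30", {"reserve_balance": 1_000_000}),
    ("2025-07-31", {"reserve_balance": 1_250_000}),
    ("2025-08-31", {"reserve_balance": 900_000}),
]
print(as_of(reserve_history, "2025-08-15"))  # the July state was still current
```

Because every entity keeps this kind of dated history, "what did we know on date X" becomes a sorted lookup rather than a forensic exercise.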

The Waterfall as Working Code 

Cash flow waterfalls in private ABF can run to dozens of steps with conditional branches, pro-rata splits, and priority reversions. RiskSpan’s Private ABF platform models these waterfalls as executable payment sequences—not flowcharts, but actual logic that routes cash from sources to destinations based on current deal state. 

Each waterfall step defines its priority in the payment sequence, its source of funds and destination, its allocation basis (pro-rata, sequential, or targeted), and the conditions under which it activates or suspends. 

When combined with the platform’s cash flow engine, these waterfall definitions become working models. You can project payments under any scenario, stress collateral performance, and see exactly how cash moves through the structure period by period. 
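A stripped-down sequential waterfall (the step names and amounts are invented for illustration) shows what "executable payment sequences" means in practice:

```python
def run_waterfall(available_cash, steps):
    """steps: priority-ordered dicts, each with a 'target' and either a
    fixed 'amount' due or 'residual': True for the final step.
    Returns {target: cash_paid}. Real waterfalls add pro-rata splits,
    activation conditions, and priority reversions."""
    paid = {}
    for step in steps:
        due = available_cash if step.get("residual") else min(step["amount"], available_cash)
        paid[step["target"]] = due
        available_cash -= due
    return paid

steps = [
    {"target": "servicing_fee", "amount": 50_000},
    {"target": "class_A_interest", "amount": 400_000},
    {"target": "class_B_interest", "amount": 150_000},
    {"target": "class_A_principal", "residual": True},
]
print(run_waterfall(500_000, steps))  # Class B interest is shorted to 50,000
```

Run the same steps with 700,000 of collections and Class B is paid in full with 100,000 flowing through to principal, which is exactly the period-by-period visibility the text describes.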

An Architecture Built for AI 

The same architectural principles that make RiskSpan’s Private ABF platform effective for traditional analytics make it exceptionally well-suited for artificial intelligence and machine learning applications. This isn’t a coincidence—it’s a consequence of building a data model that prioritizes structure, relationships, and semantic clarity. 

AI systems thrive on clean, well-organized data with explicit relationships. RiskSpan’s Private ABF platform delivers exactly this: normalized entities with consistent identifiers, clear hierarchies from deals down to individual loans, and temporal versioning that distinguishes current state from historical snapshots. When an AI model needs to understand a transaction, it doesn’t have to infer structure from unstructured sources—the relationships are already defined and traversable. 

The platform’s semantic richness enables natural language interfaces that actually work. Because every field has meaning within a consistent schema, AI can translate questions like “show me deals where the senior OC trigger is within 50 basis points of breach” into precise queries without ambiguity. The loan-level depth means AI models can identify patterns across hundreds of millions of records—finding correlations between origination characteristics and performance outcomes that would be invisible to traditional analysis. 
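To make this concrete, here is roughly what that example question reduces to once the schema is explicit. The records and field names below are invented for illustration, not the platform's actual schema:

```python
# Hypothetical deal snapshots with explicit, consistently named fields.
deals = [
    {"deal": "ABC 2024-1", "senior_oc_actual": 1.082, "senior_oc_trigger": 1.080},
    {"deal": "XYZ 2023-2", "senior_oc_actual": 1.150, "senior_oc_trigger": 1.050},
]

def near_breach(records, max_bps=50):
    """Deals whose senior OC cushion over the trigger level is at
    most max_bps basis points."""
    return [
        r["deal"] for r in records
        if (r["senior_oc_actual"] - r["senior_oc_trigger"]) * 10_000 <= max_bps
    ]

print(near_breach(deals))  # only the first deal is within 50 bps of breach
```

Translating the natural-language question into this filter is unambiguous precisely because each field has one defined meaning in the schema.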

Time-series architecture is particularly critical for AI applications. Machine learning models for credit risk, prepayment prediction, and loss forecasting require historical sequences, not point-in-time snapshots. RiskSpan’s Private ABF platform maintains this temporal context natively, enabling training datasets that capture how loans and deals evolve over their lifecycles. 

RiskSpan is already deploying AI capabilities on this foundation: automated anomaly detection that flags unusual performance patterns, intelligent document extraction that populates deal records from offering documents, natural language querying that makes complex analytics accessible to non-technical users, and predictive models that leverage the full depth of loan-level history. The platform doesn’t just store data—it organizes knowledge in a form that AI can reason about. 

Designed for the Enterprise 

RiskSpan’s Private ABF platform is multi-tenant with granular access controls. Every record carries client and user identifiers that enable a sophisticated permission model—issuers see their deals, investors see their positions, servicers see their portfolios, and rating agencies see what they’re authorized to review.

This isn’t just about security (though it is that). It’s about enabling collaboration across the structured finance ecosystem without compromising confidentiality. A single platform can serve all participants in a transaction, each with their appropriate view of the data. 

Built for What Comes Next 

Private ABS is an evolving market. New asset classes emerge. New structures get tested. New regulatory requirements arrive. RiskSpan’s Private ABF platform accommodates this evolution through flexible schema design that allows custom attributes without requiring database modifications. 

When a client brings a novel structure—say, a securitization of an asset class we haven’t seen before—the platform can ingest and model it without waiting for a software release. This extensibility is what allows a platform to stay current with market innovation rather than constantly playing catch-up. 
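A common pattern for this kind of extensibility (sketched here with invented field names; the platform's actual mechanism may differ) is fixed core fields plus a free-form attributes map that absorbs asset-class-specific data without a schema migration:

```python
def make_record(loan_id, balance, rate, **custom):
    """Core fields are fixed columns; anything novel (a royalty pool
    identifier, an aircraft tail number, ...) lands in `attributes`
    with no database change required."""
    return {"loan_id": loan_id, "balance": balance, "rate": rate,
            "attributes": dict(custom)}

rec = make_record("L-001", 250_000, 0.0675,
                  royalty_pool="catalog-7", artist_count=14)
print(rec["attributes"]["royalty_pool"])  # the custom field round-trips intact
```

The trade-off is that free-form attributes need their own validation layer, but a novel asset class can be onboarded the day it arrives rather than after a release cycle.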

What This Means in Practice 

The architectural decisions in RiskSpan’s Private ABF platform translate directly to operational capabilities. For deal structuring, you can model waterfall variations and see their impact on tranche economics before going to market. For pricing, you can run scenarios against actual loan-level collateral, not simplified pool assumptions. For risk management, you can monitor trigger proximity and project breach timing under stress. For surveillance, you can track every metric that matters, with full audit trails and historical reconstruction. 

Private ABF deserves purpose-built infrastructure. RiskSpan’s Private ABF platform delivers it. 


What a Year of Building AI in Structured Finance Actually Taught Us 

The lessons nobody puts in the demo. 

In 2025, our team built production AI systems that process billions of performance records for tens of millions of mortgages, develop cash flow models for complex private ABF structures directly from documents, and connect large language models directly to bond analytics APIs. 

We built dashboards, connectors, and credit analytics. Some of them worked. Some of them taught us more by failing. 

This is what we learned—not the polished conference talk version, but the notes we’d share with a peer team starting the same journey. 

The Value Shift Nobody Prepares You For 

A portfolio delinquency analysis that used to take three hours now takes twenty minutes. 

That sounds like a win. It is a win. But it also raises a question that’s harder to answer than any technical problem we solved this year: 

If AI handles in minutes what took us hours, what are we contributing? 

When we started pulling this thread, we realized that a significant portion of what felt like skilled analytical work was actually mechanical labor—data extraction, formatting, applying the same methodology we’d applied dozens of times before. The expertise was real, but it was wrapped in hours of execution that masked how much of the work was routine. 

Here’s where we landed: 

AI handles the “how.” Humans own the “why” and “so what.” 

The value now lives in knowing which questions matter. Understanding what the client really needs versus what they say they need. Recognizing when output is wrong because we understand the domain deeply enough to see the error. 

That’s an entirely different skill set. It requires judgment, contextual awareness, and domain intuition that deepens over years — the kind of expertise AI can’t replicate the way it replicates procedural analytical work.

Not everyone will make this transition comfortably. The analysts who built their identity around being fast and thorough at execution face a harder adjustment than those who always saw execution as a means to an end. 

We don’t have this all figured out yet. But we’ve stopped pretending the shift isn’t happening. 

Stop Asking AI to Write Code—Start Asking It to Think With You 

For years, we used Claude as a coding assistant. “Write a function that does X.” “Convert this data from format A to format B.” “Generate a script that calculates Y.” 

That works. But it captures maybe 20% of the value. 

The shift that changed our results was: treating Claude not as a tool to instruct, but as an analyst to think alongside. 

The difference looks like this: 

Before (instruction mode): 

“Write a Python script to calculate delinquency rates from this loan data.” 

After (thinking partner mode): 

“We need to identify hidden credit risk in this CLO portfolio—issuers that resemble recent defaults but haven’t shown price distress yet. What factors should we consider? What data would we need? Let’s build a scoring model together.” 

That second conversation led to identifying hidden exposure across issuers. Claude suggested factors we hadn’t considered—CLO concentration patterns, industry clustering effects, the relationship between coupon levels and distress signals. We debated the weighting. We refined the methodology. The output was genuinely collaborative. 

The code that emerged from the second approach was better, but that’s almost beside the point. The thinking was better. The model was better. The insight was better. 

This requires a different posture than most of us learned. You have to think out loud. Admit what you don’t know. Explain your reasoning and invite critique. Treat the AI as a colleague who happens to have read every document and doesn’t get tired—not as a sophisticated autocomplete. 

The developers and analysts on our team who made this shift produce substantively different work than those who are still in instruction mode. And the gap is widening. 

The First Version Will Be Wrong—Plan for It 

We built a benchmark analysis comparing a client’s NonQM loan portfolio against the broader market. The analysis looked solid: the portfolio showed a 1.37% delinquency rate advantage versus the universe. Strong results. Ready to present. 

Then someone asked about DSCR loans. 

In NonQM lending, DSCR (debt service coverage ratio) loans are a category unto themselves—with measurably better performance than other NonQM products. When we segmented the data, we discovered that DSCR loans made up 43% of the universe but only 30% of the client’s portfolio. 

This changed everything. 

The client’s portfolio had less exposure to DSCR loans (the better-performing segment) yet still outperformed the benchmark. That alone was impressive, but our initial analysis understated the true picture. Once we compared performance within segments (DSCR vs. DSCR, non-DSCR vs. non-DSCR), the client’s edge was even larger than we’d initially observed. 
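The arithmetic behind this mix effect is easy to sketch. Only the mix shares (43% vs. 30% DSCR) come from the analysis above; the delinquency rates below are purely illustrative, not the client's actual numbers:

```python
def blended_rate(segments):
    """segments: (mix_weight, delinquency_rate) pairs; returns the
    mix-weighted portfolio delinquency rate."""
    return sum(weight * rate for weight, rate in segments)

# (DSCR share, DSCR rate), (non-DSCR share, non-DSCR rate) -- illustrative rates.
universe  = [(0.43, 0.030), (0.57, 0.080)]
portfolio = [(0.30, 0.025), (0.70, 0.065)]

print(blended_rate(universe))   # roughly 5.9% blended
print(blended_rate(portfolio))  # roughly 5.3% blended
```

With these illustrative numbers the portfolio beats the benchmark within both segments (2.5% vs. 3.0% DSCR, 6.5% vs. 8.0% non-DSCR), yet the headline gap is only about half a point, because the portfolio holds less of the better-performing DSCR segment. The within-segment comparison is the honest one.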

If we had presented the first version, we would have undersold our client’s own performance. The insight that mattered most—superior underwriting across both loan categories—would have been invisible. 

Lesson: “Wrong” doesn’t mean broken. It means the output doesn’t fully reflect reality. Have a domain expert review the work before drawing conclusions. 

Deploying AI Agents for End Users Is a Security Project 

Building an AI agent that works in a demo is straightforward. Deploying that agent in a production UI where real users interact with real data took us months. 

We built an agent that lets users query our bond analytics platform conversationally. The AI worked. Making it production-ready required solving a different set of problems: 

Prompt injection: When users can type anything into a text box processed by an LLM, you inherit a new attack surface. We implemented input validation, output filtering, tightly scoped permissions, and logging that captures every agent action for audit. 

Rate limiting: A single conversational turn might trigger 50 API calls. We built tiered limits—per-user, per-session, per-token—plus circuit breakers for runaway queries. 

Session management: Agent sessions need conversational context across multiple turns, isolated per user, with graceful expiration handling and automatic cleanup. 

Audit trails: Regulated industries need to know what the AI did. Every query, tool invocation, and response needs to be logged immutably. 
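Of the pieces above, rate limiting is the most self-contained to sketch. A per-user token bucket (the class and parameters below are an illustrative sketch, not our production implementation) is one standard building block for the tiered limits described:

```python
import time

class TokenBucket:
    """Per-user token bucket: allows a burst of `capacity` calls,
    refilled at `rate` tokens per second. One bucket per user,
    session, or tier yields layered limits."""
    def __init__(self, capacity, rate, clock=time.monotonic):
        self.capacity, self.rate, self.clock = capacity, rate, clock
        self.tokens = float(capacity)
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=5, rate=2.0)  # 5-call burst, 2 calls/sec sustained
print(bucket.allow())  # True on the first call
```

A circuit breaker for runaway queries is the same idea applied to failures instead of calls: trip open after N consecutive errors, then probe periodically before closing again.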

The agent itself was 20% of the effort. Authentication, authorization, input validation, rate limiting, session management, and security review were the other 80%. 

Lesson: In production, the agent is the easy part. The security wrapper is the product. 

Postscript: AgentCore from AWS and Agent Framework from Microsoft are addressing many of these deployment and security headaches. 

AI Is Good at Finding Information But Sometimes Overstates What It Means 

While building the credit risk analysis, we asked Claude to research distressed issuers—companies that had defaulted or were showing signs of stress. We wanted to understand patterns we could use to identify similar risks in the portfolio. 

Claude surfaced real-time signals we wouldn’t have found efficiently on our own: FTC antitrust actions, rating agency downgrades, refinancing walls, fraud allegations. Information that wouldn’t appear in pricing data for months was available in news coverage and regulatory filings. The research phase that would have taken days was completed in hours. 

But we also caught Claude drawing confident conclusions from weak sources. In one case, it attributed claims to “industry reports” that didn’t exist when we followed the links. In others, the search results and sources were real, but the synthesis drew conclusions the sources didn’t support. 

The lesson: use AI-powered search aggressively. It’s the difference between stale knowledge and current intelligence, especially in fast-moving situations. But verify specific claims. Click the links. Read the actual sources. 

AI is excellent at finding relevant information across large volumes of text. It is sometimes too confident about what that information means when synthesized. The combination of broad retrieval and skeptical verification is more powerful than either alone. 

Your Org Chart Isn’t Ready for This 

Our AI strategy deck included projections: reduced onboarding costs, increased client capacity, and margin expansion. 

The numbers were defensible. The business case was clear. 

What the projections didn’t address: the organizational implications of realizing the promised efficiencies. 

If analysts can serve five times more clients, do you need fewer analysts—or do you pursue five times more clients? If the answer is “more clients,” do you have the sales capacity? The support infrastructure? The management bandwidth? 

If developers now own adoption metrics for the features they build, then what happens to the product managers who previously owned that? Are product managers freed up for more strategic work, or are they defending territory? 

If AI drafts client communications, who reviews them? What error rate are we willing to accept? Who’s accountable when the AI gets something wrong? 

These aren’t hypothetical questions. We’re navigating them now, and the answers aren’t obvious. 

AI doesn’t just improve workflows. It reshapes roles. And most organizations—including ours—are making it up as they go. 

The companies that figure out the organizational design will outperform those that simply purchase better software. The differentiation in 2026 won’t come from adopting AI. It will come from redesigning teams, incentives, and accountability structures around what AI makes newly possible. 

What We’re Taking Into Next Year 

A year of building AI systems in structured finance clarified a few things: 

AI is more powerful than the hype suggests—once you integrate it into real workflows rather than treating it as a research toy. 

AI is more frustrating than the demos show—the gap between “works in claude.ai” and “works in production” is where most of the time goes. 

AI is more dependent on domain expertise than the automation narrative implies—it generates analyses quickly, but distinguishing plausible from accurate requires human judgment that compounds over years. The “why” and “so what” remain stubbornly human problems. 

AI changes more than technology—it changes job descriptions, team structures, and how people understand their own value. The skill isn’t operating the tool; it’s knowing when the output reflects reality. 

We don’t have all the answers. We’re still learning what this means for how we build software, how we serve clients, and how we organize ourselves. 

But we’re no longer wondering whether AI will change our industry. We’re focused on making sure we’re the ones defining how. 


Use Case: RiskSpan’s Agentic AI for MBS Data Tool 

Breaking Down VA vs FHA Prepayment Speeds 

This use case demonstrates how agentic interaction with the MBS Data Tool helps investors move from a general observation (“VA speeds are fast”) to a data-backed, multidimensional explanation tied directly to credit, LTV, and program structure. 


Business Question 

Why are VA loans — particularly those with low FICO and high LTV profiles — prepaying so much faster than historically, and faster than FHA loans with comparable characteristics? 


For the full output of our AI MBS Agent, read on or watch our two-minute demo video:


Workflow with Agentic AI 

1. Baseline Comparison 

Based on the data, I can now provide a comprehensive analysis of VA vs FHA loan prepayment trends since 2020:

Key Findings:

1. Current Prepayment Speeds (as of September 2025):

– VA loans: 11.74% CPR

– FHA loans: 8.68% CPR

– VA loans are currently prepaying about 3 percentage points faster than FHA loans

2. Historical Trends:

2020-2021 (Refinance Wave):

– Both loan types experienced very high prepayment speeds

– Peak CPRs:

* FHA: 38.48% (April 2021)

* VA: ~30-31% (Late 2021)

– VA loans consistently showed higher prepayment sensitivity during this period

2022-2023 (Rising Rate Environment):

– Both loan types saw significant slowdown in prepayment speeds

– Lowest CPRs:

* FHA: 4.42% (February 2023)

* VA: 5.07% (February 2023)

– The spread between VA and FHA prepayment speeds narrowed but remained positive

2024-2025 (Current Environment):

– Moderate prepayment speeds for both loan types

– VA loans maintain faster prepayment speeds

– Recent range:

* FHA: 5-10% CPR

* VA: 7-14% CPR

3. Key Observations:

– VA loans consistently prepay faster than FHA loans across different rate environments

– The spread between VA and FHA prepayment speeds tends to widen in more favorable refinancing environments

– Both loan types show similar seasonal patterns but with different magnitudes

– VA loans show greater prepayment sensitivity to market conditions

4. Portfolio Sizes:

– FHA: Currently ~7.45 million loans with $1.47 trillion in balance

– VA: Currently ~3.77 million loans with $1.03 trillion in balance

This analysis suggests that investors should expect consistently faster prepayment speeds from VA loans compared to FHA loans, particularly during periods of favorable refinancing opportunities. The higher prepayment sensitivity of VA loans likely reflects the generally better credit quality of VA borrowers and their greater ability to refinance when rates are favorable.


2. High-Credit Borrowers 

Based on the analysis of high-FICO (740+) VA and FHA loans, here are the key findings:

1. Current Prepayment Speeds (September 2025):

– High-FICO VA: 9.37% CPR

– High-FICO FHA: 5.96% CPR

– Spread: 3.41% faster for VA

2. Portfolio Characteristics (as of September 2025):

High-FICO VA Loans:

– Average FICO: 779

– Average WAC: 3.85%

– Average CLTV: 58.1%

– Balance: $443.4B (43% of VA portfolio)

– Loan Count: 1.47M loans

High-FICO FHA Loans:

– Average FICO: 769

– Average WAC: 4.55%

– Average CLTV: 58.4%

– Balance: $188.0B (13.8% of FHA portfolio)

– Loan Count: 879K loans

3. Key Trends (2024-2025):

Prepayment Speed Range:

– High-FICO VA: 4.26% to 14.76% CPR

– High-FICO FHA: 3.56% to 8.68% CPR

Seasonal Patterns:

– Both show seasonal increases in spring/summer months

– VA loans show more pronounced seasonal effects

– Peak prepayment typically occurs in October/November

4. Notable Observations:

– The prepayment speed differential between VA and FHA remains significant even at high FICO levels

– High-FICO VA loans have consistently lower WACs (about 70bps lower)

– Similar CLTV profiles between high-FICO VA and FHA

– VA has a much larger proportion of high-FICO borrowers (43% vs 13.8% for FHA)

– High-FICO VA loans show greater prepayment sensitivity to market conditions

5. Recent Trends:

– Both loan types show increasing prepayment speeds in 2025

– The spread between VA and FHA prepayment speeds has remained relatively stable

– High-FICO VA loans maintain faster prepayment speeds even in rising rate environments

This analysis suggests that even when controlling for credit quality (FICO ≥ 740), VA loans consistently prepay faster than FHA loans. This is likely due to:

1. Lower average note rates (WAC)

2. VA’s no-down-payment benefit making it easier to refinance

3. Generally better borrower demographics and income profiles

4. More streamlined refinancing options for VA loans

Investors should expect continued faster prepayment speeds from high-FICO VA loans compared to high-FICO FHA loans, particularly during favorable rate environments or seasonal peaks.


3. Low-Credit Borrowers 

Based on the analysis of low-FICO (<640) VA and FHA loans, here are the key findings:

1. Current Prepayment Speeds (September 2025):

– Low-FICO VA: 21.04% CPR

– Low-FICO FHA: 12.07% CPR

– Spread: 8.97% faster for VA

2. Portfolio Characteristics (as of September 2025):

Low-FICO VA Loans:

– Average FICO: 611

– Average WAC: 4.77%

– Average CLTV: 66.2%

– Average Age: 52.3 months

– Balance: $77.5B (8.3% of VA portfolio)

– Loan Count: 316K loans

Low-FICO FHA Loans:

– Average FICO: 609

– Average WAC: 4.83%

– Average CLTV: 60.4%

– Average Age: 62.8 months

– Balance: $290.7B (21.3% of FHA portfolio)

– Loan Count: 1.49M loans

3. Key Trends (2024-2025):

Prepayment Speed Range:

– Low-FICO VA: 9.08% to 27.54% CPR

– Low-FICO FHA: 6.31% to 12.73% CPR

Notable Patterns:

– Much higher volatility in VA prepayment speeds

– Larger seasonal effects in VA loans

– Consistently wider spread between VA and FHA vs high-FICO segment

4. Key Differences:

Portfolio Composition:

– FHA has a much larger concentration in low-FICO loans (21.3% vs 8.3% for VA)

– FHA’s low-FICO portfolio is nearly 4x larger than VA’s by balance

Performance Characteristics:

– VA loans show significantly higher prepayment speeds despite similar credit profiles

– VA loans exhibit greater prepayment volatility

– VA loans have slightly better average FICO scores within the low-FICO segment

– VA loans have higher CLTVs but lower WACs

5. Recent Trends:

– Both loan types show increasing prepayment speeds in 2025

– VA prepayment speeds are consistently higher and more volatile

– Seasonal patterns are more pronounced in VA loans

– The spread between VA and FHA prepayment speeds has remained wide

6. Notable Observations:

– The prepayment speed differential between VA and FHA is actually larger in the low-FICO segment than in the high-FICO segment

– Low-FICO VA loans show surprisingly high prepayment speeds despite lower credit scores

– FHA loans show more consistent, less volatile prepayment patterns

– VA loans maintain faster speeds despite higher CLTVs

This analysis reveals that the prepayment behavior difference between VA and FHA loans is even more pronounced in the low-FICO segment than in the high-FICO segment. This could be due to:

1. VA’s stronger refinancing program benefits for lower-credit borrowers

2. More flexible VA streamline refinance requirements

3. Better servicing support for VA borrowers

4. Lower average note rates in the VA program

5. Potentially different borrower demographics within the same FICO band

Investors should expect:

– Higher and more volatile prepayment speeds from low-FICO VA loans

– More stable, lower prepayment speeds from low-FICO FHA loans

– Larger seasonal effects in VA loans

– Greater market sensitivity in VA loans


4. High-LTV Loans 

Based on the analysis of high-LTV (≥95%) VA and FHA loans, here are the key findings:

1. Current Prepayment Speeds (September 2025):

– High-LTV VA: 12.37% CPR

– High-LTV FHA: 7.77% CPR

– Spread: 4.60% faster for VA

2. Portfolio Characteristics (as of September 2025):

High-LTV VA Loans:

– Average FICO: 714

– Average WAC: 4.33%

– Average CLTV: 70.9%

– Average Original LTV: 100.3%

– Average Age: 52.9 months

– Balance: $607.2B (61.1% of VA portfolio)

– Loan Count: 2.11M loans

High-LTV FHA Loans:

– Average FICO: 679

– Average WAC: 4.61%

– Average CLTV: 62.9%

– Average Original LTV: 97.6%

– Average Age: 71.7 months

– Balance: $1.04T (72.0% of FHA portfolio)

– Loan Count: 5.08M loans

3. Key Trends (2024-2025):

Prepayment Speed Range:

– High-LTV VA: 5.60% to 19.73% CPR

– High-LTV FHA: 4.50% to 8.89% CPR

Notable Patterns:

– Higher volatility in VA prepayment speeds

– Strong seasonal patterns in VA prepayments

– More stable FHA prepayment patterns

– Consistent spread between VA and FHA

4. Key Differences:

Portfolio Composition:

– Both programs have significant high-LTV concentration

– FHA has larger absolute exposure but similar relative exposure

– VA allows higher original LTV (100%+) compared to FHA

Performance Characteristics:

– VA loans consistently prepay faster despite higher LTVs

– VA borrowers have notably higher FICO scores (+35 points)

– VA loans have lower WACs (-28bps)

– VA loans are significantly younger (-19 months)

5. Recent Trends:

– Both loan types show increasing prepayment speeds in 2025

– VA prepayment speeds remain more volatile and responsive to seasonality

– Peak prepayment periods occur in spring/summer months

– The spread between VA and FHA has remained relatively stable

6. Notable Observations:

– High-LTV concentration is a defining characteristic of both programs

– VA’s higher prepayment speeds persist despite higher LTVs

– Better credit quality of VA borrowers likely contributes to faster speeds

– Younger average loan age in VA portfolio may influence prepayment behavior

– Lower WACs in VA portfolio suggest more efficient refinancing history

Implications for Investors:

1. Portfolio Management:

– Expect faster and more volatile prepayments from high-LTV VA loans

– Consider seasonal patterns in VA prepayments for timing decisions

– Factor in the significant WAC differential when evaluating prepayment risk

2. Risk Considerations:

– Higher prepayment risk in VA despite similar LTV profiles

– Greater market sensitivity in VA portfolio

– More stable, predictable prepayment patterns in FHA

3. Valuation Impact:

– Need to apply larger prepayment speed differentials for VA vs FHA

– Consider stronger seasonal effects in VA valuations

– Account for credit quality differences within high-LTV segment

The analysis suggests that even in the high-LTV segment, VA loans maintain significantly faster prepayment speeds than FHA loans, driven by:

1. Better borrower credit quality

2. Lower note rates

3. More flexible refinancing options

4. Younger loan age

5. More efficient VA refinancing program

These factors appear to overcome the theoretical constraints of high LTV ratios on refinancing ability.


5. Program Features as Drivers

The data strongly supports that program features drive faster VA prepayment speeds. Here’s the detailed analysis:

1. Early Prepayment Behavior (0-24 months):

– VA loans age 13-24 months: 25.29% CPR

– FHA loans age 13-24 months: 17.04% CPR

– The 8.25% difference reflects VA’s more flexible refinancing rules

– No seasoning requirement for VA vs 210 days for FHA

2. Refinance Incentive Response:

– High incentive (>100bp) VA: 50.43% CPR

– High incentive (>100bp) FHA: 26.55% CPR

– The dramatic difference (23.88%) shows VA borrowers can more easily act on rate incentives

– Even medium incentive (0-100bp) shows large gap:

* VA: 29.59% CPR

* FHA: 18.37% CPR
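
The incentive buckets above can be assigned mechanically from a loan's note rate and the prevailing market rate. A hypothetical sketch (the thresholds mirror the buckets in this analysis; the function and label names are ours):

```python
def refi_incentive_bp(note_rate: float, market_rate: float) -> float:
    """Refi incentive in basis points: how far the loan's note rate sits above market."""
    return (note_rate - market_rate) * 100.0

def incentive_bucket(incentive_bp: float) -> str:
    """Bucket labels mirroring the analysis above (illustrative thresholds)."""
    if incentive_bp > 100:
        return "high (>100bp)"
    if incentive_bp > 0:
        return "medium (0-100bp)"
    return "none/negative"

# A 6.75% WAC loan against a 5.50% market rate carries a 125bp incentive
print(incentive_bucket(refi_incentive_bp(6.75, 5.50)))  # high (>100bp)
```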

3. Credit Quality Impact:

– VA average FICO: 715-724 across age buckets

– FHA average FICO: 672-681 across age buckets

– Better VA credit quality + easier program rules = faster speeds

4. Cost Structure Effects:

VA Advantages:

– Lower funding fee (0.5%)

– No ongoing MIP

– Can include closing costs

– Results in lower all-in refinancing costs

FHA Constraints:

– Higher upfront MIP (1.75%)

– Ongoing annual MIP

– More restricted closing cost inclusion

– Higher total refinancing costs

5. LTV Considerations:

– VA has no maximum LTV for IRRRL

– FHA generally limited to 97.75% LTV

– Current data shows:

* VA high-LTV loans (>95%): 12.37% CPR

* FHA high-LTV loans (>95%): 7.77% CPR

6. Program Flexibility Evidence:

Response to Rate Environment:

– VA loans show stronger reaction to refi incentives

– Higher CPRs across all incentive buckets

– Faster response to market opportunities

Seasoning Effects:

– VA shows earlier prepayment spikes

– More pronounced early-age prepayment behavior

– Less constrained by seasoning rules

7. Servicer Impact:

– VA servicers may be more proactive due to program ease

– Lower documentation requirements reduce operational friction

– Simpler approval process enables faster execution

The data demonstrates that VA’s program features create a more efficient refinancing mechanism:

1. Lower Costs:

– Reduced fees

– No ongoing MIP

– More included costs

2. Fewer Restrictions:

– No LTV limits

– No seasoning requirement

– Limited credit checks

3. Simpler Process:

– Less documentation

– Faster approval

– Lower operational burden

These advantages manifest in:

– Consistently faster prepayment speeds

– Stronger response to rate incentives

– Earlier refinancing activity

– Higher refinancing success rates

For investors, this means:

1. Expect faster VA prepayments across market cycles

2. Anticipate stronger response to refinancing opportunities

3. Factor in program differences when valuing VA vs FHA MBS

4. Consider seasoning rules when evaluating young loans

The combination of program features and data evidence clearly shows that VA’s more flexible refinancing rules are a primary driver of faster prepayment speeds compared to FHA.


6. Scenario Stress Test 

Based on the current portfolio characteristics and historical refinancing behavior, here’s how a 100bps rate drop would likely impact VA vs FHA prepayment speeds:

1. Current Portfolio Rate Distribution:

VA Portfolio:

– 7.5%+ WAC: $4.6B (0.4%)

– 7.0-7.49% WAC: $18.5B (1.8%)

– 6.5-6.99% WAC: $71.1B (6.9%)

– 6.0-6.49% WAC: $105.3B (10.2%)

– 5.5-5.99% WAC: $118.7B (11.5%)

– <5.5% WAC: $713.0B (69.2%)

FHA Portfolio:

– 7.5%+ WAC: $23.6B (1.6%)

– 7.0-7.49% WAC: $68.5B (4.7%)

– 6.5-6.99% WAC: $163.8B (11.2%)

– 6.0-6.49% WAC: $158.4B (10.8%)

– 5.5-5.99% WAC: $149.0B (10.2%)

– <5.5% WAC: $903.5B (61.5%)

2. Projected Prepayment Speeds by Refi Incentive:

Based on current behavior with similar incentives:

High Incentive (>100bps):

– VA: 50-55% CPR

– FHA: 25-27% CPR

– Differential: ~25-28% faster for VA

Medium Incentive (50-100bps):

– VA: 35-40% CPR

– FHA: 20-25% CPR

– Differential: ~15% faster for VA

Low Incentive (0-50bps):

– VA: 25-30% CPR

– FHA: 15-18% CPR

– Differential: ~10-12% faster for VA

3. Portfolio Impact of 100bps Rate Drop:

VA Portfolio:

– ~$200B would move to >100bps incentive

– ~$119B would move to 50-100bps incentive

– Weighted average CPR projection: 32-37% CPR

FHA Portfolio:

– ~$256B would move to >100bps incentive

– ~$149B would move to 50-100bps incentive

– Weighted average CPR projection: 18-22% CPR
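
The portfolio-impact projections above are balance-weighted averages of bucket-level CPR assumptions. A minimal sketch of the mechanics, using the approximate VA bucket balances above but with placeholder CPR assumptions (in particular, the baseline speed assumed for the remaining out-of-the-money balance is ours, so the printed result is purely illustrative, not a reproduction of the 32-37% projection):

```python
def weighted_avg_cpr(buckets):
    """Balance-weighted average CPR across (balance_$B, cpr) buckets."""
    total = sum(bal for bal, _ in buckets)
    return sum(bal * cpr for bal, cpr in buckets) / total

# VA portfolio under a 100bps drop, using midpoint CPR assumptions per bucket:
va_buckets = [
    (200.0, 0.525),  # ~$200B moves to >100bp incentive (midpoint of 50-55% CPR)
    (119.0, 0.375),  # ~$119B moves to 50-100bp incentive (midpoint of 35-40% CPR)
    (711.0, 0.10),   # remaining balance at an assumed modest baseline speed
]
print(f"VA weighted CPR: {weighted_avg_cpr(va_buckets):.1%}")
```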

4. Key Factors Driving Divergence:

Program Features:

– VA IRRRL has no LTV restrictions

– VA has lower funding fees

– VA has simpler documentation requirements

– VA has no seasoning requirement

Borrower Characteristics:

– VA has higher average FICOs (714 vs 675)

– VA has lower average WACs

– VA has younger average loan age

5. Projected Timeline of Response:

Months 1-3:

– VA speeds increase rapidly to 30-35% CPR

– FHA speeds lag, reaching 15-18% CPR

– Differential: 15-17% CPR

Months 4-6:

– VA peaks at 35-40% CPR

– FHA reaches 20-22% CPR

– Differential: 15-18% CPR

Months 7-12:

– VA moderates to 30-35% CPR

– FHA stabilizes at 18-20% CPR

– Differential: 12-15% CPR

6. Risk Factors to Consider:

Upside Risks (faster speeds):

– Pent-up refinance demand

– Improved technology/processing

– Competitive lender environment

Downside Risks (slower speeds):

– Housing market uncertainty

– Credit tightening

– Operational capacity constraints

7. Investment Implications:

1. Premium Pools:

– Expect significantly faster VA prepayments

– Greater price impact on VA premium pools

– Wider VA-FHA OAS spreads

2. Current Coupon Pools:

– Moderate prepayment differential

– Less price impact

– Smaller OAS spread widening

3. Discount Pools:

– Minimal impact on speeds

– Limited price effect

– Stable OAS spreads

Recommendations:

1. Portfolio Positioning:

– Reduce exposure to premium VA pools

– Consider FHA alternatives in premium segment

– Maintain VA exposure in current coupon/discount segments

2. Risk Management:

– Increase prepayment protection in VA holdings

– Consider specified pool stories

– Monitor servicer behavior closely

3. Relative Value:

– Look for opportunities in FHA premium pools

– Consider VA-FHA switches in premium segment

– Evaluate specified pool pay-ups

The analysis suggests a significant divergence in VA vs FHA prepayment speeds following a 100bps rate drop, with VA speeds likely to be 12-25% CPR faster depending on the segment and timing. This differential is driven by both program features and borrower characteristics, with the impact most pronounced in premium pools.


Questions or Need Support? 

Our team is standing by to help you make the most of these new features. Contact us at techsupport@riskspan.com or reach out to your dedicated account manager. 


Interested in a Complimentary Trial or Demo?

Contact us here.


RiskSpan Launches Agentic AI for MBS Data — Instant, Transparent Insights for Agency MBS Professionals 

Arlington, VA – September 2, 2025 – RiskSpan, a leading provider of data analytics solutions for the structured finance industry, today announced the release of Agentic AI for MBS Data.

Watch a one-minute demo video or read on to learn about our new, AI-powered MBS analyst that delivers instant, sourced, and context-rich analysis in plain English.

Solving the MBS Data Bottleneck

While Agency MBS loan data is readily available, meaningful insights for many users often remain locked behind complex SQL or Python queries. This creates a barrier for traders, portfolio managers, and risk teams who need answers quickly to avoid missing opportunities. 

“Our clients told us they needed faster, clearer access to this data,” said Suhrud Dagli, RiskSpan’s Co-Founder and Chief Innovation Officer. “Agentic AI removes the technical barrier and delivers the kind of immediate, transparent insight that our clients have come to demand.” 

A 24/7 AI-Powered MBS Analyst 

With Agentic AI for MBS Data, users simply ask a question in plain English — such as “Which Ginnie Mae 2020 production pools had the fastest speeds in Q2?” — and get back clear, visualized, and fully cited answers in seconds. Users can ask complex questions and get instant answers, with no SQL or Python required. 

Key benefits include: 

  • Full Transparency – All sources cited for confidence and compliance. 
  • Deeper Insights – Surfaces patterns and drivers you might not think to look for. 
  • Narrative-Ready Reporting – Polished summaries and visuals for executives and clients. 
  • Efficiency – Eliminates dependence on technical teams for everyday queries. 

Designed for How MBS Professionals Think 

From spotting prepayment anomalies to explaining performance shifts, Agentic AI identifies not just the what but the why. It transforms hours of manual analysis into seconds of actionable insight, enabling faster, more informed decision-making. 

Agentic AI for MBS Data is available now. RiskSpan is offering live demos to showcase how the platform transforms the way MBS data is interrogated and reported. 


About RiskSpan 

RiskSpan delivers a single analytics solution that enables structured finance and private credit investors of any size to confidently make faster, more precise trading and portfolio risk decisions and to meet reporting requirements with fewer resources and less time spent managing multiple vendors and internal solutions. 

Learn more at www.riskspan.com.  


Humans in the Loop: Ensuring Trustworthy AI in Private ABF Deal Modeling

As generative AI becomes a powerful tool in private asset-backed finance (ABF), the need for precision and transparency is more critical than ever. At RiskSpan, we’re applying Large Language Models (LLMs) to automate and accelerate private ABF deal modeling and surveillance. But speed is only half the battle—accuracy is non-negotiable.

That’s where Human-in-the-Loop (HITL) validation plays a vital role. While the RiskSpan platform incorporates sophisticated AI guardrails, we believe the right blend of automation and expert oversight ensures results that are not just fast—but reliable, auditable, and production-ready.

The AI-Powered Workflow: What’s Automated

Our private ABF modeling and surveillance system uses LLMs to handle several critical tasks:

  • Data Extraction: Parsing offering memos, indentures, and loan tapes to extract structural and financial data.
  • Deal Code Generation: Producing executable waterfall models based on extracted rules.
  • Database Ingestion: Uploading validated deal terms and triggers into the RiskSpan system of record.
  • Surveillance Automation: Running periodic deal performance analyses and compliance checks.

But What About Hallucinations?

Generative models are powerful but imperfect. Without the right controls, they can fabricate securitization tranches or fees that are not present in the deal documents, misinterpret waterfall rules, omit critical override conditions, or generate semantically incorrect code for cash flow models. To address these challenges, RiskSpan employs a multi-layered safeguard framework combining asset-class-based extraction, LLM-as-judge evaluation, rule-based guardrails, and inline human review.
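
As a purely hypothetical illustration of the rule-based guardrail idea (not RiskSpan's actual implementation; the function and names are invented for this sketch), one simple check flags extracted tranche names that never appear in the source document text — a common hallucination pattern:

```python
def check_extracted_tranches(extracted: list[str], source_text: str) -> list[str]:
    """Rule-based guardrail: return extracted tranche names absent from the
    source document, so they can be routed to inline human review."""
    return [name for name in extracted if name not in source_text]

# Toy example with an invented snippet of deal-document text:
doc = "Class A-1 Notes, Class A-2 Notes and Class B Notes are issued under the Indenture."
flagged = check_extracted_tranches(["Class A-1", "Class B", "Class C"], doc)
print(flagged)  # ['Class C'] is flagged for human review
```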

Humans in the Loop: Three Layers of Oversight

We’ve embedded human validation at three key points in the deal lifecycle:

  1. Pre-Modeling Validation: Before LLM-generated outputs are finalized, RiskSpan analysts review extracted terms and model prompts, correcting anything misaligned with the source documents.
  2. Inline Oversight: During waterfall code generation, humans validate AI-generated logic in context, ensuring correct treatment of subordination, triggers, caps/floors, and other structural features.
  3. Post-Deployment Surveillance Monitoring: Outputs are reviewed both by the RiskSpan team and by client-side structuring or credit teams. Feedback is looped back into model tuning and prompt optimization.

Looking Ahead: RAG and Continuous Improvement

We’re actively exploring Retrieval-Augmented Generation (RAG) to reduce hallucinations even further. By grounding AI responses in deal-specific material such as offering documents, trustee reports, and internal risk memos, we aim to: 1) eliminate off-topic responses, 2) increase trust in model-derived outputs, and 3) enable deeper customization per issuer or asset class.

The Takeaway

AI is transforming how private ABF deals are modeled and monitored—but it must be grounded and guided by human expertise and built for institutional rigor. At RiskSpan, we’re not just accelerating workflows—we’re raising the bar for accuracy and trust in AI-assisted private structures. Human-in-the-loop is not a fallback—it’s a strategic pillar. Want to see how our AI platform works in action? Reach out to schedule a demo or contact your RiskSpan representative to learn more.


Design Smarter — How AI is Changing UX from Idea to Execution 

AI is revolutionizing everything, and the UX design process is no exception. From the earliest conceptual ideas all the way through to final execution, the transformation is not just about speeding up workflows but also about enhancing creativity and collaboration.  

Here’s how. 

Initial Ideation

Every UX journey begins with the ideation process. AI tools like Claude have become a go-to starting point for brainstorming and generating initial design prompts. By feeding basic requirements and user journeys into the AI, I can quickly generate a list of potential features and pain points. For example, when working on a new ETL tool, Claude helped identify potential difficulties in data mapping, handling large datasets, and ensuring data accuracy during the transformation process. Claude then used these pain points to generate a list of requirements and user journeys, which informed a first-pass prototype. 

This initial step is crucial as it sets the foundation for the entire design process. 

Rapid Prototyping

Once the ideation phase is complete, the next step is creating a first-pass prototype. Claude has helped me here by generating quick, functional prototypes that provide a visual representation of the overall application. Although not fully functional, these prototypes nevertheless offer a solid starting point for further refinement. This rapid prototyping capability allows me to iterate quickly and incorporate feedback more efficiently. 

After the initial prototype is created, I import it into Figma for refinement. This is where the design gets polished with logos, color schemes, and other branding elements. This is a highly collaborative phase of the process, where designers work closely with developers and test users to finalize the look and feel of the application. This step ensures that the design is not only functional but also visually appealing. 

Code Development

The final stage involves turning the refined design into a working application. Here, remarkably, AI tools like Claude and Cursor (an AI-enhanced version of VS Code) can actually generate and refine the code itself. By providing the AI with an image of the final design, it can produce a close approximation of the user interface, which can then be fine-tuned by developers. For example, I might ask Claude to generate a sample layout based on the refined design and then use Cursor to make specific changes, such as adjusting font sizes and colors. This significantly reduces the time and effort required to build the front end of the application. 

Real-World Application and Testing — Collaboration and Continuous Improvement

The iterative nature of AI tools allows for rapid prototyping and testing, leading to a more efficient development cycle. While AI-generated code might not be perfect, the ability to quickly identify and fix bugs makes the process much faster than traditional methods. For instance, I used Cursor to fix errors in the code simply by highlighting them and asking it to correct the issues. 

But collaboration remains supremely important. AI tools facilitate cross-functional teamwork by making it easier to share prototypes and gather feedback. This collaborative approach ensures that the final product meets the needs of all stakeholders. Additionally, the iterative nature of AI tools means that the design can continuously evolve based on user feedback and testing. 

—————- 

AI is not just a tool for speeding up the UX design process; it’s a catalyst for innovation and collaboration. By leveraging AI for ideation, prototyping, and code development, designers can create smarter, more efficient workflows that lead to better user experiences. The future of UX design is not just about working faster but also about working smarter. 


From AI Hype to Helpful Assistant: AI Agents Are Coming Soon to the RiskSpan Platform!

When agentic AI first hit the scene, we were intrigued—but skeptical. Was this just another over-hyped trend or something that could drive real value?

Fast forward a few months, and we’ve got our answer.

At RiskSpan, we’ve quietly integrated AI agents into our internal workflows through a dedicated Agent Desktop. These agents are now core to how we manage our business—monitoring client health, tracking system usage, and, perhaps most impressively, performing deep research across the massive datasets we store. What began as an experiment has become indispensable.

The real breakthrough, however, comes when agents proactively uncover insights, flag anomalies, and automate routine analyses. Our Dev and Client teams are saving hours and making faster, more informed decisions because relevant information finds them.

Coming soon, our clients will be able to use Agents in the RiskSpan Platform to query their own data, analyze GSE performance data and run on-demand analysis instantly—all without waiting on a queue or building custom reports.

Designed for portfolio risk management, surveillance, loan-level data analysis, and market trend exploration, the AI agents will help you go from question to answer in seconds. Check out a sample below and reach out to learn more!


AI-Powered Code Reviews

Our firm recently implemented a pilot that promises to dramatically accelerate our developer workflow by leveraging AI in code reviews. Feedback is now instant and actionable – and available in the very environments where our developers work.

The Problem: Time-consuming pull requests

A pull request is a developer’s proposal (after writing code to solve an issue/feature/bug) to merge changes in one branch of a code repository into (usually) the main branch. The resulting merged code is what gets promoted to production.

This is generally how junior developers submit code changes for review by more senior developers. These code reviews are critical but eat up a lot of senior developer time. Senior developers at RiskSpan face many of the same challenges as senior developers everywhere in that they juggle multiple priorities and struggle to find the time necessary to provide thorough, timely feedback on every pull request. This can lead to delays, inconsistent quality, and “technical debt” over time.

The Solution: An AI Merge Agent

At AWS re:Invent, we discovered Qodo Merge, an AI-powered tool that automates and enhances pull request reviews. Now live in our Bitbucket repositories, Qodo Merge:

  • Analyzes code changes and autogenerates pull request descriptions
  • Checks ticket compliance to ensure requirements are met
  • Flags security risks (e.g., command injection, cross-site scripting)
  • Suggests improvements in data processing, error handling, and logic
  • Provides real-time feedback, accelerating development cycles

Why this matters for our development lifecycle

This AI solution is revolutionizing and streamlining RiskSpan’s software development process by:

  • Standardizing our code review best practices
  • Reducing technical debt by enforcing quality baselines
  • Accelerating junior developers and making them more efficient by providing instant guidance
  • Freeing up senior engineers to focus their efforts on high-impact strategic work

What’s Next?

Having completed our initial pilot testing, we are now rolling out Qodo Merge across RiskSpan’s various code repositories. Next up: training sessions and broader adoption across all of our modeling and engineering teams.

AI is transforming how we build, validate, and deploy code. Stay tuned for insights on how this initiative is improving our development speed and quality!

