Blog

  • Markets and Tokenization: Why Financial Infrastructure Is Starting to Change

    Tokenization has long been described as one of the most promising transformations in modern finance. For years, however, the concept remained suspended between technological enthusiasm and limited market reality. That balance is now beginning to shift. What makes tokenization relevant today is no longer only the possibility of representing assets on distributed ledgers, but the growing recognition among central banks, international standard setters, and market participants that tokenization may alter how financial markets issue, trade, settle, and service assets. In other words, tokenization is increasingly being discussed not as a niche digital-asset experiment, but as a question of market infrastructure.

    At its core, tokenization means creating or representing an asset on a shared, trusted, and programmable digital ledger. The IMF has framed the issue in precisely these terms, stressing that the economic significance of tokenization lies less in the label itself than in the combination of three attributes: sharedness, trust, and programmability. This matters because financial markets are still shaped by frictions that arise across the lifecycle of assets—from issuance and trading to servicing and redemption. Tokenization is attractive not because it magically removes intermediation, but because it may reduce some of these frictions by allowing multiple parties to operate within a more integrated technological environment.

    The appeal is easy to understand. Today’s market infrastructure often relies on layered and sequential processes: one system records ownership, another handles settlement, another supports custody, and others manage collateral, reconciliation, and reporting. Tokenized arrangements promise to compress some of these steps into a more unified architecture. The BIS has argued that tokenization could both improve the existing system and enable new forms of financial contracting, especially in cross-border payments and securities markets. The ECB has similarly emphasized, as recently as March 2026, that tokenized assets and DLT-based infrastructures could make wholesale financial markets faster, cheaper, more integrated, and operational on a near-continuous basis.

    This is why the tokenization debate has moved beyond crypto-native circles. Recent Eurosystem work shows that the discussion is now firmly located in mainstream financial-market policy. Between May and November 2024, the Eurosystem conducted more than fifty trials and experiments with sixty-four market participants on DLT use in wholesale financial markets, and in 2025–2026 the ECB moved toward a strategic roadmap for Europe’s tokenized finance architecture. That evolution is important because it signals an institutional change in tone: tokenization is no longer viewed only as a speculative frontier, but as a possible layer of future market plumbing.

    And yet, despite the momentum, the market remains at an early stage. The OECD has noted that although interest in tokenization has grown and the distinction between crypto-assets and regulated tokenized assets has become clearer, actual adoption remains scarce. IOSCO reaches a similar conclusion: tokenization arrangements remain a small part of the financial sector, even if issuance and experimentation are increasing in selected jurisdictions. That gap between attention and adoption is perhaps the most important fact in the entire debate. Tokenization matters, but it is not yet dominant. The technology is advancing faster than market-wide implementation.

    There are several reasons for this. First, tokenization only creates broad efficiency gains if a critical mass of market participants uses interoperable systems. Finance is a network industry: isolated efficiency inside one firm does not necessarily translate into market-wide improvement. Second, legal and operational certainty still matter more than technological elegance. Ownership, finality, custody, insolvency treatment, asset servicing, and recordkeeping all require robust answers before tokenized markets can scale. Third, many legacy infrastructures already perform their core functions reasonably well. Replacing them demands not only innovation, but a compelling cost-benefit case. The IMF has gone so far as to suggest that, in some cases, the real novelty may lie less in “sharedness” alone and more in the programmability that tokenized systems can introduce.

    That last point is crucial. The strongest case for tokenization is not simply digitization, but programmable finance. Once assets exist as programmable tokens, the possibility emerges for conditional transfers, atomic settlement, automated compliance checks, composability across financial products, and more dynamic collateral use. IOSCO identifies precisely these features—fractionalization, programmability, composability, and atomicity—as among the main reasons why tokenization may reduce market friction and expand the design space of financial products and services. This is the aspect of tokenization that may prove most transformative over time: not merely faster settlement, but new market logic embedded into the asset itself.
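    The atomicity described above can be made concrete with a small sketch. The following is an illustrative model of atomic delivery-versus-payment on a single shared ledger: both legs of a trade settle in one step, or neither does. All names (`Ledger`, `atomic_swap`, the asset labels) are hypothetical; real tokenized platforms implement this logic in smart contracts.

```python
# Illustrative sketch of atomic delivery-versus-payment (DvP) on a shared
# ledger. Both legs of the swap succeed together or fail together, so no
# party is ever left holding only one side of the trade.

class Ledger:
    def __init__(self):
        # balances[account][asset] -> quantity held
        self.balances = {}

    def credit(self, account, asset, qty):
        self.balances.setdefault(account, {}).setdefault(asset, 0)
        self.balances[account][asset] += qty

    def balance(self, account, asset):
        return self.balances.get(account, {}).get(asset, 0)

    def atomic_swap(self, buyer, seller, security, qty, cash, price):
        """Transfer security against cash in one step: both legs or neither."""
        if self.balance(buyer, cash) < price or self.balance(seller, security) < qty:
            return False  # a precondition fails -> no partial settlement occurs
        self.credit(buyer, cash, -price)
        self.credit(seller, cash, price)
        self.credit(seller, security, -qty)
        self.credit(buyer, security, qty)
        return True

ledger = Ledger()
ledger.credit("buyer", "EUR", 100)
ledger.credit("seller", "BOND", 1)
assert ledger.atomic_swap("buyer", "seller", "BOND", 1, "EUR", 100)
assert ledger.balance("buyer", "BOND") == 1 and ledger.balance("seller", "EUR") == 100
```

    The point of the sketch is the all-or-nothing precondition check: in today's layered infrastructure, the two legs live in separate systems and must be reconciled after the fact.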

    The effects on markets could therefore be significant. In primary markets, tokenization may simplify issuance and broaden access to certain asset classes, including through fractionalization. In secondary markets, it may reduce reconciliation burdens and shorten settlement chains. In post-trade environments, it may improve collateral mobility and create more integrated asset servicing. More broadly, tokenization may reduce some of the informational and transactional frictions that still produce inefficiencies in capital allocation. The IMF’s analysis is particularly useful here because it treats tokenization not as ideology, but as a mechanism that may affect search costs, transaction costs, counterparty risk, and other frictions across the asset lifecycle.

    Still, efficiency is only one side of the equation. Tokenization also raises questions about market structure, competition, and concentration. If tokenized infrastructures are built by partial coalitions of brokers, exchanges, custodians, or technology providers, they may create new forms of exclusion or lock-in. An IMF working paper published in 2025 makes precisely this point: tokenized markets with faster and cheaper settlement can generate socially suboptimal outcomes if coalition formation is incomplete or interoperability is weak. This means that tokenization is not just a technical choice; it is also a policy question about access, coordination, and the architecture of financial competition.

    From a regulatory perspective, the message emerging from international bodies is relatively clear. Tokenization does not suspend existing financial-law principles. IOSCO expressly stresses that the use of a new technological medium should not, in itself, materially alter the applicability of established regulatory principles. In practice, that means investor protection, market integrity, disclosure, governance, custody, operational resilience, and conflict-management rules remain central even when assets move to tokenized environments. The real challenge is not whether regulation continues to apply, but how it should be adapted to infrastructures that are more programmable, more interconnected, and potentially more continuous than traditional systems.

    For Europe, this makes tokenization a strategic issue rather than a fashionable one. If tokenized markets develop around settlement assets, ledger interoperability, and programmable market functions, then the debate is ultimately about who will shape the rails of future finance. Recent ECB messaging suggests a strong institutional interest in ensuring that tokenized financial markets do not evolve in a fragmented manner detached from central-bank settlement foundations. The BIS has framed the issue in similarly systemic terms, arguing that tokenized platforms anchored by central bank reserves, commercial bank money, and government bonds could help lay the groundwork for a next-generation monetary and financial system.

    The most realistic conclusion is therefore neither utopian nor dismissive. Tokenization will not instantly remake financial markets, and the current level of adoption remains limited. But it is increasingly difficult to regard it as marginal. What is changing is not only the technology, but the institutional seriousness with which it is being examined. Markets may ultimately adopt tokenization not because it is novel, but because, in specific segments, it may prove better at integrating issuance, trading, settlement, servicing, and compliance into a more coherent operational framework. If that happens, tokenization will matter less as a “digital asset” story and more as a story about the redesign of market infrastructure itself.

  • Blockchain as an Enabler of Agentic AI

    Agentic AI is usually presented as the next evolutionary step beyond generative AI: systems capable not only of producing outputs, but of pursuing objectives through planning, tool use, adaptation, and multi-step execution. Yet the more AI becomes “agentic,” the more it encounters an old problem in digital systems: trust. If an agent can act, transact, delegate, negotiate, and coordinate with other agents, the key question is no longer only what it can do, but how its identity, permissions, actions, and incentives can be verified. This is precisely where blockchain may become less a speculative accessory and more an enabling infrastructure. Recent work on agentic AI identity and governance increasingly converges on four needs—authentication, provenance, auditability, and interoperable trust—which are also classic strengths of distributed ledger architectures.

    The first contribution blockchain can make to agentic AI is machine identity. Human-centric login and authorization models are poorly suited to an environment in which software agents act persistently, sometimes across platforms, on behalf of users, firms, or even other agents. The OpenID Foundation’s 2025 report on identity management for agentic AI argues that agents require new authentication and authorization frameworks precisely because they operate across contexts, hold delegated authority, and may act for multiple principals. In parallel, the W3C’s DID framework defines decentralized identifiers as a way to establish verifiable, decentralized digital identity without reliance on a single registry or identity provider. In practice, this means blockchain-based or blockchain-anchored identity systems can help assign persistent, tamper-evident identities to agents, enabling them to prove who they are, what credentials they hold, and what authority has been delegated to them.
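    To make the DID idea less abstract, here is the minimal shape of a W3C DID document expressed as a Python dict. The identifiers and key material are placeholders; a real DID for an agent would be resolved through a verifiable data registry, which may be a blockchain.

```python
# Minimal shape of a W3C DID document for a software agent, as a Python
# dict. All identifiers and key values below are illustrative placeholders.

did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:agent-123",               # the agent's identifier
    "controller": "did:example:principal-456",   # who delegated authority
    "verificationMethod": [{
        "id": "did:example:agent-123#key-1",
        "type": "Ed25519VerificationKey2020",    # illustrative key type
        "controller": "did:example:agent-123",
        "publicKeyMultibase": "z6Mk-placeholder",  # placeholder key material
    }],
    # Which keys the agent may use to prove it is who it claims to be:
    "authentication": ["did:example:agent-123#key-1"],
}
```

    The `controller` field is what carries the delegation relationship: it links the agent's identity back to the principal on whose behalf it acts.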

    The second area is trustworthy provenance. Agentic systems depend on long chains of perception, retrieval, reasoning, and action. As these chains become more autonomous, the ability to reconstruct who did what, on which data, under which permissions, and with which outputs becomes critical. This is especially true in high-stakes sectors such as finance, healthcare, and public administration. Blockchain does not solve truth at the input layer, but it can create immutable logs of decisions, tool invocations, model states, approvals, and transaction histories. This can materially strengthen accountability and reduce disputes over whether an agent followed instructions, exceeded its mandate, or operated on manipulated inputs. Emerging research on blockchain-monitored agentic architectures and legal infrastructure for the “agentic web” explicitly treats distributed ledgers as a foundation for verifiable transactions, registries, and adjudicable action trails in machine-mediated environments.
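    The core mechanism behind such immutable action trails is a hash chain: each log entry commits to the hash of the previous one, so any retroactive edit is detectable. The sketch below shows this in miniature, with invented entry fields; a blockchain generalizes the same idea across mutually distrusting parties.

```python
import hashlib
import json

# Illustrative tamper-evident action log for an agent: each entry commits
# to the previous entry's hash, so a retroactive edit breaks the chain.

def entry_hash(entry):
    # Canonical JSON serialization so the hash is deterministic.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(log, action):
    prev = log[-1]["hash"] if log else "0" * 64   # genesis marker
    entry = {"action": action, "prev": prev}
    entry["hash"] = entry_hash({"action": action, "prev": prev})
    log.append(entry)

def verify(log):
    prev = "0" * 64
    for e in log:
        recomputed = entry_hash({"action": e["action"], "prev": e["prev"]})
        if e["prev"] != prev or e["hash"] != recomputed:
            return False          # chain broken: something was altered
        prev = e["hash"]
    return True

log = []
append(log, {"tool": "search", "query": "bond prices"})
append(log, {"tool": "order", "instruction": "buy 10"})
assert verify(log)
log[0]["action"]["query"] = "tampered"   # retroactive edit is detected
assert not verify(log)
```
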

    A third contribution lies in multi-agent coordination. Agentic AI becomes truly transformative when agents do not merely assist humans one by one, but interact with other agents in open or semi-open ecosystems. At that point, coordination problems emerge: discovery, role allocation, commitment enforcement, payment, dispute resolution, and reputation. Blockchain is useful here because it offers a shared state layer in which commitments can be recorded, conditions can be enforced through smart contracts, and economic interactions can occur without every participant depending on the same intermediary. Recent academic and policy discussions increasingly describe a future in which blockchain supports interoperable, economically active agent networks by providing verifiable transaction rails and shared coordination rules. In that sense, blockchain can serve as a governance substrate for machine collaboration, not just as a payment rail.

    The fourth and perhaps most practical contribution is agent-native payments and programmable incentives. Agentic systems will struggle to scale commercially if every action requiring payment, licensing, access, or settlement must be routed through human billing flows, bank forms, or closed platform wallets. Several recent discussions in the payments and digital-assets space point to programmable money, micropayments, and machine-to-machine settlement as increasingly relevant use cases for distributed systems. The BIS Innovation Hub’s 2025 workshop report identifies digital identity, programmable money, and micropayment flows among the key use cases for scalable distributed architectures. In this context, blockchain-based payment rails—especially where settlement logic is programmable—can give AI agents a native economic layer: they can pay for APIs, compute, data, storage, content, or execution outcomes in granular increments. That does not imply that every such payment must occur on a public chain, but it does suggest that tokenized and programmable infrastructures are unusually well suited to machine actors operating at internet speed.
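    A minimal sketch of what an agent-native payment layer implies: an agent pays a tiny amount per API call from a programmable wallet, with a spending cap enforced in code rather than through a human billing flow. All names and amounts are invented; balances are in integer micro-units to avoid rounding issues.

```python
# Hypothetical sketch of agent-native metered payments. The spending cap
# is a governance limit set by the principal and enforced programmatically.
# Amounts are integers (e.g. micro-units of currency), never floats.

class AgentWallet:
    def __init__(self, balance, per_call_price, spend_cap):
        self.balance = balance
        self.per_call_price = per_call_price
        self.spend_cap = spend_cap   # total the agent may spend autonomously
        self.spent = 0

    def pay_per_call(self):
        cost = self.per_call_price
        if self.spent + cost > self.spend_cap or cost > self.balance:
            return False             # cap or balance exceeded: refuse to act
        self.balance -= cost
        self.spent += cost
        return True

wallet = AgentWallet(balance=100, per_call_price=1, spend_cap=5)
calls = sum(wallet.pay_per_call() for _ in range(10))
assert calls == 5   # the cap permits only five unit payments
```

    The design choice worth noticing is that the limit lives inside the payment rail itself, which is exactly what programmable settlement adds over a conventional billing relationship.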

    A fifth area is governance by design. One of the central concerns with agentic AI is that it becomes difficult to separate the intentions of the user, the constraints of the developer, the incentives of the platform, and the behavior of the model. Blockchain can help by externalizing some governance rules into transparent and reviewable mechanisms. Smart contracts, registries, credential checks, and cryptographically signed policies can create enforceable boundaries around what an agent may do, with whom, and under what conditions. This is particularly relevant for enterprise and regulated use cases, where agent autonomy must remain bounded by ex ante rules and ex post auditability. The attraction, in other words, is not decentralization for its own sake, but the possibility of embedding governance into operational architecture.
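    The "governance by design" idea can be sketched as a signed policy that an executor verifies before honoring any agent action. The policy format, key handling, and action names below are simplified placeholders; a production system would use asymmetric signatures and on-chain or registry-anchored policies.

```python
import hashlib
import hmac
import json

# Sketch of governance by design: an agent may only execute actions that a
# cryptographically signed policy permits. The principal signs the policy;
# the executor verifies the signature before enforcing its limits.

PRINCIPAL_KEY = b"principal-secret"   # illustrative shared secret

def sign_policy(policy):
    payload = json.dumps(policy, sort_keys=True).encode()
    return hmac.new(PRINCIPAL_KEY, payload, hashlib.sha256).hexdigest()

def allowed(policy, signature, action, amount):
    payload = json.dumps(policy, sort_keys=True).encode()
    expected = hmac.new(PRINCIPAL_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False                  # policy was tampered with: refuse
    return action in policy["actions"] and amount <= policy["max_amount"]

policy = {"actions": ["rebalance", "report"], "max_amount": 1000}
sig = sign_policy(policy)
assert allowed(policy, sig, "rebalance", 500)
assert not allowed(policy, sig, "trade", 500)        # action not delegated
assert not allowed(policy, sig, "rebalance", 5000)   # exceeds the mandate
```
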

    At the same time, it would be naïve to frame blockchain as a universal cure for agentic AI’s problems. Distributed ledgers introduce their own frictions: scalability limits, privacy trade-offs, governance complexity, legal uncertainty, and integration costs. The BIS has repeatedly stressed that for digital identity, cross-border payments, and programmable money, the main barriers are not purely technical but institutional and legal. Moreover, privacy remains a major issue: immutable logging can strengthen accountability, but it can also create tensions with confidentiality, commercial secrecy, and data protection. For this reason, the most credible architectures are likely to be hybrid ones—combining off-chain computation, privacy-enhancing technologies, selective disclosure, and on-chain verification only where immutability and shared trust are truly needed.

    The broader point is that agentic AI needs an infrastructure of trust if it is to evolve beyond siloed assistants into durable economic actors. Blockchain may supply part of that infrastructure by giving agents verifiable identity, persistent memory of commitments, auditable action trails, programmable value transfer, and rule-based coordination in environments where no single platform can be assumed to be neutral or universally trusted. The likely future, therefore, is not one in which blockchain and AI converge as a matter of hype, but one in which blockchain quietly provides the institutional plumbing for agentic systems that must identify themselves, prove authorization, transact, and be held accountable. In that sense, blockchain does not make agentic AI more intelligent. It makes it more governable.

  • Agentic AI and Financial Markets: From Automation to Market Infrastructure

    Agentic AI is often described as the next stage of artificial intelligence: systems that do not merely generate content or assist human users, but can plan, execute, monitor, and adapt multi-step tasks with a degree of operational autonomy. In financial markets, this transition matters because the relevant question is no longer whether AI can support analysis, but whether it can begin to act within workflows that affect trading, distribution, payments, compliance, supervision, and ultimately market structure itself. The issue is not simply technological. It is institutional, prudential, and legal.

    The first point to clarify is that “agentic” does not mean fully autonomous in the science-fiction sense. In practice, the emerging model is one in which AI systems are given bounded objectives, access to selected tools and data, and the ability to sequence actions across a workflow: retrieving information, comparing alternatives, generating outputs, escalating exceptions, and in some cases initiating execution subject to rules or human approval. This is why agentic AI is likely to matter more in finance than many earlier AI waves. Financial markets are not only information-rich; they are process-rich. Much of their value chain consists of repetitive but high-stakes sequences of analysis, decision support, verification, and execution.
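    The bounded-agency model described above can be reduced to a small skeleton: retrieve, decide, then either execute a pre-authorized action or escalate to a human. The thresholds and handler functions below are invented for illustration only.

```python
# Illustrative skeleton of a bounded agentic workflow with a human in the
# loop: the agent acts autonomously only above a confidence threshold and
# escalates everything else as an exception.

def run_workflow(task, retrieve, decide, execute, escalate, auto_threshold=0.9):
    info = retrieve(task)                     # gather information
    action, confidence = decide(task, info)   # compare alternatives
    if confidence >= auto_threshold:          # rule-bound, pre-authorized path
        return execute(action)
    return escalate(task, action, confidence)  # exception goes to a human

result = run_workflow(
    task="rebalance-check",
    retrieve=lambda t: {"drift": 0.12},
    decide=lambda t, i: ("rebalance", 0.95) if i["drift"] > 0.1 else ("hold", 0.99),
    execute=lambda a: f"executed {a}",
    escalate=lambda t, a, c: f"escalated {t}",
)
assert result == "executed rebalance"
```

    The same skeleton with a lower confidence value routes the action to `escalate` instead, which is the human-in-the-loop pattern that dominates current deployments.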

    That said, the current state of adoption suggests that the market is still in a transitional phase. ESMA’s recent evidence on AI in EU securities markets shows that in 2024 adoption remained uneven: 28% of respondents reported using AI in production or development, another 22% were experimenting or planning use within 12 months, 92% of reported use cases were internal rather than client-facing, and 90% involved a human in the loop. The main perceived benefits were improved internal processes, cost reduction, and the ability to analyse large volumes of data, while key challenges concerned data quality, data protection, and data governance. This is highly significant. It suggests that, at least in Europe, finance is not yet moving toward fully autonomous market actors; it is moving toward increasingly autonomous financial workflows.

    This distinction matters because the real impact of agentic AI on financial markets will likely be structural before it becomes spectacular. In the short term, the most visible gains will arise in research, compliance, surveillance, onboarding, documentation, internal controls, portfolio support, treasury operations, and customer interaction. The IMF has already observed that recent AI breakthroughs may dramatically increase the efficiency of capital markets through process automation and the analysis of complex unstructured data, with effects already beginning to appear in trading, investment, and asset allocation. BIS research, moreover, indicates that AI agents in payment-system cash management could reduce operational costs, improve efficiency, and enhance resilience, albeit only with adequate safeguards and human oversight.

    The next phase, however, is more consequential. Once AI systems can coordinate multiple tasks rather than merely complete one task at a time, they become relevant not only to individual firms but also to market functioning. In asset management, agentic systems may continuously gather market intelligence, reconcile internal and external data, test scenarios, produce risk memos, and propose rebalancing actions. In brokerage and wealth contexts, they may evolve into digital financial copilots that monitor portfolios, identify deviations from investment mandates, prepare client communications, and eventually trigger pre-authorised actions. In market infrastructure, they may support collateral optimisation, liquidity forecasting, payments orchestration, and exception handling. In each of these domains, the economic attraction is the same: lower latency between information and action.

    Yet this same compression of time between signal and execution is also where systemic concern begins. Financial authorities have been increasingly explicit that AI does not only create firm-level benefits; it can also amplify market-wide vulnerabilities. The FSB has warned that wider AI uptake may increase third-party dependencies and service provider concentration, particularly because effective AI deployment often relies on specialised hardware, cloud infrastructure, pre-trained models, and concentrated data services. The ECB has similarly stressed that if technological penetration and supplier concentration become simultaneously high, micro-level AI risks may become macro-relevant. In parallel, the Bank of England has emphasised the need for a flexible and forward-looking monitoring framework, precisely because rapid shifts in AI capabilities can translate into new financial stability risks.

    This is the core paradox of agentic AI in finance. The more capable the technology becomes, the more institutions may rely on the same foundational stack: the same cloud providers, the same model vendors, the same data sources, the same optimisation logics, and perhaps, over time, the same execution pathways. This creates at least four market-level risks.

    First, there is a correlation risk. If many firms rely on similar models trained on similar data and optimised around similar objectives, their reactions to market signals may converge. The FSB explicitly notes that broader AI usage may lead to common modelling approaches and common training data sources, thereby increasing market correlations. In normal times this may simply look like efficiency; under stress it may resemble synchronised behaviour.

    Second, there is operational concentration risk. The Bank of England and FCA found that one third of AI use cases in UK financial services were already third-party implementations in 2024, and that the top three firms in each category accounted for 73% of reported cloud provision, 44% of model provision, and 33% of data provision. That finding is striking because it shows how quickly AI can deepen already familiar outsourcing and cloud-dependency issues. Agentic systems may intensify this trend, as firms may prefer turnkey orchestration layers over costly internal development.

    Third, there is opacity risk. Agentic systems can make finance faster while making responsibility harder to localise. If an AI-driven workflow proposes, ranks, filters, escalates, and partially executes actions, the traditional chain of accountability becomes harder to reconstruct. This is one reason explainability is becoming a supervisory priority. BIS Innovation Hub’s Project Noor is expressly designed to help supervisors evaluate the inner workings of AI models used by banks and other financial institutions, including their transparency, fairness, and robustness. The direction of travel is clear: markets may tolerate complexity, but supervisors will increasingly insist on auditability.

    Fourth, there is conduct and consumer risk. As AI moves closer to financial recommendations and action initiation, the distance between “assistance” and “advice” narrows. ESMA has already warned retail investors about the use of public AI tools for investing, noting that such tools may produce inaccurate or misleading outputs and can lead to poor investment decisions and significant losses. In an agentic setting, that concern becomes even more acute, because the issue is no longer only whether a recommendation is persuasive, but whether the surrounding system architecture nudges or automates action.

    For Europe, the regulatory context is now impossible to ignore. The EU AI Act entered into force on 1 August 2024 and becomes fully applicable on 2 August 2026, with phased obligations already in effect for prohibited practices, AI literacy, governance, and GPAI-related duties. For financial institutions, the AI Act will not replace sectoral rules; it will interact with them. In other words, firms deploying agentic AI in finance will need to manage a layered framework composed of financial regulation, outsourcing and operational-resilience requirements, data protection law, consumer-protection rules, model governance, and—where relevant—the AI Act itself. Recent ECB supervisory messaging captures the position well: the objective is not to slow AI adoption, but to ensure that banks integrate it prudently, coherently, and under effective control before usage reaches systemic scale.

    The most plausible conclusion, then, is not that agentic AI will “replace” financial markets or human judgment. Rather, it will reconfigure the allocation of judgment inside market processes. Humans will move upward, toward governance, exception management, strategic oversight, and ex post accountability, while machines move downward and sideways, into monitoring, triage, orchestration, and conditional execution. The firms that benefit most will probably not be those that pursue maximal autonomy, but those that design credible boundaries around autonomy.

    For Prometeus Fintech Journal, the key takeaway is therefore a sober one. Agentic AI is not simply another productivity tool for front-office experimentation. It is becoming part of the institutional fabric of finance. Its effect on markets will depend less on how impressive the models appear in demo environments and more on how their deployment reshapes incentives, dependencies, accountability chains, and systemic interconnections. The decisive legal and policy question is no longer whether AI can think about markets. It is whether financial markets can remain governable once AI systems begin to act within them.