In the United States, swiping a credit card, binding a card to a subscription, or topping up a stablecoin balance on an exchange usually takes only a few seconds. But behind those seemingly simple actions sits a dense mesh of risk controls, compliance checks, and privacy safeguards that runs continuously in the background.
Over the past few years, Qiming Xu has published a series of papers on payments, privacy-preserving computation and natural language processing that lay out a fairly complete technical roadmap: using machine learning and cryptography to reduce fraud, protect personal data, and help institutions build a more stable “buffer zone” between traditional payments and the fast-moving world of blockchain and crypto assets—without adding more friction for end users.
In papers such as Artificial Intelligence in Risk Protection for Financial Payment Systems and Innovation in Economic and Financial Management Models Based on Big Data Technology Analysis, Xu systematically discusses how to use machine learning and big-data frameworks to run more granular risk assessments and segmented management over banks’ and payment processors’ transaction flows. In Design of Privacy-Preserving Personalized Recommender System Based on Federated Learning, he brings federated learning into financial recommendation scenarios to avoid centralizing user-behavior data. And in Applications of Explainable AI in Natural Language Processing and Automatic News Generation and Fact-Checking System Based on Language Processing, he combines explainable AI with automated fact-checking to filter market noise and spot “news-driven” fraud schemes.
The goal is not to tune one company’s internal metrics, but to build a set of foundational capabilities that can be reused across institutions and asset types—covering both traditional fiat rails and blockchain-based crypto transactions.
From Rule Books to Adaptive Models: Smarter Security Checks Without More Friction
In traditional card-payment risk systems, many controls are hard-coded rules: a transaction over a certain amount triggers review; several cross-border payments in a short window lead to a block or extra confirmation. These mechanisms do stop some attacks, but they also frequently catch legitimate users in the net.
In his risk-control work, Xu argues for replacing a pure rules mindset with a combination of supervised and unsupervised models that learn behavior patterns instead of only watching “amount” and “location.”
Under this framework, the system looks at past spending habits, device fingerprints, merchant category, historic chargebacks, and other signals, then assigns each transaction a continuous risk score instead of a binary “approve/decline.” That changes the experience on the consumer side:
For low-risk, everyday activity—monthly subscription renewals, small grocery purchases—the system can cut back on extra challenges and reduce the number of “good customers” being blocked.
For transactions that score high but are not obviously fraudulent, the system can push an additional SMS code, 3-D Secure challenge, or other step-up verification, turning risk into a more precise identity check instead of an outright decline.
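The routing logic described above can be sketched in a few lines. This is an illustrative toy, not code from Xu's papers; the threshold values and function name are invented for the example:

```python
def route_transaction(risk_score: float,
                      step_up_at: float = 0.6,
                      decline_at: float = 0.9) -> str:
    """Map a model's continuous risk score (0.0-1.0) to an action.

    Thresholds are illustrative; in production they would be tuned
    against fraud losses and false-decline ("insult") rates.
    """
    if risk_score >= decline_at:
        return "decline"      # near-certain fraud: block outright
    if risk_score >= step_up_at:
        return "step_up"      # ambiguous: SMS code or 3-D Secure challenge
    return "approve"          # low risk: no extra friction for the user
```

The key design point is the middle band: instead of one cutoff that either blocks or approves, an ambiguous score converts into an identity check, so friction lands only on the uncertain slice of traffic.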
For U.S. issuers, processors, and firms running BNPL or digital-wallet products, this model offers a concrete path: rather than depressing conversion across the board, it concentrates fraud-mitigation effort and cost on the genuinely suspicious slice of traffic.
The same modeling approach can be applied directly to crypto exchanges and fiat-to-crypto gateways. When monitoring dollar deposits, stablecoin conversions, and on-chain withdrawals, the system can combine on-chain behavioral patterns with off-chain account activity and tighten reviews on transactions that touch high-risk addresses or suspicious flows—without freezing large numbers of legitimate wallets via blunt “catch-all” rules. In practice, this strengthens AML and sanctions compliance at the points where fiat meets crypto.
A “Firewall” Between Personalization and Privacy: Federated Learning for User Profiles
Another line of work that directly affects end users is how to deliver "it understands me" financial and payment services without hoovering up all of their data into a central database.
Traditional recommender systems usually aggregate browsing, clicking, and payment histories on servers and train models there. In e-commerce and social media, this architecture has already fueled multiple privacy backlashes.
In Design of Privacy-Preserving Personalized Recommender System Based on Federated Learning, Xu proposes a different setup based on federated learning:
User-behavior data stays on the local device or inside an institution’s internal network.
Models are trained locally, and only parameter updates—not raw logs—are sent back to an aggregation server.
Before aggregation, techniques such as differential privacy are applied so that individual users cannot be reconstructed from the updates.
For U.S. fintech apps, mobile banking platforms, and services that include crypto wallets or trading, this means they can still deliver fine-grained personalization without dramatically increasing their privacy-compliance burden.
For example, the system can learn, on-device, a user’s risk appetite toward different crypto assets, their sensitivity to price swings, or their typical holding period. Aggregated models can then be used server-side to refine product ranking, alert thresholds, and risk notifications—without shipping clear-text behavior logs into a single data lake.
For U.S. consumers—many of whom have already been hit by high-profile data breaches—this “move the algorithm forward, keep the data back” design changes the risk profile in a very concrete way: even if a single provider is compromised, what attackers get is noisy model parameters, not a readable, person-by-person history of financial behavior.
Using Explainable AI and Automated Checks Against Herd Behavior and Scam Narratives
In markets, information itself is a source of risk—especially for retail investors and crypto holders. A fake partnership announcement or an over-promised airdrop can trigger short-term price spikes or enable outright scams.
In Applications of Explainable AI in Natural Language Processing and Automatic News Generation and Fact-Checking System Based on Language Processing, Xu lays out frameworks that combine NLP with explainable machine learning to spot anomalies in news, announcements, and social posts.
On one side, the system automatically generates structured news summaries and risk tags, labeling the entities involved, event types, and timelines.
On the other, by comparing against authoritative sources and historical patterns, the model flags narratives that diverge sharply from the norm and routes them for human review.
In traditional equity and bond markets, these tools help brokerages, newsrooms, and research desks filter out clickbait and factual errors. In the crypto and blockchain ecosystem, they can be used to detect classic “pump-and-dump narratives,” opaque yield promises, and projects that frequently rewrite their own stories—reducing the chance that retail traders are steered by noise rather than facts.
Explainability is key here. Instead of a black-box risk score, the model can show compliance teams and regulators why a post or white paper was flagged. Was it because the language matches past scam campaigns? Because the smart-contract address is tied, on-chain, to sanctioned entities? Or because promised returns are statistically inconsistent with comparable products? That kind of “explainable judgment process” is far easier to audit than a raw number, and fits better with how U.S. oversight is evolving.
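As a toy illustration of reason codes over a bare score, consider the sketch below. The scam-phrase list and the three-sigma rule are invented for the example and stand in for much richer learned features:

```python
def explain_flag(text, promised_apy, peer_apys,
                 scam_phrases=("guaranteed returns", "risk-free yield")):
    """Return a flag decision plus human-readable reasons, so reviewers
    see *why* a post was escalated rather than only a number."""
    reasons = []
    lowered = text.lower()
    for phrase in scam_phrases:
        if phrase in lowered:
            reasons.append(f"language matches known scam phrasing: '{phrase}'")
    # Statistical check: is the promised return an outlier vs. peers?
    mean = sum(peer_apys) / len(peer_apys)
    std = (sum((x - mean) ** 2 for x in peer_apys) / len(peer_apys)) ** 0.5
    if std > 0 and (promised_apy - mean) / std > 3.0:
        reasons.append("promised return is >3 sigma above comparable products")
    return {"flagged": bool(reasons), "reasons": reasons}
```

Each reason maps to an auditable check, which is exactly the property that makes such a system easier for compliance teams and regulators to interrogate.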
A Technical Bridge Between Traditional Finance and the Crypto World
Taken together, these lines of research address the same overarching problem: in an environment where payment methods are more diverse and asset classes range from dollars and stablecoins to on-chain tokens, how do you bake security, transparency, and privacy into the foundations without dumping extra burden on users?
At the transaction layer, finer-grained AI risk models lower the rate of false positives while being more sensitive to genuinely risky behavior—whether that behavior shows up at a point-of-sale terminal, in a mobile wallet, or in a crypto on-ramp/off-ramp.
On the user side, federated learning and privacy-enhancing techniques give banks, fintechs, and exchanges an engineering pattern for “personalized but not overexposed” financial services.
At the information layer, explainable NLP models and automated fact-checking improve institutions’ ability to filter deceptive narratives and detect signs of market manipulation—particularly valuable in the hyper-noisy crypto-asset space.
For TheStreet’s readers who track payments, banking, and digital-asset markets, these efforts don’t show up directly as a price move in a single stock or token. They show up in everyday questions like: “Can I swipe my card safely?”, “Will I lose the coins in my wallet to a phishing DM?”, “Is this ‘bullish news’ actually reliable?”
Technical details will keep evolving. But treating security, privacy, and explainability as hard design constraints—across both traditional rails and blockchain-based systems—is increasingly shaping how the next stage of U.S. finance and crypto infrastructure gets built.