[🇧🇩] Artificial Intelligence: Its Challenges and Prospects in Bangladesh

How AI could curb mobile banking fraud

12 April 2026, 17:00
UPDATED 12 April 2026, 17:00

Shuvashish Roy and Md Tuhin Rana



As mobile financial fraud grows increasingly sophisticated, a new intelligent system tracking how users type and swipe offers a powerful shield for honest customers.

Imagine a university student in Dhaka preparing to pay her final semester tuition fees, or a small business owner in a busy market counting their monthly earnings. Suddenly, they receive an urgent phone call from someone claiming to be a customer support executive from their trusted mobile banking provider. The caller sounds incredibly professional, warning them that their accounts will be permanently blocked due to a sudden system upgrade unless they verify their details immediately. Panicked and rushed, they follow the instructions without a second thought. Within minutes of a seemingly harmless interaction, the student's entire tuition and the business owner's hard-earned profits are completely drained. These scenarios are no longer isolated nightmares, but a reality for many across the country. Fraudsters have evolved far beyond simple trickery, deploying advanced social engineering tactics that bypass standard security measures and leave regular citizens financially devastated before they even realise what went wrong.

The mobile banking landscape in Bangladesh has experienced phenomenal growth over the last decade. Platforms such as bKash, Nagad, and Rocket have fundamentally transformed financial inclusion, bringing over 144 million registered users into the formal economy as of January 2026, according to Bangladesh Bank data, of which 5.70 lakh are youth accounts considered relatively more vulnerable.

Driven by rising internet penetration and the widespread availability of smartphones, everyday transactions have migrated to digital screens. However, this massive shift has simultaneously attracted highly organised fraud syndicates. As reported by The Daily Star in May 2024, a total of 48,586 personal mobile financial service accounts have been suspended by the Bangladesh Financial Intelligence Unit (BFIU) for suspected involvement in online gambling, betting, and hundi.

Fraudsters are siphoning millions of taka from unsuspecting users through fake investment schemes, cloned emergency numbers, and highly coordinated social engineering tactics. As transaction volumes surge to thousands of crores daily, the financial and emotional toll on everyday users is mounting. This growing epidemic directly threatens the core trust required for a thriving digital economy, making it a critical national issue that demands immediate intervention.

The primary vulnerability enabling these crimes lies in how current security systems operate. Traditional banking defences rely heavily on rigid, rule-based methods that simply monitor for obvious red flags, such as multiple incorrect PIN entries or exceptionally large, uncharacteristic transfers. Unfortunately, today's sophisticated fraudsters rarely hack into systems using brute force.

Instead, they manipulate victims into willingly sharing One-Time Passwords, or they use stolen credentials to log into the application in a completely standard manner. Because these criminals meticulously mimic legitimate login procedures, traditional security rules simply cannot distinguish between the actual account owner and a thief in a remote location. If the PIN matches and the One-Time Password is correct, the system blindly assumes the transaction is safe and processes the theft.

To combat this rapidly evolving threat, our research introduces a much smarter, highly adaptive framework. We focused on the emerging concept of behavioural biometrics, which operates on a simple but powerful principle: how you type, swipe, and scroll on your phone screen is as unique to you as a physical fingerprint. When this continuous behavioural data is combined with transactional patterns, such as where you are located, when you normally send money, and how much you typically transact, a highly comprehensive behavioural profile emerges.
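The behavioural profile described above can be pictured as a simple feature vector combining interaction and transaction signals. The sketch below is purely illustrative: the field names and values are assumptions, not taken from the authors' actual system.

```python
from dataclasses import dataclass, astuple

# Hypothetical behavioural-plus-transactional profile for one app session.
# Field names are illustrative, not the authors' real feature set.
@dataclass
class SessionProfile:
    mean_keystroke_interval_ms: float  # typing rhythm
    swipe_velocity_px_s: float         # how fast the user swipes
    scroll_speed_px_s: float           # scrolling pace
    txn_amount_bdt: float              # transaction amount in taka
    txn_hour: int                      # hour of day the transfer is made
    distance_from_home_km: float       # deviation from usual location

def to_feature_vector(p: SessionProfile) -> list[float]:
    """Flatten a session profile into a numeric vector for a model."""
    return [float(x) for x in astuple(p)]

session = SessionProfile(180.0, 950.0, 420.0, 2500.0, 14, 1.2)
features = to_feature_vector(session)
```

In a real deployment these signals would be captured continuously by the banking app and scored on every session, not just at login.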

Developing this solution involved a deliberate progression of machine learning models. We initially utilised an autoencoder to strictly profile normal user behaviour. We then moved to advanced networks capable of capturing time-based sequences, applied gradient boosting techniques, and finally combined these elements into a robust ensemble system capable of learning from vast amounts of data.

The performance results of this hybrid research work are highly encouraging for the future of mobile security. Our system achieved an impressive 97 percent fraud detection rate alongside 95 percent precision. To put this improvement into perspective, the initial baseline model operating alone missed 67 percent of fraudulent activities. This massive leap in accuracy means the system is not just catching more criminals, but doing so with remarkable exactness. The high precision rate offers a crucial practical benefit: fewer false alarms. This ensures that honest customers do not face frustrating delays or unexpectedly blocked accounts while attempting to make legitimate payments. In analysing the framework, we discovered that the most critical indicators for spotting an anomaly are the user's geographic location, combined directly with their unique scrolling and typing speeds.

For Bangladesh, adopting this kind of intelligent framework could be entirely transformative. Regulatory bodies like Bangladesh Bank, alongside leading mobile financial service providers, have an immediate opportunity to integrate these predictive models directly into their existing digital infrastructures. Because the framework is designed to be highly adaptive and locally relevant, it offers a real-time, deployable defence against the specific social engineering tactics currently prevalent in our ecosystem. Such proactive security mechanisms are absolutely essential for securing the next phase of the nation's journey toward a truly cashless society.

Securing our digital economy requires a rapid shift from reactive troubleshooting to proactive, artificial intelligence-driven defence. It is imperative for regulators, traditional banks, and fintech companies to collaboratively invest in advanced behavioural protection. By embracing these intelligent systems today, the financial sector can finally outpace the fraudsters and ensure that digital financial services remain a safe, empowering tool for every citizen.

Shuvashish Roy is a senior researcher at the Research & Innovation Division of Prime Bank PLC, and Md Tuhin Rana is a student of the Department of Statistics at the University of Dhaka.
 

AI-powered embedded finance: ethics at a crossroads

Sanjoy Pal

Published :
Apr 24, 2026 23:58
Updated :
Apr 24, 2026 23:58



Artificial Intelligence (AI), a significant technological advancement, is influencing activities across the globe, from less developed rural areas to highly industrialised economies. Its applications are not confined to daily human life but extend to crucial sectors such as medicine, innovation, research, education, and economic growth. Automation possesses the remarkable capacity to unify the global community under a single framework. Financial sectors around the world are offering ever more services built on AI frameworks, and developed economies are actively pursuing deeper AI integration. For developing economies, in contrast, AI adoption presents a dual-edged promise: it can expand financial inclusion for unbanked populations while also risking the amplification of inherent systemic biases. In regions like Bangladesh, where 68 per cent of banks have a gap in operational AI policies, the ethical considerations surrounding AI are transitioning from initial exploratory phases towards formalised regulatory frameworks.

The global financial services landscape is being remodelled by Artificial Intelligence and embedded finance. From instant loans at checkout to invisible payments through online banking applications and mobile platforms, AI-powered embedded finance has profoundly reshaped user experiences. The integration of AI into the embedded finance model enables accelerated, predictive, and self-governing financial decision-making. As AI adoption in finance escalates, the industry is evolving towards "Embedded Finance 2.0", integrating "Agentic AI": autonomous systems that assume responsibility for real-time financial decisions in lieu of human intermediaries. However, a fundamental question emerges: will this technological advancement outpace ethics without consequences?

COMPREHENDING THE TRANSFORMATION: Embedded finance denotes the incorporation of financial functionalities, including payments, lending, and insurance, into non-financial platforms such as online marketplaces, ride-sharing services, or social media networks. Artificial Intelligence amplifies this model by facilitating instantaneous credit scoring, sophisticated fraud detection, and tailored financial advice, transforming the embedded finance model from '1.0' to '2.0'. This profound transformation creates a seamless and almost invisible financial ecosystem in which decisions are executed with extreme rapidity. However, this invisibility also introduces ethical opacity.

THE BLACK BOX DILEMMA: AI's "black box" nature, as elucidated by IBM, stems from the inaccessibility of its intricate internal mechanisms, which encompass millions of parameters and complex, non-linear computations. Comprehending the decision-making processes of artificial intelligence therefore presents a significant challenge. The underlying algorithms that approve loans or modify loan pricing within embedded finance platforms often lack explicit explanation and transparency. This deficiency erodes customer confidence and raises questions about accountability: determining whether responsibility lies with the platform, the financial institution it represents, or the algorithm itself becomes problematic.
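One common way to probe a black box without opening it is permutation-style feature importance: perturb one input at a time and measure how much the model's output moves. The sketch below uses a toy scoring function and a deterministic cyclic-shift variant of the idea; the model, its weights, and the feature names are all hypothetical.

```python
def toy_credit_model(income: float, txn_history: float, age: float) -> float:
    """Stand-in 'black box' credit score; internals assumed unknown to the auditor."""
    return 0.6 * income + 0.35 * txn_history + 0.05 * age

def feature_importance(model, rows, baseline_scores, col):
    """Replace each row's value in column `col` with the next row's value
    (a cyclic shift), then average how much the predictions move.
    A larger average change means the model leans more on that feature."""
    n = len(rows)
    total = 0.0
    for i, (row, base) in enumerate(zip(rows, baseline_scores)):
        perturbed = list(row)
        perturbed[col] = rows[(i + 1) % n][col]  # borrow a neighbour's value
        total += abs(model(*perturbed) - base)
    return total / n

# Toy applicant data: (income, transaction history score, age)
rows = [(30.0, 80.0, 25.0), (90.0, 20.0, 60.0), (55.0, 55.0, 40.0), (10.0, 95.0, 30.0)]
baseline = [toy_credit_model(*r) for r in rows]
importances = [feature_importance(toy_credit_model, rows, baseline, c) for c in range(3)]

# Income (weight 0.6) should matter far more than age (weight 0.05).
assert importances[0] > importances[2]
```

Techniques in this family (permutation importance, SHAP, LIME) are the practical core of the "Explainable AI" the article calls for: they let a regulator or customer ask which inputs actually drove a loan decision, even when the model itself stays opaque.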

BIAS IN AI SYSTEMS: Historical financial data frequently exhibits ingrained inequalities, which can cause AI-driven embedded finance to inadvertently perpetuate discrimination, especially impacting low-income or marginalised communities. This danger is particularly significant in developing regions like Bangladesh and other parts of South Asia, where financial inclusion is a key objective. Inadequately constructed AI systems, rather than facilitating greater access, risk exacerbating disparities by systematically denying vulnerable segments of the population opportunities for credit or insurance.

DATA CONFIDENTIALITY AND AUTHORISATION: Embedded finance relies heavily on extensive personal, behavioural, and transactional data. AI algorithms process this data to predict user requirements and automate decisions. However, this dependency raises serious concerns about privacy and informed consent.

Consumers generally do not know how their data are being managed on AI platforms. In many cases, consent becomes a formality rather than a meaningful choice. In the absence of data governance frameworks, data breaches or misuse of sensitive financial information can erode consumer confidence and expose institutions to legal risks.

AUTONOMY VS. MANIPULATION: In the autonomous embedded finance ecosystem, AI serves as a primary driver engineered to be proactive, even predictive. It can score customers by analysing real-time transactional data, suggest contextual credit offers, trigger payment routing, or recommend investments to users. While this promotes user convenience, it also blurs the distinction between supportive guidance and undue influence.

Behavioural prompts, if not ethically designed, can exploit cognitive biases. For instance, offering immediate credit at the point of transaction could foster spontaneous borrowing rather than prudent financial conduct. Ethically sound embedded finance requires that such nudges empower users rather than take advantage of them.

SYSTEMIC RISKS AND MARKET RESILIENCE: The extensive integration of AI in the financial sector introduces significant systemic risks. When numerous platforms rely on similar algorithms and datasets, their decisions converge, fostering "herding" tendencies in financial markets.

In the embedded finance ecosystem, where transactions flow across an interconnected landscape of platforms, such synchronised actions can amplify market volatility and lead to new forms of financial instability. The risk is not just technological; it is structural.

THE GOVERNANCE CHALLENGES: The embedded finance ecosystem involves a diverse and complex governance network of participants: fintech platforms, banks, regulators, and technology providers. This intricate network complicates accountability and regulatory oversight.

Traditional regulatory structures were not designed to accommodate such deeply integrated financial systems, leaving critical issues concerning responsibility, liability, compliance, and consumer protection unresolved. The absence of well-defined governance mechanisms creates vulnerabilities in which ethical transgressions may go unaddressed.

TOWARDS ETHICAL BEST PRACTICES: Achieving ethical standards in embedded artificial intelligence within the financial sector necessitates a comprehensive strategy. Addressing the challenges stated above requires a multi-dimensional approach, including:

- Transparency and Explainability: Decision-making processes should be transparent and comprehensible to users through the implementation of "Explainable AI" (XAI).

- Fairness, Inclusion and Auditing: AI models should be regularly audited to mitigate bias and promote fairness, in line with regulations such as the EU AI Act.

- Data Protection: Robust safeguards must be in place to protect privacy and obtain informed consent.

- Human Oversight: Human oversight must be maintained for critical decision-making processes, rather than relying solely on automation.

- AI Ethics Committees: Cross-functional boards (legal, data science, ethics) must be established within financial institutions to review AI projects for potential harms.

- Emerging Regulatory Landscape: Regulators and policymakers must evolve frameworks to keep pace with technological advancements.

Ethical AI transcends mere regulatory compliance; it is the foundation for sustainable innovation. Financial institutions that treat AI ethics as a priority will not only avert adverse regulatory actions but also foster long-lasting stakeholder relationships. In a strategic move by the Bangladeshi government, the "National AI Policy, Bangladesh: 2026-2030" is presently under review and awaits final approval, aiming to position the nation prominently within the artificial intelligence domain among South Asian nations.

The future of embedded finance, as it moves toward agentic AI, relies on maintaining public trust. This evolution promises to enhance financial accessibility, improve operational efficiency, and redefine user experiences. Nevertheless, in the absence of a robust ethical framework, it could become a mechanism for disenfranchisement, undue influence, and systemic instability. The unchecked advancement of AI in this domain risks perpetuating historical disparities as "efficient injustices," whereas proper implementation offers unprecedented opportunities for financial inclusion. Hence, the ultimate future of finance will be determined not merely by how seamlessly it integrates into our lives, but by how responsibly it contributes to our well-being.

Sanjoy Pal is a researcher, banker and Visiting Faculty of an institute affiliated with National University, Bangladesh.
 
