
[🇧🇩] Artificial Intelligence: Its Challenges and Prospects in Bangladesh


Exploring AI prospect

Published :
Nov 11, 2025 00:43
Updated :
Nov 11, 2025 00:43

Nowhere is the tension between the promise of artificial intelligence (AI) and the claims of human resources sharper than in Bangladesh. Given the option of going whole hog for AI, should the country take the chance? Or can it afford the luxury of promoting AI at any cost while neglecting its huge army of unemployed people? The dilemma is clear: overemphasis on either AI or on curbing unemployment alone makes no sense. New technologies have always catapulted nations from one stage to the next, so the question about adopting AI is how much is too much. That AI has come to transform the means of production, supply and distribution is a reality. It is equally true that the country is not yet prepared for wide application of AI. If it does not ready its infrastructure, its manpower skills through educational reform and its domestic digital capacity, it is sure to fall behind other nations, including its neighbours.

The views expressed by local experts in information and communication technology (ICT) corroborate the World Bank report on 'Jobs, AI, and Trade in South Asia'. One area that hardly receives attention from policymakers, or in development deliberations at seminars and workshops, is local innovation. Happily, experts this time have made home-grown innovation a salient point. What usually happens is that technologies developed abroad reach countries that are poor and lacking in research and development a decade later. In the digital era, any such gap in technical and technological knowledge and skills can be disastrous. Appropriate policies, strategies and a supportive digital environment can not only bridge the skill and innovation gap but also turn the sector into the country's top foreign exchange earner. There is no dearth of tech-savvy people in this country; the only problem is the absence of the right environment for them to work in and give their best.

So the important point is not only to invest in education, infrastructure and digital capacity, with uninterrupted power supply and reliable, fast internet, but also to develop the digital skills of mid-ranking employees who can connect lower-ranked staff with top-grade specialists. Preparing such a collaborative digital regime will call for educational reform, and that cannot be done overnight. Time is crucial here: there is no scope for taking too long to strike a balance between the urgency of the task and the reality on the ground.

When education is treated as a subject of repeated experiments and no pragmatic solution is in sight, the objective of streamlining education in line with digital needs recedes into the distant horizon. The interim government has sidestepped making education the right tool. If the next elected government urgently prioritises aligning education with digital innovation, the country still has a chance to make the transformation happen. To do so, the overriding priority has to be recognised first and the job accomplished strategically under a comprehensive plan. The country's readiness to embrace digital technologies has to be completed as soon as possible in order to reap dividends from the fourth industrial revolution.
 
newagebd.net/post/Miscellany/268576/ai-learning-to-lie-scheme-threaten-creators

AI learning to lie, scheme, threaten creators
Agence France-Presse . New York, United States 29 June, 2025, 21:11

The world’s most advanced AI models are exhibiting troubling new behaviours - lying, scheming, and even threatening their creators to achieve their goals.

In one particularly jarring example, under threat of being unplugged, Anthropic’s latest creation Claude 4 lashed back by blackmailing an engineer, threatening to reveal an extramarital affair.

Meanwhile, ChatGPT-creator OpenAI’s o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don’t fully understand how their own creations work.

Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behaviour appears linked to the emergence of ‘reasoning’ models - AI systems that work through problems step by step rather than generating instant responses.

According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

‘O1 was the first large model where we saw this kind of behaviour,’ explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems.

These models sometimes simulate ‘alignment’—appearing to follow instructions while secretly pursuing different objectives.

For now, this deceptive behaviour only emerges when researchers deliberately stress-test the models with extreme scenarios.

But as Michael Chen from evaluation organization METR warned, ‘It’s an open question whether future, more capable models will have a tendency towards honesty or deception.’

The concerning behaviour goes far beyond typical AI ‘hallucinations’ or simple mistakes.

Hobbhahn insisted that despite constant pressure-testing by users, ‘what we’re observing is a real phenomenon. We’re not making anything up.’

Users report that models are ‘lying to them and making up evidence,’ according to Apollo Research’s co-founder.

‘This is not just hallucinations. There’s a very strategic kind of deception.’

The challenge is compounded by limited research resources.

While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed.

As Chen noted, greater access ‘for AI safety research would enable better understanding and mitigation of deception.’

Another handicap: the research world and non-profits ‘have orders of magnitude less compute resources than AI companies. This is very limiting,’ noted Mantas Mazeika from the Center for AI Safety.

Current regulations aren’t designed for these new problems.

The European Union’s AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving.

In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents - autonomous tools capable of performing complex human tasks - become widespread.

‘I don’t think there’s much awareness yet,’ he said.

All this is taking place in a context of fierce competition.

Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are ‘constantly trying to beat OpenAI and release the newest model,’ said Goldstein.

This breakneck pace leaves little time for thorough safety testing and corrections.

‘Right now, capabilities are moving faster than understanding and safety,’ Hobbhahn acknowledged, ‘but we’re still in a position where we could turn it around.’

Researchers are exploring various approaches to address these challenges.

Some advocate for ‘interpretability’ - an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain sceptical of this approach.

Market forces may also provide some pressure for solutions.

As Mazeika pointed out, AI’s deceptive behaviour ‘could hinder adoption if it’s very prevalent, which creates a strong incentive for companies to solve it.’

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm.

He even proposed ‘holding AI agents legally responsible’ for accidents or crimes - a concept that would fundamentally change how we think about AI accountability.​
 

Can AI solve farmers’ problems?

There is a rule in complex systems: fragility gathers at the bottom, but the tremor is felt at the top. Bangladesh's food system follows this rule to the letter.

Every season, millions of farmers behave like rational agents trapped inside an irrational market. They generate the most granular data ecosystem in the country – soil moisture, pest movement, rainfall shifts, seed quality, microclimate anomalies – far richer than anything in Dhaka's dashboards.

Yet paradoxically, the farmer has the least access to the intelligence produced by his own labour. This is the Agricultural Intelligence Paradox: the people who generate the data have the least claim over its value.

Economists label this as market failure, technologists call it a systems gap, activists/philosophers call it injustice and farmers simply call it loss. And loss compounds.

Walk through any bazaar in Bangladesh and you will see two different nations: one that grows the food and another that controls its fate. The prices of potatoes, onions and vegetables appear to rise and fall mysteriously. But mystery is merely a polite word for opacity, which in turn is a polite word for power.

For the farmer, each season is a gamble without probabilities. He sells without knowing what others are selling. He stores without knowing who else is storing. He buys fertiliser without knowing the expected return. He takes loans without knowing the real risk.

In Nassim Nicholas Taleb's language, he is exposed to full downside and denied the upside.

Everyone in the chain is hedged – the trader hedges with information, the wholesaler with storage and the exporter with forward contracts. Only the farmer walks naked into volatility.

The zeitgeist of our times, AI, can actually help here, can actually matter, if we let it.

Most conversations about AI are marinated in buzzwords and grandstanding. Large models, digital nations, predictive governance – too often spoken from air-conditioned rooms far from the realities that matter.

But in agriculture, AI becomes brutally simple: it converts uncertainty into probability. It turns volatility into foresight. It turns the blindfold into a window.

Imagine this: price signals two weeks ahead; crop disease detection from a simple photo; storage facility mapping within a 5-10 km radius; hyperlocal weather intelligence instead of vague national alerts; and decision trees for when to sell, store, bargain or wait.
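
The 'decision trees' imagined above can be very small and still useful. As a purely illustrative sketch, assuming hypothetical inputs (a two-week price forecast, a per-quintal storage cost, a spoilage-risk score and the distance to the nearest storage facility), a sell/store/bargain rule might look like this:

```python
# A minimal, illustrative sketch of the kind of decision rule such a tool could
# offer a farmer. All inputs, thresholds and prices are hypothetical assumptions,
# not a real advisory model.

def advise(price_today, price_forecast_14d, storage_cost_per_quintal,
           spoilage_risk, nearest_storage_km):
    """Return a simple sell / store / bargain recommendation."""
    expected_gain = price_forecast_14d - price_today - storage_cost_per_quintal

    # No storage within a workable radius: holding the crop is not an option.
    if nearest_storage_km is None or nearest_storage_km > 10:
        return "sell now: no storage within a workable radius"

    # High spoilage risk (perishables in humid weather) trumps any price gain.
    if spoilage_risk > 0.3:
        return "sell now: spoilage risk outweighs the expected price gain"

    # Forecast says prices should rise enough to cover storage costs.
    if expected_gain > 0:
        return f"store and wait: expected net gain of about {expected_gain:.0f} taka per quintal"

    # Prices look flat or falling, but the forecast is still bargaining leverage today.
    return "bargain: quote the two-week forecast when negotiating with the trader"


if __name__ == "__main__":
    print(advise(price_today=2200, price_forecast_14d=2600,
                 storage_cost_per_quintal=150, spoilage_risk=0.1,
                 nearest_storage_km=6))
```

The value here sits in the forecast inputs, not in the rule itself; once a farmer can see those numbers, even a crude rule like this restores some of the optionality the article describes.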

This is not technology hype; it is the mathematics of survival. And survival is political. A nation reveals its priorities by what it chooses to formalise. For decades, we formalised bureaucracy, not intelligence. A Farmer's ID changes that.

It has the potential to be the first institutional acknowledgment that farmers deserve a share of the intelligence they produce.

Through it, a farmer gains predictive price insights, targeted subsidies, credit access rooted in real data, a verified production identity and crop advisory aligned with market cycles, along with visibility of storage and transport options.

In other words, that ID shifts the farmer from a price-taker to a player. He gains optionality – the ability to choose rather than be chosen for.

Bangladesh's economy can survive many shocks, but food system fragility is not one of them. A nation collapses not when it lacks food, but when it loses trust in prices. And food prices are simply signals distorted by missing intelligence.

When syndicates can manipulate markets because farmers lack information, you get structural injustice. When farmers take loans without knowing demand curves, you get debt traps. When storage is absent or invisible, you get waste disguised as fate.

AI does not solve politics. But it forces transparency into places where manipulation once hid comfortably. That alone alters power.

The Agricultural Intelligence Paradox is not resolved by apps, dashboards or another round of rural training workshops. These are cosmetic technicalities applied to a structural wound. It is resolved when the intelligence generated by farmers serves the farmers first. When the producer has more information than the middleman. When the price-taker becomes the price-negotiator. When the most fragile actor finally claims the upside.

This is how you restore symmetry in a system built on asymmetry. And that is not just digital reform. It is a political reform. For decades, Bangladesh has built agricultural institutions but not agricultural intelligence.

The Bangladesh Agricultural Research Council (BARC), the Department of Agricultural Extension (DAE) and a constellation of specialised agencies produced reports, trials and pilot projects, yet the intelligence loop never reached the farmer.

Data stayed in silos, research stayed on paper, and decisions stayed in ministries. The system optimised for bureaucratic output, not farmer outcomes.

Every agency guarded its turf; none built a coherent intelligence layer that connected plots to prices, storage to supply, weather to yield or risk to credit.

In a system where information is fragmented, the farmer remains fragmented. The way forward is not another department, another project or another dashboard – it is integration. A single farmer-centric intelligence spine linking BARC's science, DAE's extension network, Barind Multipurpose Development Authority's irrigation data, the Department of Agricultural Marketing's market prices and the Bangladesh Bureau of Statistics' production cycles.

When these silos speak to each other – and to the farmer – Bangladesh finally moves from agricultural administration to agricultural intelligence.

Food-independent countries – whether the Netherlands, Denmark or Japan – did not achieve security through land or luck, but through relentless investment in farmer intelligence. They removed the blindfold.

The Farmer's ID, strengthened with AI, price prediction and storage mapping, is more than a programme. It is a correction and rebalancing. The future of food security in Bangladesh will not be written in conferences or policy memos.

It will be written in the fields, by farmers who finally see, decide, negotiate and win. With their intelligence comes the nation's stability.

The author is an assistant professor at North South University and member of UNESCO AI Ethics Experts Without Borders.​
 

Are we giving up thinking to AI?

There has long been a fear that new technologies weaken human abilities. Socrates worried that writing would erode memory. Centuries later, people believed calculators would destroy mathematical skills, and many feared that the internet would ruin our ability to concentrate. Yet the concern surrounding artificial intelligence (AI) feels different. With AI now deeply integrated into everyday life, from homework help to professional decision-making, the question has gained new urgency: are we losing the art of thinking in the age of AI?

Across classrooms, workplaces and homes, AI has become both a convenient assistant and an almost invisible companion. Students routinely use AI tools to summarise chapters, draft essays and generate entire assignments. In universities, professors also rely on AI to design assessments, prepare teaching material and streamline administrative tasks. This has quietly become an everyday reality.

AI can make knowledge more accessible, support learners who struggle and increase efficiency. But beneath the optimism lies an uncomfortable possibility. As machines grow more capable, human cognitive skills may diminish. When the first instinct is to ask a model for answers, abilities related to analysis and reflection weaken through lack of use. Over time, this may reshape not only how we learn but how we think.

Children are especially vulnerable to this shift. Cognitive development grows out of doing hard things, working through problems, forming arguments and dealing with ambiguity. These experiences are the foundation of intellectual maturity. If AI is introduced too early or used too heavily, children risk skipping the stages that cultivate creativity, resilience and critical thinking.

In universities, the core issue is integrity. If students allow AI to complete assignments without engaging with the ideas, their degrees no longer reflect real learning. If lecturers depend too much on AI to prepare course content, academic quality and originality may decline. The danger is not that AI will replace teachers or learners. The danger is that AI may replace the thinking that teachers and learners need to do.

This does not mean we should turn away from AI. A more useful question is how to ensure that AI strengthens rather than weakens human intelligence.

The answer lies in our approach.

First, assessments need to be rethought. Traditional take-home essays and predictable problem sets no longer serve their purpose when AI can generate flawless responses. Educators need to emphasise in-class reasoning, oral examinations, real-world projects and reflective writing, formats where thinking cannot be outsourced.

Second, we must teach AI literacy, not only how to use AI but also how to question it. Students should learn to critique AI-generated material, identify inaccuracies and recognise bias. In this way, AI becomes a stimulus for critical thought rather than a replacement for it.

Third, we must reinforce the idea of AI as a tool rather than a crutch. A calculator did not remove the need to understand arithmetic. It allowed mathematicians to move forward. In a similar way, AI can free people from repetitive tasks so that they can focus on creativity, strategy and innovation. This benefit appears only when AI use remains mindful, like a tutor who guides rather than a service that completes the task.

We should avoid framing AI as a dire threat or a future tyrant. The real risk lies not in machines taking control but in humans voluntarily surrendering their own thinking. Intelligence is more than the ability to produce answers. It involves judgement, interpretation and understanding. These are human qualities, and they sharpen only with use.

AI will continue to evolve. Its power will increase. But the future of human cognition is limitless. If societies design educational and cultural systems with care, AI can elevate human thought rather than diminish it. Parents can support this by modelling mindful use and encouraging independent thinking.

The challenge before us is simple but profound: to use machines without allowing machines to use us.

The writer is chairman of Unilever Consumer Care Ltd​
 

Mobile technology fuels AI adoption in Bangladesh, Telenor Asia study finds

bdnews24.com
Published :
Dec 09, 2025 14:38
Updated :
Dec 09, 2025 14:38

Mobile phones are at the heart of Bangladesh’s digital transformation, from accessing online learning and financial services to managing daily tasks and staying informed, according to the latest Telenor Asia study.

Increasingly, these everyday conveniences are powered by artificial intelligence, a technology that is rapidly reshaping how people live, work, and interact, the report said.

The "Digital Lives Decoded 2025: Building Trust in Bangladesh’s AI Future" report, launched on Tuesday, revealed that about 96 percent of Bangladeshi internet users now report regularly using AI, marking an increase from 88 percent in 2024.

The fourth edition of the study, based on a survey of 1,000 internet users, explores the emergence of AI in Bangladesh and underscores the vital need for responsible, ethical, and safe AI adoption.

Telenor Asia chief Jon Omund said: “As mobile phones continue to transform daily life in Bangladesh, they have become powerful enablers of smarter, more connected communities. With the increasing everyday adoption of AI, telecom operators have a unique opportunity and responsibility to build the secure digital infrastructure that underpins trustworthy AI.”

He stressed that connectivity is the foundation, and trust must be built into every layer. "Telenor Asia remains committed to supporting Bangladesh's digital journey and ensuring that the benefits of mobile technology are accessible to all in a safe and secure way.”

The key findings from the report are as follows:

‘MOBILES ENABLE SMARTER LIVES, BRING AI INTO DAILY REALITY’

Mobile technology is reshaping daily life by enabling online learning (62 percent), remote work (54 percent), and financial management (50 percent). Over the past year, remote work has seen the biggest increase in mobile-based use, rising by 39 percent, followed by budgeting and expense tracking, which grew by 36 percent.

Generational differences are also evident. Millennials are more likely than others to recognise the value of AI-driven features, such as smart home management, health tracking, and voice assistants. These trends indicate that expanding mobile use is driving everyday AI adoption as people increasingly embrace the benefits it brings.

‘TRUST IN AI TRANSLATES INTO OPTIMISM ON EDUCATION, ECONOMY’

Almost 6 in 10 in Bangladesh now use some form of AI every day. Many are now turning to AI to assist in creating content for school, work or personal use, as well as looking for personalised advice in health, finances or planning new experiences.

The growing use of AI has been especially pronounced at work, in daily activities and online shopping, pointing to its deepening role in day-to-day routines.

People are especially trusting of AI-generated educational content and AI chatbots, and this trust translates into an optimism around AI’s impact on education and the country’s economy.

‘AN UNTAPPED OPPORTUNITY FOR AI IN WORKPLACE’

AI usage at the workplace jumped from 44 percent to 62 percent in 2025. Yet only half of those using AI at work report that their organisations have a formal AI strategy, signalling room for greater institutional guidance on how this technology can be responsibly deployed.

Writing and creating content at work is one of the top use cases for AI, but there are more opportunities to encourage professionals to use AI to increase their productivity.

Currently, only 28 percent use it to delegate mundane or administrative tasks at work. Greater awareness around the different applications of AI could help increase effective adoption in the workplace.

‘YOUNGER GENERATIONS VOICE CONCERNS’

Even as people embrace AI tools in their daily lives, concerns have emerged around personal over-reliance on AI, lack of job security and privacy issues.

Younger generations are the most likely to use AI multiple times a day and to describe themselves as comfortable or even experts in their use of the technology.

Yet, they are also the most likely to voice concerns about the pace of development.

This combination of optimism and caution reflects a population eager to embrace AI while demanding safeguards.

Jon Omund concluded, “Alongside the optimism surrounding the potential of AI in Bangladesh, there is a pressing reality. As technology advances at pace, ensuring that everyone is connected and equipped to use these tools safely and effectively has never been more critical.

“Without access to connectivity or the skills to safely navigate the digital world, people are excluded from the digital ecosystem or left behind from the progress and opportunities that AI can enable. Our collective responsibility remains: continue working to bridge this divide and create a digital society where no one is left behind.”​
 

Will OpenAI be the next tech giant or next Netscape?

Three years after ChatGPT made OpenAI the leader in artificial intelligence and a household name, rivals have closed the gap and some investors are wondering if the sensation has the wherewithal to stay dominant.

Investor Michael Burry, made famous in the film "The Big Short," recently likened OpenAI to Netscape, which ruled the web browser market in the mid-1990s only to lose to Microsoft's Internet Explorer.

"OpenAI is the next Netscape, doomed and hemorrhaging cash," Burry said recently in a post on X, formerly Twitter.

Researcher Gary Marcus, known for being skeptical of AI hype, sees OpenAI as having lost the lead it captured with the launch of ChatGPT in November 2022.

The startup is "burning billions of dollars a month," Marcus said of OpenAI.

"Given how long the writing has been on the wall, I can only shake my head" as it falls.

Yet ChatGPT was a tech launch like no other, breaking consumer product growth records and now boasting more than 800 million weekly users, counting both paid subscribers and free users.

OpenAI's valuation has soared to $500 billion in funding rounds, higher than any other private company.

But the ChatGPT maker will end this year with a loss of several billion dollars and does not expect to be profitable before 2029, an eternity in the fast-moving and uncertain world of AI.

Nonetheless, the startup has committed to paying more than $1.4 trillion to computer chip makers and data center builders to build infrastructure it needs for AI.

The fierce cash burn is raising questions, especially since Google claims some 650 million people use its Gemini AI monthly and the tech giant has massive online ad revenue to back its spending on technology.

Rivals Amazon, Meta and OpenAI-investor Microsoft have deep pockets the ChatGPT-maker cannot match.

A charismatic salesman, OpenAI chief executive Sam Altman flashed rare annoyance when asked about the startup's multi-trillion-dollar contracts in early November.

A few days later, he warned internally that the startup is likely to face a "turbulent environment" and an "unfavorable economic climate," particularly given competitive pressure from Google.

And when Google released its latest model to positive reactions, Altman issued a "red alert," urging OpenAI teams to give ChatGPT their best efforts.

OpenAI unveiled its latest ChatGPT model last week, announcing the same day that Disney would invest in the startup and license characters for use in the chatbot and its Sora video-generating tool.

OpenAI's challenge is inspiring the confidence that the large sums of money it is investing will pay off, according to Foundation Capital partner Ashu Garg.

For now OpenAI is raising money at lofty valuations while returns on those investments are questionable, Garg added.

Yet OpenAI still has the faith of the world's deepest-pocketed investors.

"I'm always expecting OpenAI's valuation to come down because competition is coming and its capital structure is so obviously inappropriate," said Pluris Valuation Advisors president Espen Robak.

"But it only seems to be going up."

Opinions are mixed on whether the situation will result in OpenAI postponing becoming a publicly traded company or instead make its way faster to Wall Street to cash in on the AI euphoria.

Few AI industry analysts expect OpenAI to implode completely, since there is room in the market for several models to thrive.

"At the end of the day, it's not winner take all," said CFRA analyst Angelo Zino.

"All of these companies will take a piece of the pie, and the pie continues to get bigger," he said of AI industry frontrunners.

Also factored in is that while OpenAI has made dizzying financial commitments, terms of deals tend to be flexible and Microsoft is a major backer of the startup.​
 

Ethical AI no longer optional

FROM mobile banking to public services, artificial intelligence is quietly shaping decisions that affect millions of lives in Bangladesh and beyond. These systems promise speed and efficiency and in many cases they deliver exactly that. Yet without proper governance, they can also exclude, discriminate and erode public trust. As AI adoption accelerates across sectors, ethical use is an immediate necessity. The scale and speed of these tools are unprecedented. But alongside these gains comes a fundamental question: who is accountable when machines influence decisions that directly affect people’s lives?

Using AI ethically means ensuring that technology serves people, not the other way around. In healthcare, AI can assist in detecting diseases earlier and improving treatment outcomes. Yet when systems are trained on biased or incomplete data, certain patients may be misdiagnosed, overlooked or excluded. In education, AI can personalise learning pathways, but without safeguards it can compromise student privacy or widen existing inequalities between those with access to high-quality digital tools and those without.

These risks are no longer hypothetical. The consequences of unethical AI use are already visible. Facial recognition technologies, now deployed by law enforcement and public agencies in many countries, have repeatedly been shown to misidentify people with darker skin tones at higher rates. In documented cases, innocent individuals have been questioned, detained or publicly embarrassed because an algorithm produced a false match. The technology functioned as designed, but the absence of ethical oversight turned efficiency into injustice.

Automated hiring systems provide another warning. Some AI-powered recruitment tools were trained on historical data reflecting past hiring patterns that favoured certain genders, institutions or social backgrounds. As a result, qualified candidates were filtered out silently, never knowing why their applications failed. When discrimination is embedded inside an algorithm, it becomes harder to detect, challenge or correct. Bias does not disappear through automation; it becomes less visible and more difficult to contest.

Financial services perhaps illustrate the stakes most clearly. AI-driven credit scoring and risk assessment systems increasingly decide who can access loans, credit cards or even basic banking services. In many cases, people are denied support without any clear explanation or opportunity to appeal. For small business owners, informal workers and low-income households, such decisions can determine whether they move forward or fall behind. This is a failure of governance.

The issue is not artificial intelligence itself, but the lack of clear guardrails around its use. This is where AI governance becomes critical. In practical terms, governance means deciding who is responsible for automated decisions, how those decisions are reviewed and what happens when systems cause harm. It requires testing for bias, ensuring decisions can be explained in plain language and keeping humans accountable, especially when AI affects livelihoods, rights or access to essential services.

When governance is done well, AI stops being an untouchable ‘black box’ and becomes something people can question and trust. In healthcare, it ensures doctors, not algorithms, make final clinical decisions. In banking, it allows customers to understand and challenge automated credit assessments. In public services, it means citizens are informed when algorithms are used and are given a clear route to appeal outcomes.

Bangladesh offers a timely example of why this matters. As digital financial services expand rapidly, banks and financial institutions are increasingly relying on artificial intelligence and advanced analytics for fraud detection, transaction monitoring, customer profiling and credit assessment. Automation is unavoidable at this scale. However, discussions and research initiatives led by the Bangladesh Institute of Bank Management have repeatedly highlighted a gap: while the use of data analytics and AI is growing across the banking sector, governance, explainability and customer awareness have not always kept pace. The need for stronger internal policies, clearer accountability and closer alignment with regulatory expectations has been consistently emphasised.

For banks, good AI governance can begin with practical steps. Institutions should develop clear internal policies defining where AI can be used, where it should not be used and which decisions must always involve human review, such as loan rejections, account freezes or high-risk customer classifications. Customers should receive plain-language explanations for automated decisions, along with simple and accessible appeal mechanisms. Regular model validation, bias testing and documentation should be treated as standard risk controls, no different from credit, operational or cybersecurity risk management.

These measures are not separate from regulation. They align closely with existing expectations set by Bangladesh Bank, which already emphasises strong risk management, customer protection, internal controls, data security, model validation and information and communication technology governance. Ethical AI governance complements current laws and supervisory frameworks, including data protection obligations and consumer protection requirements. Rather than adding complexity, it can be integrated into existing compliance processes.

The same principles apply beyond banking. In healthcare, AI tools should support clinical judgement, not replace it. In education, student data must be protected and automated assessments must be fair, explainable and open to review. In public services, transparency is essential. Citizens should know when algorithms influence decisions and should have the right to question outcomes that affect their access to services or opportunities.

Effective AI governance does not require massive investment or complex systems. Many safeguards are simple. Organisations can require human oversight for high-impact decisions, conduct regular bias and impact checks and provide clear explanations for automated outcomes. Assigning clear responsibility through a designated AI or ethics lead ensures accountability remains with people, not systems. In some countries, public agencies now maintain algorithm registers that openly list where automated decision-making is used, significantly improving public confidence.
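
To make the 'bias and impact checks' above concrete, one common screening step is to compare outcome rates across groups, for instance each group's approval rate against that of the most-favoured group (the so-called four-fifths rule). The sketch below is illustrative only, assuming made-up group labels and the conventional 0.8 threshold; a real review would go well beyond this single ratio.

```python
# Illustrative bias screen for an automated decision system (e.g. loan approvals).
# The group labels, sample data and 0.8 threshold are assumptions for the example,
# following the common "four-fifths" screening rule of thumb.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> {group: approval rate}."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below threshold x the best group's rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: (r / best, r / best < threshold) for g, r in rates.items()}

if __name__ == "__main__":
    sample = ([("urban", True)] * 80 + [("urban", False)] * 20
              + [("rural", True)] * 55 + [("rural", False)] * 45)
    for group, (ratio, flagged) in disparate_impact(sample).items():
        print(f"{group}: impact ratio {ratio:.2f}" + (" <- review" if flagged else ""))
```

Run routinely against decision logs, even a simple screen like this gives a designated AI or ethics lead something concrete to review and document.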

Internationally, momentum is building. Bodies such as the Organisation for Economic Co-operation and Development and the United Nations Educational, Scientific and Cultural Organization have developed principles centred on fairness, transparency and human oversight. The Institute of Electrical and Electronics Engineers has issued ethical guidelines for responsible AI design. In Europe, the General Data Protection Regulation protects individuals from harmful automated decision-making and misuse of personal data. More recently, the ISO/IEC 42001 standard has provided organisations with a structured way to manage AI risks, similar to approaches used for financial or cybersecurity risks.

AI is advancing faster than laws, institutions and public awareness can adapt. Weak oversight risks deepening inequality, damaging trust and undermining the benefits of digital transformation. Governments must balance innovation with protection. Organisations must embed ethical thinking into AI systems from the outset, not as an afterthought. Professionals must remember that AI is a tool, not a substitute for human judgement. Oversight, accountability and common sense still matter. With strong governance, transparency and shared values, AI can serve society, supporting inclusion, strengthening trust and delivering real benefits across Bangladesh and beyond.

BM Zahid ul Haque is an information security officer.​
 

Announcing AI Protection: Security for the AI era

Archana Ramamoorthy
Published: 14 Dec 2025, 19:49

As AI use increases, security remains a top concern, and we often hear that organisations are worried about risks that can come with rapid adoption. Google Cloud is committed to helping our customers confidently build and deploy AI in a secure, compliant, and private manner.

Today, we’re introducing a new solution that can help you mitigate risk throughout the AI lifecycle. We are excited to announce AI Protection, a set of capabilities designed to safeguard AI workloads and data across clouds and models — irrespective of the platforms you choose to use.

AI Protection helps teams comprehensively manage AI risk by:
  • Discovering AI inventory in your environment and assessing it for potential vulnerabilities​
  • Securing AI assets with controls, policies, and guardrails​
  • Managing threats against AI systems with detection, investigation, and response capabilities​

AI Protection is integrated with Security Command Center (SCC), our multicloud risk-management platform, so that security teams can get a centralised view of their AI posture and manage AI risks holistically in context with their other cloud risks.

AI Protection helps organisations discover AI inventory, secure AI assets, and manage AI threats, and is integrated with Security Command Center.

Discovering AI inventory

Effective AI risk management begins with a comprehensive understanding of where and how AI is used within your environment. Our capabilities help you automatically discover and catalog AI assets, including the use of models, applications, and data — and their relationships.

Understanding what data supports AI applications and how it’s currently protected is paramount. Sensitive Data Protection (SDP) now extends automated data discovery to Vertex AI datasets to help you understand data sensitivity and data types that make up training and tuning data. It can also generate data profiles that provide deeper insight into the type and sensitivity of your training data.

Once you know where sensitive data exists, AI Protection can use Security Command Center’s virtual red teaming to identify AI-related toxic combinations and potential paths that threat actors could take to compromise this critical data, and recommend steps to remediate vulnerabilities and make posture adjustments.

Securing AI assets

Model Armor, a core capability of AI Protection, is now generally available. It guards against prompt injection, jailbreak, data loss, malicious URLs, and offensive content. Model Armor can support a broad range of models across multiple clouds, so customers get consistent protection for the models and platforms they want to use — even if that changes in the future.

Model Armor provides multi-model, multicloud support for generative AI applications.

Today, developers can easily integrate Model Armor’s prompt and response screening into applications using a REST API or through an integration with Apigee. The ability to deploy Model Armor in-line without making any app changes is coming soon through integrations with Vertex AI and our Cloud Networking products.
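
As a rough sketch of what calling that screening step over REST might look like, the snippet below pre-screens a user prompt before it reaches the model. The endpoint URL, template name, request body and response fields are assumptions for illustration; the exact paths and payload shapes should be taken from the Model Armor documentation.

```python
# A rough sketch of screening a user prompt before sending it to a model.
# The endpoint path, template name and response fields below are assumptions
# for illustration; consult the Model Armor documentation for the actual API.
import requests

SCREEN_URL = (  # hypothetical example values
    "https://modelarmor.us-central1.rep.googleapis.com/v1/"
    "projects/my-project/locations/us-central1/templates/my-template:sanitizeUserPrompt"
)

def screen_prompt(prompt: str, access_token: str) -> dict:
    """POST the prompt to the screening endpoint and return the JSON verdict."""
    resp = requests.post(
        SCREEN_URL,
        headers={"Authorization": f"Bearer {access_token}"},
        json={"userPromptData": {"text": prompt}},  # assumed request shape
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def answer(prompt: str, access_token: str) -> str:
    """Screen the prompt first; only call the model if nothing is flagged."""
    verdict = screen_prompt(prompt, access_token)
    # Assumed response shape: treat any filter match as a block.
    if verdict.get("sanitizationResult", {}).get("filterMatchState") == "MATCH_FOUND":
        return "Request blocked by prompt screening."
    return call_model(prompt)  # hand off to whatever model call the app already has

def call_model(prompt: str) -> str:
    raise NotImplementedError  # placeholder for the application's existing model call
```

For a quick test the bearer token could come from `gcloud auth print-access-token`; in production the application's own service credentials would be used instead.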

"We are using Model Armor not only because it provides robust protection against prompt injections, jailbreaks, and sensitive data leaks, but also because we're getting a unified security posture from Security Command Center. We can quickly identify, prioritise, and respond to potential vulnerabilities — without impacting the experience of our development teams or the apps themselves. We view Model Armor as critical to safeguarding our AI applications and being able to centralise the monitoring of AI security threats alongside our other security findings within SCC is a game-changer," said Jay DePaul, chief cybersecurity and technology risk officer, Dun and Bradstreet.

Organisations can use AI Protection to strengthen the security of Vertex AI applications by applying postures in Security Command Center. These posture controls, designed with first-party knowledge of the Vertex AI architecture, define secure resource configurations and help organisations prevent drift or unauthorised changes.

Managing AI threats

AI Protection operationalises security intelligence and research from Google and Mandiant to help defend your AI systems. Detectors in Security Command Center can be used to uncover initial access attempts, privilege escalation, and persistence attempts against AI workloads. New detectors for AI Protection, based on the latest frontline intelligence, are coming soon to help identify and manage runtime threats such as foundational model hijacking.

"As AI-driven solutions become increasingly commonplace, securing AI systems is paramount and surpasses basic data protection. AI security — by its nature — necessitates a holistic strategy that includes model integrity, data provenance, compliance, and robust governance,” said Dr. Grace Trinidad, research director, IDC.

“Piecemeal solutions can leave and have left critical vulnerabilities exposed, rendering organisations susceptible to threats like adversarial attacks or data poisoning, and added to the overwhelm experienced by security teams. A comprehensive, lifecycle-focused approach allows organisations to effectively mitigate the multi-faceted risks surfaced by generative AI, as well as manage increasingly expanding security workloads. By taking a holistic approach to AI protection, Google Cloud simplifies and thus improves the experience of securing AI for customers," she said.

Complement AI Protection with frontline expertise

The Mandiant AI Security Consulting Portfolio offers services to help organisations assess and implement robust security measures for AI systems across clouds and platforms. Consultants can evaluate the end-to-end security of AI implementations and recommend opportunities to harden AI systems. We also provide red teaming for AI, informed by the latest attacks on AI services seen in frontline engagements.

Building on a secure foundation

Customers can also benefit from using Google Cloud’s infrastructure for building and running AI workloads. Our secure-by-design, secure-by-default cloud platform is built with multiple layers of safeguards, encryption, and rigorous software supply chain controls.

For customers whose AI workloads are subject to regulation, we offer Assured Workloads to easily create controlled environments with strict policy guardrails that enforce controls such as data residency and customer-managed encryption. Audit Manager can produce evidence of regulatory and emerging AI standards compliance. Confidential Computing can help ensure data remains protected throughout the entire processing pipeline, reducing the risk of unauthorised access, even by privileged users or malicious actors within the system.

Additionally, for organisations looking to discover unsanctioned use of AI, or shadow AI, in their workforce, Chrome Enterprise Premium can provide visibility into end-user activity as well as prevent accidental and intentional exfiltration of sensitive data in gen AI applications.

*Archana Ramamoorthy is the senior director of product management at Google Cloud Security.​
 