โ˜• Support Us โ˜•
[๐Ÿ‡ง๐Ÿ‡ฉ] - Artificial Intelligence-----It's challenges and Prospects in Bangladesh | Page 7 | PKDefense

[๐Ÿ‡ง๐Ÿ‡ฉ] Artificial Intelligence-----It's challenges and Prospects in Bangladesh

Mobile technology fuels AI adoption in Bangladesh, Telenor Asia study finds

bdnews24.com
Published: Dec 09, 2025 14:38
Updated: Dec 09, 2025 14:38



Mobile phones are at the heart of Bangladeshโ€™s digital transformation, from accessing online learning and financial services to managing daily tasks and staying informed, according to the latest Telenor Asia study.

Increasingly, these everyday conveniences are powered by artificial intelligence, a technology that is rapidly reshaping how people live, work, and interact, the report said.

The "Digital Lives Decoded 2025: Building Trust in Bangladeshโ€™s AI Future" report, launched on Tuesday, revealed that about 96 percent of Bangladeshi internet users now report regularly using AI, marking an increase from 88 percent in 2024.

The fourth edition of the study, based on a survey of 1,000 internet users, explores the emergence of AI in Bangladesh and underscores the vital need for responsible, ethical, and safe AI adoption.

Telenor Asia chief Jon Omund said: โ€œAs mobile phones continue to transform daily life in Bangladesh, they have become powerful enablers of smarter, more connected communities. With the increasing everyday adoption of AI, telecom operators have a unique opportunity and responsibility to build the secure digital infrastructure that underpins trustworthy AI.โ€

He stressed that connectivity is the foundation, and trust must be built into every layer. "Telenor Asia remains committed to supporting Bangladesh's digital journey and ensuring that the benefits of mobile technology are accessible to all in a safe and secure way.โ€

The key findings from the report are as follows:

โ€˜MOBILES ENABLE SMARTER LIVES, BRING AI INTO DAILY REALITYโ€™

Mobile technology is reshaping daily life by enabling online learning (62 percent), remote work (54 percent), and financial management (50 percent). Over the past year, remote work has seen the biggest increase in mobile-based use, rising by 39 percent, followed by budgeting and expense tracking, which grew by 36 percent.

Generational differences are also evident. Millennials are more likely than others to recognise the value of AI-driven features, such as smart home management, health tracking, and voice assistants. These trends indicate that expanding mobile use is driving everyday AI adoption as people increasingly embrace the benefits it brings.

โ€˜TRUST IN AI TRANSLATES INTO OPTIMISM ON EDUCATION, ECONOMYโ€™

Almost six in 10 people in Bangladesh now use some form of AI every day. Many are now turning to AI to assist in creating content for school, work or personal use, as well as looking for personalised advice on health, finances or planning new experiences.

The growing use of AI has been especially pronounced at work, in daily activities and online shopping, pointing to its deepening role in day-to-day routines.

People are especially trusting of AI-generated educational content and AI chatbots, and this trust translates into an optimism around AIโ€™s impact on education and the countryโ€™s economy.

โ€˜AN UNTAPPED OPPORTUNITY FOR AI IN WORKPLACEโ€™

AI usage at the workplace jumped from 44 percent to 62 percent in 2025. Yet only half of those using AI at work report that their organisations have a formal AI strategy, signalling room for greater institutional guidance on how this technology can be responsibly deployed.

Writing and creating content at work is one of the top use cases for AI, but there are more opportunities to encourage professionals to use AI to increase their productivity.

Currently, only 28 percent use it to delegate mundane or administrative tasks at work. Greater awareness around the different applications of AI could help increase effective adoption in the workplace.

โ€˜YOUNGER GENERATIONS VOICE CONCERNSโ€™

Even as people embrace AI tools in their daily lives, concerns have emerged around personal over-reliance on AI, lack of job security and privacy issues.

Younger generations are the most likely to use AI multiple times a day and to describe themselves as comfortable or even experts in their use of the technology.

Yet, they are also the most likely to voice concerns about the pace of development.

This combination of optimism and caution reflects a population eager to embrace AI while demanding safeguards.

Jon Omund concluded, โ€œAlongside the optimism surrounding the potential of AI in Bangladesh, there is a pressing reality. As technology advances at pace, ensuring that everyone is connected and equipped to use these tools safely and effectively has never been more critical.

โ€œWithout access to connectivity or the skills to safely navigate the digital world, people are excluded from the digital ecosystem or left behind from the progress and opportunities that AI can enable. Our collective responsibility remains: continue working to bridge this divide and create a digital society where no one is left behind.โ€​
 

Will OpenAI be the next tech giant or next Netscape?

Three years after ChatGPT made OpenAI the leader in artificial intelligence and a household name, rivals have closed the gap and some investors are wondering if the sensation has the wherewithal to stay dominant.

Investor Michael Burry, made famous in the film "The Big Short," recently likened OpenAI to Netscape, which ruled the web browser market in the mid-1990s only to lose to Microsoft's Internet Explorer.

"OpenAI is the next Netscape, doomed and hemorrhaging cash," Burry said recently in a post on X, formerly Twitter.

Researcher Gary Marcus, known for being skeptical of AI hype, sees OpenAI as having lost the lead it captured with the launch of ChatGPT in November 2022.

The startup is "burning billions of dollars a month," Marcus said of OpenAI.

"Given how long the writing has been on the wall, I can only shake my head" as it falls.

Yet ChatGPT was a tech launch like no other, breaking all consumer product growth records and now boasting more than 800 million weekly users across paid and free tiers.

OpenAI's valuation has soared to $500 billion in funding rounds, higher than any other private company.

But the ChatGPT maker will end this year with a loss of several billion dollars and does not expect to be profitable before 2029, an eternity in the fast-moving and uncertain world of AI.

Nonetheless, the startup has committed to paying more than $1.4 trillion to computer chip makers and data center builders for the infrastructure it needs for AI.

The fierce cash burn is raising questions, especially since Google claims some 650 million people use its Gemini AI monthly and the tech giant has massive online ad revenue to back its spending on technology.

Rivals Amazon, Meta and OpenAI-investor Microsoft have deep pockets the ChatGPT-maker cannot match.

A charismatic salesman, OpenAI chief executive Sam Altman flashed rare annoyance when asked about the startup's multi-trillion-dollar contracts in early November.

A few days later, he warned internally that the startup is likely to face a "turbulent environment" and an "unfavorable economic climate," particularly given competitive pressure from Google.

And when Google released its latest model to positive reactions, Altman issued a "red alert," urging OpenAI teams to give ChatGPT their best efforts.

OpenAI unveiled its latest ChatGPT model last week, announcing the same day that Disney would invest in the startup and license characters for use in the chatbot and the Sora video-generation tool.

OpenAI's challenge is inspiring the confidence that the large sums of money it is investing will pay off, according to Foundation Capital partner Ashu Garg.

For now OpenAI is raising money at lofty valuations while returns on those investments are questionable, Garg added.

Yet OpenAI still has the faith of the world's deepest-pocketed investors.

"I'm always expecting OpenAI's valuation to come down because competition is coming and its capital structure is so obviously inappropriate," said Pluris Valuation Advisors president Espen Robak.

"But it only seems to be going up."

Opinions are mixed on whether the situation will result in OpenAI postponing becoming a publicly traded company or instead make its way faster to Wall Street to cash in on the AI euphoria.

Few AI industry analysts expect OpenAI to implode completely, since there is room in the market for several models to thrive.

"At the end of the day, it's not winner take all," said CFRA analyst Angelo Zino.

"All of these companies will take a piece of the pie, and the pie continues to get bigger," he said of AI industry frontrunners.

Also factored in is that while OpenAI has made dizzying financial commitments, terms of deals tend to be flexible and Microsoft is a major backer of the startup.​
 

Ethical AI no longer optional

FROM mobile banking to public services, artificial intelligence is quietly shaping decisions that affect millions of lives in Bangladesh and beyond. These systems promise speed and efficiency, and in many cases they deliver exactly that. Yet without proper governance, they can also exclude, discriminate and erode public trust. As AI adoption accelerates across sectors, ethical use is an immediate necessity. The scale and speed of these tools are unprecedented. But alongside these gains comes a fundamental question: who is accountable when machines influence decisions that directly affect people’s lives?

Using AI ethically means ensuring that technology serves people, not the other way around. In healthcare, AI can assist in detecting diseases earlier and improving treatment outcomes. Yet when systems are trained on biased or incomplete data, certain patients may be misdiagnosed, overlooked or excluded. In education, AI can personalise learning pathways, but without safeguards it can compromise student privacy or widen existing inequalities between those with access to high-quality digital tools and those without.

These risks are no longer hypothetical. The consequences of unethical AI use are already visible. Facial recognition technologies, now deployed by law enforcement and public agencies in many countries, have repeatedly been shown to misidentify people with darker skin tones at higher rates. In documented cases, innocent individuals have been questioned, detained or publicly embarrassed because an algorithm produced a false match. The technology functioned as designed, but the absence of ethical oversight turned efficiency into injustice.

Automated hiring systems provide another warning. Some AI-powered recruitment tools were trained on historical data reflecting past hiring patterns that favoured certain genders, institutions or social backgrounds. As a result, qualified candidates were filtered out silently, never knowing why their applications failed. When discrimination is embedded inside an algorithm, it becomes harder to detect, challenge or correct. Bias does not disappear through automation; it becomes less visible and more difficult to contest.

Financial services perhaps illustrate the stakes most clearly. AI-driven credit scoring and risk assessment systems increasingly decide who can access loans, credit cards or even basic banking services. In many cases, people are denied support without any clear explanation or opportunity to appeal. For small business owners, informal workers and low-income households, such decisions can determine whether they move forward or fall behind. This is a failure of governance.

The issue is not artificial intelligence itself, but the lack of clear guardrails around its use. This is where AI governance becomes critical. In practical terms, governance means deciding who is responsible for automated decisions, how those decisions are reviewed and what happens when systems cause harm. It requires testing for bias, ensuring decisions can be explained in plain language and keeping humans accountable, especially when AI affects livelihoods, rights or access to essential services.

When governance is done well, AI stops being an untouchable โ€˜black boxโ€™ and becomes something people can question and trust. In healthcare, it ensures doctors, not algorithms, make final clinical decisions. In banking, it allows customers to understand and challenge automated credit assessments. In public services, it means citizens are informed when algorithms are used and are given a clear route to appeal outcomes.

Bangladesh offers a timely example of why this matters. As digital financial services expand rapidly, banks and financial institutions are increasingly relying on artificial intelligence and advanced analytics for fraud detection, transaction monitoring, customer profiling and credit assessment. Automation is unavoidable at this scale. However, discussions and research initiatives led by the Bangladesh Institute of Bank Management have repeatedly highlighted a gap: while the use of data analytics and AI is growing across the banking sector, governance, explainability and customer awareness have not always kept pace. The need for stronger internal policies, clearer accountability and closer alignment with regulatory expectations has been consistently emphasised.

For banks, good AI governance can begin with practical steps. Institutions should develop clear internal policies defining where AI can be used, where it should not be used and which decisions must always involve human review, such as loan rejections, account freezes or high-risk customer classifications. Customers should receive plain-language explanations for automated decisions, along with simple and accessible appeal mechanisms. Regular model validation, bias testing and documentation should be treated as standard risk controls, no different from credit, operational or cybersecurity risk management.
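
The following is a minimal sketch of what such a policy could look like in code, assuming a hypothetical decision-routing layer; the decision types, dataclass and function names are invented for illustration and do not describe any bank's or vendor's actual system.

```python
# Illustrative only: route high-impact automated decisions to human review.
# Decision types, names and policies here are assumptions for the sketch.
from dataclasses import dataclass

HIGH_IMPACT_DECISIONS = {"loan_rejection", "account_freeze", "high_risk_classification"}

@dataclass
class Decision:
    decision_type: str
    model_score: float
    explanation: str  # plain-language reason that can be shown to the customer

def route_decision(decision: Decision) -> str:
    """Return how the decision should be handled under the internal AI-use policy."""
    if decision.decision_type in HIGH_IMPACT_DECISIONS:
        # Policy: the model may recommend, but a human officer must confirm.
        return "escalate_to_human_review"
    if not decision.explanation:
        # Policy: no automated outcome without a plain-language explanation.
        return "hold_until_explanation_added"
    return "auto_process"

print(route_decision(Decision("loan_rejection", 0.91, "insufficient income history")))
# -> escalate_to_human_review
```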

These measures are not separate from regulation. They align closely with existing expectations set by Bangladesh Bank, which already emphasises strong risk management, customer protection, internal controls, data security, model validation and information and communication technology governance. Ethical AI governance complements current laws and supervisory frameworks, including data protection obligations and consumer protection requirements. Rather than adding complexity, it can be integrated into existing compliance processes.

The same principles apply beyond banking. In healthcare, AI tools should support clinical judgement, not replace it. In education, student data must be protected and automated assessments must be fair, explainable and open to review. In public services, transparency is essential. Citizens should know when algorithms influence decisions and should have the right to question outcomes that affect their access to services or opportunities.

Effective AI governance does not require massive investment or complex systems. Many safeguards are simple. Organisations can require human oversight for high-impact decisions, conduct regular bias and impact checks and provide clear explanations for automated outcomes. Assigning clear responsibility through a designated AI or ethics lead ensures accountability remains with people, not systems. In some countries, public agencies now maintain algorithm registers that openly list where automated decision-making is used, significantly improving public confidence.
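
As an example of what a regular bias and impact check can amount to in practice, here is a small, self-contained sketch that compares approval rates across groups and flags the model for review when the gap exceeds a chosen threshold; the sample data, group labels and threshold are made up for illustration.

```python
# Illustrative only: flag a model for review if approval rates across groups
# diverge by more than a chosen gap. Data, groups and threshold are made up.
from collections import defaultdict

def approval_rates(outcomes):
    """outcomes: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def needs_review(outcomes, max_gap=0.10):
    """Return (flag, rates): flag is True when the largest gap exceeds max_gap."""
    rates = approval_rates(outcomes)
    return (max(rates.values()) - min(rates.values())) > max_gap, rates

sample = [("urban", True), ("urban", True), ("urban", False),
          ("rural", False), ("rural", False), ("rural", True)]
print(needs_review(sample))  # (True, {'urban': 0.66..., 'rural': 0.33...})
```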

Internationally, momentum is building. Bodies such as the Organisation for Economic Co-operation and Development and the United Nations Educational, Scientific and Cultural Organization have developed principles centred on fairness, transparency and human oversight. The Institute of Electrical and Electronics Engineers has issued ethical guidelines for responsible AI design. In Europe, the General Data Protection Regulation protects individuals from harmful automated decision-making and misuse of personal data. More recently, the ISO/IEC 42001 standard has provided organisations with a structured way to manage AI risks, similar to approaches used for financial or cybersecurity risks.

AI is advancing faster than laws, institutions and public awareness can adapt. Without effective governance, weak oversight risks deepening inequality, damaging trust and undermining the benefits of digital transformation. Governments must balance innovation with protection. Organisations must embed ethical thinking into AI systems from the outset, not as an afterthought. Professionals must remember that AI is a tool, not a substitute for human judgement. Oversight, accountability and common sense still matter. With strong governance, transparency and shared values, AI can serve society, supporting inclusion, strengthening trust and delivering real benefits across Bangladesh and beyond.

BM Zahid ul Haque is an information security officer.​
 

Announcing AI Protection: Security for the AI era

Archana Ramamoorthy
Published: 14 Dec 2025, 19:49



As AI use increases, security remains a top concern, and we often hear that organisations are worried about risks that can come with rapid adoption. Google Cloud is committed to helping our customers confidently build and deploy AI in a secure, compliant, and private manner.

Today, weโ€™re introducing a new solution that can help you mitigate risk throughout the AI lifecycle. We are excited to announce AI Protection, a set of capabilities designed to safeguard AI workloads and data across clouds and models โ€” irrespective of the platforms you choose to use.

AI Protection helps teams comprehensively manage AI risk by:
  • Discovering AI inventory in your environment and assessing it for potential vulnerabilities​
  • Securing AI assets with controls, policies, and guardrails​
  • Managing threats against AI systems with detection, investigation, and response capabilities​

AI Protection is integrated with Security Command Center (SCC), our multicloud risk-management platform, so that security teams can get a centralised view of their AI posture and manage AI risks holistically in context with their other cloud risks.

AI Protection helps organisations discover AI inventory, secure AI assets, and manage AI threats, and is integrated with Security Command Center.

Discovering AI inventory

Effective AI risk management begins with a comprehensive understanding of where and how AI is used within your environment. Our capabilities help you automatically discover and catalog AI assets, including the use of models, applications, and data โ€” and their relationships.

Understanding what data supports AI applications and how itโ€™s currently protected is paramount. Sensitive Data Protection (SDP) now extends automated data discovery to Vertex AI datasets to help you understand data sensitivity and data types that make up training and tuning data. It can also generate data profiles that provide deeper insight into the type and sensitivity of your training data.
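
As a rough illustration of the kind of classification this involves, the sketch below inspects a sample record with the Sensitive Data Protection (Cloud DLP) client library and lists the sensitive information types it finds. The project ID and sample text are placeholders, and the managed discovery of Vertex AI datasets described above is configured through the product itself rather than through this call.

```python
# Illustrative sketch: classify a sample of text with the Sensitive Data
# Protection (Cloud DLP) inspection API. The project ID and sample record are
# placeholders; automated discovery of Vertex AI datasets is configured in the
# product and is not shown here.
from google.cloud import dlp_v2

def inspect_sample(project_id: str, text: str):
    client = dlp_v2.DlpServiceClient()
    response = client.inspect_content(
        request={
            "parent": f"projects/{project_id}/locations/global",
            "inspect_config": {
                "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
                "min_likelihood": dlp_v2.Likelihood.POSSIBLE,
            },
            "item": {"value": text},
        }
    )
    # Each finding names a sensitive info type detected in the sample.
    return [(f.info_type.name, f.likelihood.name) for f in response.result.findings]

print(inspect_sample("my-project", "Contact: someone@example.com, +880 1700 000000"))
```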

Once you know where sensitive data exists, AI Protection can use Security Command Centerโ€™s virtual red teaming to identify AI-related toxic combinations and potential paths that threat actors could take to compromise this critical data, and recommend steps to remediate vulnerabilities and make posture adjustments.

Securing AI assets

Model Armor, a core capability of AI Protection, is now generally available. It guards against prompt injection, jailbreak, data loss, malicious URLs, and offensive content. Model Armor can support a broad range of models across multiple clouds, so customers get consistent protection for the models and platforms they want to use โ€” even if that changes in the future.

Model Armor provides multi-model, multicloud support for generative AI applications.

Today, developers can easily integrate Model Armorโ€™s prompt and response screening into applications using a REST API or through an integration with Apigee. The ability to deploy Model Armor in-line without making any app changes is coming soon through integrations with Vertex AI and our Cloud Networking products.
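
Since the post does not spell out the API surface, the sketch below only shows where prompt and response screening sit in an application's request flow; screen_prompt and screen_response are hypothetical stand-ins for calls to a screening service such as Model Armor, not its actual REST API.

```python
# Illustrative flow only: where prompt and response screening fit in an app.
# screen_prompt / screen_response are hypothetical stand-ins for calls to a
# screening service such as Model Armor; they are not its real API.
def screen_prompt(prompt: str) -> bool:
    """Pretend screening call: True means the prompt is safe to forward."""
    return "ignore previous instructions" not in prompt.lower()

def screen_response(text: str) -> bool:
    """Pretend screening call: True means the model output is safe to return."""
    return "BEGIN PRIVATE KEY" not in text

def call_model(prompt: str) -> str:
    return f"(model output for: {prompt})"  # placeholder for the real LLM call

def handle_user_prompt(prompt: str) -> str:
    if not screen_prompt(prompt):       # screen before the model sees the input
        return "Request blocked by policy."
    reply = call_model(prompt)
    if not screen_response(reply):      # screen before the user sees the output
        return "Response withheld by policy."
    return reply

print(handle_user_prompt("Summarise our AI governance policy"))
```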

"We are using Model Armor not only because it provides robust protection against prompt injections, jailbreaks, and sensitive data leaks, but also because we're getting a unified security posture from Security Command Center. We can quickly identify, prioritise, and respond to potential vulnerabilities โ€” without impacting the experience of our development teams or the apps themselves. We view Model Armor as critical to safeguarding our AI applications and being able to centralise the monitoring of AI security threats alongside our other security findings within SCC is a game-changer," said Jay DePaul, chief cybersecurity and technology risk officer, Dun and Bradstreet.

Organisations can use AI Protection to strengthen the security of Vertex AI applications by applying postures in Security Command Center. These posture controls, designed with first-party knowledge of the Vertex AI architecture, define secure resource configurations and help organisations prevent drift or unauthorised changes.

Managing AI threats

AI Protection operationalises security intelligence and research from Google and Mandiant to help defend your AI systems. Detectors in Security Command Center can be used to uncover initial access attempts, privilege escalation, and persistence attempts against AI workloads. New AI Protection detectors, based on the latest frontline intelligence, are coming soon to help identify and manage runtime threats such as foundational model hijacking.

"As AI-driven solutions become increasingly commonplace, securing AI systems is paramount and surpasses basic data protection. AI security โ€” by its nature โ€” necessitates a holistic strategy that includes model integrity, data provenance, compliance, and robust governance,โ€ said Dr. Grace Trinidad, research director, IDC.

โ€œPiecemeal solutions can leave and have left critical vulnerabilities exposed, rendering organisations susceptible to threats like adversarial attacks or data poisoning, and added to the overwhelm experienced by security teams. A comprehensive, lifecycle-focused approach allows organisations to effectively mitigate the multi-faceted risks surfaced by generative AI, as well as manage increasingly expanding security workloads. By taking a holistic approach to AI protection, Google Cloud simplifies and thus improves the experience of securing AI for customers," she said.

Complement AI Protection with frontline expertise

The Mandiant AI Security Consulting Portfolio offers services to help organisations assess and implement robust security measures for AI systems across clouds and platforms. Consultants can evaluate the end-to-end security of AI implementations and recommend opportunities to harden AI systems. We also provide red teaming for AI, informed by the latest attacks on AI services seen in frontline engagements.

Building on a secure foundation

Customers can also benefit from using Google Cloudโ€™s infrastructure for building and running AI workloads. Our secure-by-design, secure-by-default cloud platform is built with multiple layers of safeguards, encryption, and rigorous software supply chain controls.

For customers whose AI workloads are subject to regulation, we offer Assured Workloads to easily create controlled environments with strict policy guardrails that enforce controls such as data residency and customer-managed encryption. Audit Manager can produce evidence of regulatory and emerging AI standards compliance. Confidential Computing can help ensure data remains protected throughout the entire processing pipeline, reducing the risk of unauthorised access, even by privileged users or malicious actors within the system.

Additionally, for organisations looking to discover unsanctioned use of AI, or shadow AI, in their workforce, Chrome Enterprise Premium can provide visibility into end-user activity as well as prevent accidental and intentional exfiltration of sensitive data in gen AI applications.

*Archana Ramamoorthy is the senior director of product management at Google Cloud Security.​
 
